I recently decided to take on the challenge of self-hosting and curating my own music collection. I originally looked at Lidarr, since I’m already a big fan of Radarr and Sonarr, but it wasn’t really what I was after: I’m not often seeking out full albums, and more often find my music by listening to single tracks from Spotify’s Discover Weekly playlist. I needed a solution that would let me replicate that experience while hosting my own MP3s, and that would ideally be entirely automated.
I currently have the following setup running on a VPS:
- Azuracast - This gives me a streaming radio station that cycles through my entire library 24/7
- Navidrome - This provides the Spotify-like interface where I can play specific tracks, albums, or playlists
I bootstrapped my library with a Python script that parsed a list of Spotify URLs and downloaded all of the tracks with the spotdl library. This let me grab my liked tracks, the playlists I had created, and a large number of albums I wanted.
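For anyone who wants a starting point, here’s a rough sketch of what that bootstrap script looks like. It shells out to the spotdl CLI; the `urls.txt` filename and the output folder are just placeholders for whatever you use:

```python
import subprocess
from pathlib import Path

def build_spotdl_command(url: str, out_dir: str) -> list[str]:
    # spotdl v4's "download" subcommand takes a track/playlist/album URL;
    # --output controls where the downloaded files land
    return ["spotdl", "download", url, "--output", out_dir]

def bootstrap_library(url_file: str, out_dir: str = "music") -> None:
    # read one Spotify URL per line, skipping blanks and comments
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    for line in Path(url_file).read_text().splitlines():
        url = line.strip()
        if not url or url.startswith("#"):
            continue
        subprocess.run(build_spotdl_command(url, out_dir), check=True)
```

Point it at a text file of your liked tracks, playlists, and albums and let it churn.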
I then used ChatGPT to write two Python scripts:
- The first script runs via cron every Monday and uses spotdl to grab the contents of my Discover Weekly playlist from Spotify. It puts all of the files into a folder named for that week’s date and also creates a playlist file, so I can easily browse that week’s playlist in Navidrome and decide what to keep. It also sends me an email on completion or error.
- The second script is a bit more complex. It produces the same end result, but for all of my LastFM recommendations. It spins up a headless Chrome browser with Selenium in a Docker container, logs into my LastFM account, parses each recommendation, and then uses pytube to download the video links, since LastFM links directly to YouTube videos. This list should change as I continue scrobbling via Navidrome and other sources, but I still need to determine how often the cron job should run.
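The weekly job boils down to three small steps: make a dated folder, let spotdl fill it, and write an .m3u playlist next to the files. A minimal sketch, with the playlist URL left as a parameter and the email step assuming an MTA on localhost (swap in your own mail setup):

```python
import smtplib
import subprocess
from datetime import date
from email.message import EmailMessage
from pathlib import Path

def week_folder(base: str) -> Path:
    # one folder per run, named for the date the cron job fires (Mondays)
    folder = Path(base) / date.today().isoformat()
    folder.mkdir(parents=True, exist_ok=True)
    return folder

def write_m3u(folder: Path) -> Path:
    # a bare .m3u is just one file name per line
    playlist = folder / f"{folder.name}.m3u"
    tracks = sorted(p.name for p in folder.glob("*.mp3"))
    playlist.write_text("\n".join(tracks) + "\n")
    return playlist

def notify(subject: str, body: str, to_addr: str) -> None:
    # assumes a mail server listening on localhost; adjust for your setup
    msg = EmailMessage()
    msg["Subject"], msg["To"], msg["From"] = subject, to_addr, to_addr
    msg.set_content(body)
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)

def fetch_weekly(playlist_url: str, base: str = "discover") -> Path:
    folder = week_folder(base)
    subprocess.run(
        ["spotdl", "download", playlist_url, "--output", str(folder)],
        check=True,
    )
    return write_m3u(folder)
```

Navidrome picks the dated folder up on its next library scan, and the .m3u shows up as a browsable playlist.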
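The scraping half of the second script is tied to my account, but the two pieces that generalize are filtering the scraped links down to YouTube ones and handing them to pytube. Here’s a sketch of just those pieces — the Selenium login/parsing step is omitted, and pytube is imported lazily so the link filter works on its own:

```python
from urllib.parse import urlparse

def extract_youtube_links(hrefs: list[str]) -> list[str]:
    # LastFM's play buttons link straight to YouTube; keep only those
    # links, de-duplicated and in page order
    seen: set[str] = set()
    links: list[str] = []
    for href in hrefs:
        host = urlparse(href).netloc.lower()
        if host.endswith("youtube.com") or host == "youtu.be":
            if href not in seen:
                seen.add(href)
                links.append(href)
    return links

def download_audio(url: str, out_dir: str) -> None:
    # imported here so the helper above runs without pytube installed
    from pytube import YouTube

    # grab the highest-bitrate audio-only stream pytube can find
    stream = YouTube(url).streams.filter(only_audio=True).order_by("abr").last()
    stream.download(output_path=out_dir)
```

In the full script the hrefs come out of Selenium, with the driver pointed at the headless Chrome container via `webdriver.Remote` rather than a locally installed browser.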
My next step is figuring out how to connect to Azuracast/Navidrome using the many Subsonic-compatible clients, so I can have mobile playback and features like offline caching. I’m currently looking at substreamer for Android.
I’d also like to look into a more seamless way of picking out the tracks I want to keep and discard from the playlists in Navidrome. I’m considering writing something to check its SQL database for liked tracks in each playlist and automatically move those into the main folder/playlist that Azuracast is playing from.
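If that pans out, the core of it would be one query against Navidrome’s database plus a file move. The table and column names below (`annotation`, `media_file`, `starred`) are my best guess at the schema, so verify them against your own `navidrome.db` before letting anything move files:

```python
import shutil
import sqlite3
from pathlib import Path

def move_starred_tracks(db_path: str, dest_dir: str) -> list[str]:
    # assumed schema: annotation rows flag starred items, media_file
    # rows hold the on-disk path -- inspect your own DB to confirm
    con = sqlite3.connect(db_path)
    rows = con.execute(
        """
        SELECT mf.path FROM annotation a
        JOIN media_file mf ON mf.id = a.item_id
        WHERE a.item_type = 'media_file' AND a.starred = 1
        """
    ).fetchall()
    con.close()

    Path(dest_dir).mkdir(parents=True, exist_ok=True)
    moved = []
    for (path,) in rows:
        src = Path(path)
        if src.exists():
            # move the liked track into the folder Azuracast plays from
            shutil.move(str(src), str(Path(dest_dir) / src.name))
            moved.append(src.name)
    return moved
```

Run from cron, this would sweep anything I’ve liked in a weekly playlist into the main rotation automatically.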
This whole setup took me only a couple of days to create, and largely relied on ChatGPT to write the scripts and Dockerfiles. I’m a capable programmer, but GPT-4 is absolutely OP if you know what you’re trying to accomplish and how to debug its mistakes. That Selenium script only took me an hour from idea to completion, and I never modified the code by hand — only prompted it for corrections and additions.
If anyone is interested, I’ve uploaded all the scripts to a gist; you just need to go through and update them with your own credentials/URLs.
Honestly, just go for it — it’s pretty straightforward! I’d share my chat transcript, but at points it contained things like my API keys.
I can however give some excerpts from the conversation:
This was actually my first time using the “You are a senior software engineer” bit, but I’ve heard a few people saying it works. I came across the idea for using Selenium from this prompt:
In fact here is the chat transcript for that one. Once I got to the end of this transcript I decided to try out the code. I realized selenium was using my installed browser and that wasn’t going to work once I moved this to a server. That was when I moved into a new chat that contains what became the final script, where I started the conversation with this prompt:
It was in this conversation that I learned about using the headless chrome container. Everything I did was a combination of prompting for additions and reading the documentation on what that was capable of.
I will regularly ditch a chat thread and carry the output from a previous one into a new one. The model takes the prior context of the conversation into account when generating, and sometimes I want to pivot or focus in on a specific approach.
Once I had a more focused idea of what the tech stack was going to be, it was just a matter of prompting for what I needed, testing, feeding back any errors for corrections, noticing something wrong (like it not appending .mp3 to the filenames) or something else I wanted to change, and prompting for that in plain English.
There are all kinds of people saying you should use X method and Y approach, but I find I get great results just by being clear and concise about what I’m looking for, as I would be when speaking to another developer.
Thanks, I’ll definitely mess around with it soon!