Thursday, 22 May 2025
Emma Garland
15 minute read
Have you ever woken up from a dream with a song in your head, only for it to slip out of reach?
Paul McCartney famously dreamt up “Yesterday” for The Beatles, as did Jimi Hendrix with “Purple Haze”. They and others managed to craft, publish and release these songs for our pleasure. The inspiration for musical pieces comes from many different places - be it confessional writing, strumming a few chords and layering on tracks, or just letting some notes evolve. Humans love tunes and have created music for as long as we can find evidence. When we enjoy music, our brain's reward and motivation centres light up (Harvard, 2024).
My own musical journey is that of a hobbyist - a below-average drummer, a noob keyboard player who can strum a few guitar chords and sing along. I also love creative writing but have never dabbled in the realms of writing and publishing entire songs. For many people, songwriting is hard to achieve, and I have the utmost admiration for those who can just create, publish and play live a song they have hand-crafted. If you're lucky, you may have a bunch of jamming buddies you can collaborate with. But what if none of you have time to practise anymore?
Years ago in AI terms (2021), I gave a brown bag lunch talk called “Writing Music with AI”. At the time, the publicly available tools for creation were basic, and the output was maybe a cute MIDI file but nothing sophisticated. Even so, it was clear that AI had the potential to become a virtual jamming buddy, to bounce riffs off and unlock creative concepts. Back then I asked, “Could AI help me write an awesome hit song?”. Fast-forward to May 2025 and AI is now in the realm of enabling extremely impressive and sophisticated music generation.
But let’s just pause. Computer-generated music goes back further than you might think! In 1957, the Illiac Suite (“String Quartet No. 4”) became the first score composed by a computer. In 2016, Sony published “Daddy’s Car”, the first pop song ever written by AI. It is clear that music generation has progressed incredibly fast from that output to where we are now in 2025.
Other music generation tools are available (Udio, BandLab's Song Starter and many more), but Suno is my favourite, and I am constantly publishing or remastering new songs under my AI alter ego, Lucidshadowman.
Suno lets you pitch an idea in one sentence describing a song's theme and style. The UI is friendly and easy to start with, in a similar vein to the AI app builder Lovable.dev.
Hit generate and you get two AI-baked songs. You can listen and tweak, customise the prompts, and regenerate. Whilst you can generate lyrics with it, I usually write the lyrics myself, and it has unlocked the ability to hear my words in a song tailored to a specific style or voice.
The songs it generates will impress you, and you will want to tinker. You can finely tune the style - “Confessional, female, sadcore” or “Spanish classical guitar” - any genre you can imagine. There is ever more functionality to edit songs, replace sections, remaster to a newer model, and even do a cover of a song.
You can also develop artist personas, something I'm leaning into more as I find unique or pure voices, like "Sad sixties singer" who you can send to sing a cover of your AI song in a dingy bar somewhere.
There are also Android and iOS apps, although I just use the Suno website for more functionality. I've even started to add song cover art with OpenAI’s video generator, Sora. My friend has combined music and videos with an AI music/video project called Eternity Arcade, and I even have the t-shirt. Eternity Arcade produce retrowave synth-pop music; their YouTube channel gets many views, and my own AI persona has even collaborated with them on a song. So perhaps we can still have jamming buddies after all?
It's not a one-click process to create a Suno song, and as always with AI, things come at a cost. Music generation tools have had multi-million dollar funding rounds. Japan-based Amadeus Code (one of the first AI music tools I used) raised $1.8 million in 2019. UK startup JukeDeck raised £2.5 million and has since been acquired by TikTok. And Suno announced in 2024 that they had raised $125 million.
I pay monthly for Suno, which also gets you a lot more prompts - as always, it's very easy to burn through credits with tweaks to the AI-generated output. I've never planned to sell a song, but if I wanted to, I'd need to be on the paid Suno version to use it commercially.
And here we get into the awkwardness of commercial use and ownership. Suno and Udio are being sued for alleged copyright violation.
What are the legal implications of training on artists' published songs? As with creative works of writing, art, and code, it could be argued that artists never explicitly consented to where their work would end up when it was published. And from the hearing, it sounds like the complaint is focused on the large body of music files Suno and other tools would have been trained on. In fact, returning to Paul McCartney: whilst he and Ringo Starr have used AI to enable a final Beatles song, he has joined a number of voices speaking out against UK copyright law proposals (BBC, 2024). These proposals would allow AI companies to use online material for data mining without being concerned with copyright. And as it happens, Paul McCartney had originally wondered if he had accidentally plagiarised “Yesterday” from someone else.
“I think AI is great, and it can do lots of great things… [but] it shouldn't rip creative people off. There's no sense in that.”
Some argue that as humans we are influenced subconsciously by music we’ve already heard. Artists have been sued for their songs sounding similar before - like Ed Sheeran, who was found not liable in 2023 for infringing the copyright of Marvin Gaye's “Let’s Get It On”. But AI is ingesting big data and training at scale to create content. Is that a fair comparison? And what if you release a hit single driven by data loaded from thousands of other songs? Who should get a cut of the royalties?
Considering how much I love Suno, I have a conflicting ethical concern here. I would love to be generating music on a model trained on a bucket of data whose artists had fully approved how it would be used in future. And I would never expect it to fully replace live, human musicians. We can only imagine how much AI will be baked into future music production tools for big-name artists, too.
I have always been on the side that generative AI is a creativity tool rather than a direct replacement for artists. As I say in my (now rather aged) 2024 24 Days in Umbraco article on developers and AI, it is like rolling story dice - symbols like “glasses”, “egg” and “emerald necklace” triggering inspiration for a story.
After a year of Suno music generation experiments, I'm still completely in this camp. By pulling the fruit machine handle, you might luck out and mine a gem of a song… or you might get some strange triple-voiced echo with weird outtakes at the end. I have a whole playlist of random songs featuring creepy laugh-crying at the end, or a full spoken sentence inserted as a surprise. Saying that, with each Suno model update things have been getting better.
I'm wary of big model changes, especially after ChatGPT’s overly sycophantic model was recently rolled back. If I'm happy with a model, I don't usually want it to change, but at least Suno lets you choose which model to generate with. The increase in audio quality made the recent update to Suno model 4.5 worth it.
We don't get to know the intimate technical details of Suno's main model. They do have an open-source, transformer-based text-to-audio model called Bark, available under the MIT licence. From the Lightspeed Venture Partners interview with Suno's CEO Mikey Shulman, we can understand that Suno uses at least a combination of transformer and diffusion models.
But I should emphasise that while you can generate songs with AI in seconds, it can take hours of crafting to produce the final output you want: a song that makes you feel something.
Can AI identify and generate chill-inducing melodies? Futurist Matthew Griffin’s 2017 article about Sony’s AI music generation tool “Flow Machines” contains some insightful science about the art of making music. Listening to intense music that gives us chills releases dopamine, and this anticipation of emotional peaks creates a reward loop.
Now, my own experience isn’t a scientific study, but I can confirm that I have generated some Suno music that has given me chills. However, I have had more success with the typical repetition patterns of pop/rock than with less widely known musical genres, as this 2018 article by AIVA explores. I feel like I know when a song hits the right emotions, feels catchy and pleasing, and just “works”. A human touch is still needed to collaborate with the AI, tweak the outputs, and hand-curate the results into a quality production.
I’m a developer with a love for creating, and AI unlocks that for me. I listen to Suno alongside Spotify now… except when I need Frozen songs for my daughter Amelia. Saying that, I've even generated children's songs. Having a 3-year-old help me develop the lyrics has been insightful! Turns out “trains are so choo choo”. She genuinely loves the generated songs, asking for them over and over. They are catchier than Cocomelon, and hearing her name in the lyrics is delightful for her. I presume I'm not feeding her any junk food by playing AI-generated songs; if anything, it is helping her realise her creative power. But the AI songs do have a certain catchiness that might make a parent uneasy…
Suno lets you generate music without needing to spin up a full band, or even use a computer or get out of bed. It was my saviour during my son’s newborn era last year, and I'm sure it's a joy for people who wouldn't normally have the time, resources, or ability to create entire songs. As Suno say, they wanted to create “a future where anyone can make music”. Suno has given life to lyrics I'd buried away for years. I don't need to rely on others turning up to band practice. But I do need online access, paid credits, and time.
I'll likely never feel the vibe of evolving my song during jam sessions with band members, but this feels closer than I've ever been. I have an exciting vision of hearing one of my own Suno songs played by a live, flesh-and-blood band one day… although I expect seeing my songs played by a holographic version of a band I’ve generated is more likely.
Last updated: Friday, 23 May 2025
Head of Umbraco Web Development
She/her
Emma heads up our Umbraco Web Development team. She is a software engineer and multiple Umbraco MVP at Rock Solid Knowledge, with an interest in AI integrations.