The Beatles Were Then, AI Music is Now

When John Lennon and George Harrison died, so did the possibility of an iconic Beatles reunion. Unless, of course, they arose from their graves. But that was then, and this is now.

Diego Acevedo

Peter Jackson, the director of “The Beatles: Get Back,” a documentary reviving footage of the Beatles’ “Let It Be” sessions, and his team developed a technology that uses machine learning to isolate Lennon’s vocals from a cassette tape demo, making the song “Now and Then” possible. Despite the unconventional methods of its creation, the song hit No. 1 this week on the Billboard Digital Song Sales chart.

This foray into creating new music with AI has the potential to revolutionize mainstream music, but artists need to be protected from becoming just another soundbite.

AI music programs analyze an artist’s vocal and compositional profile, recognizing patterns in the sound waves and building a model from those patterns. Though the software can replicate artists’ styles, it cannot replicate the creativity they put into their music. AI can only copy and predict from the data it already has.

On a more everyday level, even the average TikTok user can make popular singers and cartoon characters sing hit songs with AI-powered websites such as voicify.ai. For example, you could make Frank Sinatra sing Dua Lipa’s “Levitating,” or Sandy Cheeks from SpongeBob SquarePants belt out Carrie Underwood’s “Before He Cheats.”

Even industry leaders are jumping on the trend. YouTube recently launched its Music AI Incubator, through which it will work with artists like Taylor Swift and Bad Bunny to combine AI and traditional music production. However, Bad Bunny has spoken out against AI-generated songs that use his voice and is vehemently opposed to fans creating them. YouTube CEO Neal Mohan has admitted that protections for artists are not yet in place, though they might be in the future. With even the general public creating AI music, the time to enact regulations is now.

However, positive change could arise from big corporations teaming up with music labels to explore generative AI. Finding a way for artists’ creativity to coexist with their voice data is essential to ensure they are not taken advantage of.

Where does this leave the artists and composers when it seems like AI could take their expressive voice away?

Nobody knows for sure, especially when copyright laws are tricky to navigate in this unprecedented context. Yet if no legal balance is found between the rights of artists and those of AI users, creative authenticity and distinct artistic voices are threatened.

Some AI platforms, like OpenAI, currently offer to cover copyright infringement costs their users might incur. Other corporations may follow suit, but the fact remains that random TikTok users with no music production experience can make Harry Styles sing “Baby Shark” without his consent or knowledge.

Without consent, this whole process is, ethically speaking, plagiarism.

These artists’ musical agency is stripped from them when they have no say in how their work is being used. On top of that, if music labels use this technology to their advantage, these artists may no longer have a leg to stand on when negotiating contracts, leaving them at an even bigger disadvantage.

The line between genuine personal entertainment and profit-driven infringement is blurred by AI’s impact on the music industry. There will always be people using this software to turn a profit without being held accountable.

Legally, the future consequences of AI music remain uncertain, but we do know that it is here to stay, a thought both novel and daunting. Protecting artists and composers from becoming obsolete is necessary. We need to push for laws protecting the works of musicians so we can all responsibly use this developing technology.

Nevertheless, the future of AI music is going to be a long and winding road.