Priyanshi Prasad

AI in Music: Is It Here to Stay?

With the unprecedented rise of artificial intelligence across many facets of our lives, from self-driving cars to virtual assistants, we are confronted with questions about how much influence we are willing to give this powerful technology over our futures and the wider world. The music industry in particular is becoming a site of that reckoning, as AI music-production tools and virtual avatars bring with them a host of ethical and legal concerns.


Influential artists such as Drake, Lil Wayne and Selena Gomez have already called out the growing presence of artificial intelligence in the music industry, making it clear that this is a reality the industry will eventually have to navigate. The prevailing view among artists skews negative: many major artists claim AI will take jobs away from music producers and sound mixers as well as singers and songwriters. Meanwhile, major players in the industry, including Warner Music Group and Sony, have expressed interest in embracing and investing in artificial intelligence for music production, leaving the future of these careers uncertain. Underlying this argument is a fear that music production will become entirely computer-generated, divorced from the longstanding human-intensive process we know it to be today.


AI tools such as Revocalise and Speechify Voice Cloning can take voice samples of a person and reconstruct their voice to fit any spoken media. This raises a slew of questions about the ethics of imitating a person’s vocal likeness without their explicit consent in the name of pushing the envelope or going viral. A prime example is the song Heart on My Sleeve, created by TikTok user Ghostwriter997, which uses AI-generated vocals of Drake and The Weeknd. The song garnered over 600,000 streams on Spotify and 15 million views on TikTok, making it a viral hit. Invoking copyright infringement, Universal Music Group (UMG) was able to have the song removed from streaming platforms over its unauthorized use of the artists’ vocal likenesses.


The song, however, is only a small part of a growing trend of deepfaked, AI-created tracks that mimic the voices of well-known musicians performing versions of other artists’ songs. In an age of misinformation (incomplete and/or inaccurate), these covers travel far and wide, going viral despite having been made without the consent of the artists in question. Legality aside, deepfake songs may damage the professional and creative lives of artists, who may grow wary of being imitated on tracks that diverge from their own styles, artistic visions, or personal beliefs.


With no adequate regulations in place to cope with the rise of artificial intelligence, music industry giants like UMG are struggling to respond to AI-generated songs that frequently mimic the style of artists they represent. In an interview with the BBC, Jani Ihalainen, an intellectual property lawyer in the UK, said, “Current legislation is nowhere near adequate to address deepfakes and the potential issues in terms of IP and other rights.”


Artificial intelligence in art is a double-edged sword: as much a threat to the music industry as an opportunity for innovation and the discovery of new creative avenues. As AI shows no sign of slowing its spread into art creation (see Midjourney and DALL-E), major music industry players are also looking to invest in AI-generated artists. This raises further ethical questions about cultural and racial appropriation. A case in point is FN Meka, a virtual rapper signed to Capitol Records. The avatar was voiced by a human, but its lyrics and melodies were partly AI-generated. Although the avatar was presented as a Black artist, its music relied heavily on racial stereotypes, and after widespread public backlash over the lack of transparency about who was truly creating the art, Capitol Records dropped the project entirely.


But to what extent does this viewpoint hold true? A common misconception about AI music is that it is completely divorced from human intervention, conjured out of thin air from huge databases of previously released music; similar claims come up in conversations about image- and text-based AI models. In practice, decisions about a song’s instrumentation, key, mood, and BPM are made by the artist, and the AI generates possibilities for where those ideas can go. It can also serve as a creative jumping-off point from which artists source inspiration. The case of Taryn Southern’s album I Am AI makes the argument for including this new technology in the process of creating art.


The first of its kind, I Am AI was created using the AI composition platform Amper Music, with all chords, production work, and instrumentation generated by the software. In an interview with CNN Tech, Southern explained that working with AI is like “having a new songwriting partner who doesn’t get tired and has this endless knowledge of music making. But I feel like I get to own my vision; I iterate and choose what I like and don’t like.”
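To make that division of labour concrete, here is a deliberately toy Python sketch of the workflow Southern describes: the artist fixes the brief (key, mood, tempo), a simple statistical model proposes chord progressions within it, and the human auditions the options and chooses. Every name, table, and parameter below is invented for illustration; it does not depict Amper Music or any other real platform’s API.

```python
import random

# Toy "model": weighted transitions between diatonic chords in a major key.
# A real generative system would learn far richer patterns from training data.
TRANSITIONS = {
    "I":    [("IV", 0.3), ("V", 0.3), ("vi", 0.25), ("ii", 0.15)],
    "ii":   [("V", 0.6), ("IV", 0.2), ("vii°", 0.2)],
    "IV":   [("V", 0.4), ("I", 0.35), ("ii", 0.25)],
    "V":    [("I", 0.6), ("vi", 0.3), ("IV", 0.1)],
    "vi":   [("IV", 0.4), ("ii", 0.35), ("V", 0.25)],
    "vii°": [("I", 0.8), ("vi", 0.2)],
}

def suggest_progressions(start="I", length=4, n_candidates=3, seed=None):
    """Generate several candidate chord progressions for the artist to audition."""
    rng = random.Random(seed)
    candidates = []
    for _ in range(n_candidates):
        progression = [start]
        for _ in range(length - 1):
            chords, weights = zip(*TRANSITIONS[progression[-1]])
            progression.append(rng.choices(chords, weights=weights)[0])
        candidates.append(progression)
    return candidates

# The artist sets the brief (the "decisions" the article describes) ...
brief = {"key": "C major", "mood": "wistful", "bpm": 92}

# ... the tool proposes options ...
for i, prog in enumerate(suggest_progressions(seed=7), start=1):
    print(f"Candidate {i} ({brief['key']}, {brief['bpm']} BPM): {' - '.join(prog)}")

# ... and the human iterates, keeps, edits, or discards; the machine never decides alone.
```

Even in this trivial form, the point holds: the software only fills in possibilities inside constraints a person has already chosen.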


AI also has the potential to change the landscape and social composition of the music industry. With big recording companies monopolizing professional music production, AI is making music creation more accessible through easy-to-use tools that anyone can pick up without expensive equipment or other barriers to entry.


Whether the industry can embrace the new creative avenues made possible by AI while navigating the muddy waters of deepfakes and copyright infringement, only time will tell. Even The Beatles harnessed the technology to release their final song, Now and Then (November 2023). Whether AI music platforms can walk the line between inspiring creativity and infringing on artists’ rights or veering into appropriation can only begin to be determined through long-overdue, productive discussions about their ethics, the regulations they require, and their implications for the industry at large.

