Tirhakah Love 

Do androids dream of electric beats? How AI is changing music for good

Breakthroughs in artificial intelligence make music composition easier than ever – because a machine is doing half the work. Could computers soon go it alone?

Replicator, by AI music pioneer Rama Allen. Photograph: Reeps One

The first testing sessions for SampleRNN – artificially intelligent software originally developed by machine-learning researcher Dr Soroush Mehri and expanded by CJ Carr and Zack Zukowski, aka the Dadabots – sounded more like a screamo gig than a machine-learning experiment. Carr and Zukowski hoped their program could generate full-length black metal and math rock albums after being fed small chunks of sound. The first trial consisted of encoding a few Nirvana a cappellas and feeding them in. “When it produced its first output,” Carr tells me over email, “I was expecting to hear silence or noise because of an error we made, or else some semblance of singing. But no. The first thing it did was scream about Jesus. We looked at each other like, ‘What the fuck?’” But while the platform could convert Cobain’s grizzled pining into bizarre testimonies to the goodness of the Lord, it couldn’t keep a steady rhythm, much less create a coherent song.

Artificial intelligence is already used in music by streaming services such as Spotify, which scan what we listen to so they can better recommend what we might enjoy next. But AI is increasingly being asked to compose music itself – and getting it to do so coherently is the problem confronting many more computer scientists besides the Dadabots.

Musicians – popular, experimental and otherwise – have been using AI to varying degrees over the last three decades. Pop’s chief theoretician, Brian Eno, used it not only to create endlessly perpetuating new music on his recent album Reflection but to render an entire visual experience in 2016’s The Ship. The arrangements on Mexican composer Ivan Paz’s album Visions of Space, which sounds a bit like an intergalactic traffic jam, were generated by algorithms he created himself. Most recently, producer Baauer – who topped the US charts with his 2012 viral track Harlem Shake – made Hate Me with Lil Miquela, an artificial digital Instagram avatar. The next step for synthetic beings like these is to create music on their own – that is, if their makers can get the software to shut up about Jesus.

The first computer-generated score, a string quartet called the Illiac Suite, was developed in 1957 by Lejaren Hiller and Leonard Isaacson, and was met with massive controversy in the classical community. Composers at the time were intensely purist. “Most musicians, academics or composers, have always held this idea that the creation of music is innately human,” Californian music professor David Cope explains. “Somehow the computer program was a threat to that unique human aspect of creation.” Fast forward to 1980, and after an insufferable bout of composer’s block, Cope began building a computer program that could read music from a database written in numerical code. Seven years later, he’d created Emi (Experiments in Musical Intelligence, pronounced “Emmy”). Cope would compose a piece of music and pass it along to his staff to transcribe the notation into code for Emi to analyse. After many hours of digestion, Emi would spit out an entirely new composition written in code, which Cope’s staff would re-transcribe on to staves. Emi could respond not just to Cope’s music, but also take in the sounds of Bach, Mozart and other classical staples and conjure a piece to fit their compositional style. In the nearly 40 years since, this foundational process has been improved in all manner of ways.

YouTube singing sensation Taryn Southern has constructed an LP composed and produced entirely with AI, using a reworking of Cope’s methods. On her album I AM AI, Southern uses an AI platform called Amper to input preferences such as genre, instrumentation, key and beats per minute. Amper, founded by film composers Drew Silverstein, Sam Estes and Michael Hobe, is an artificially intelligent music composer: it takes commands such as “moody pop” or “modern classical” and creates mostly coherent records that match in tone. From there, an artist can select specific changes to melody, rhythm, instrumentation and more.

Southern, who says she “doesn’t have a traditional music background,” sometimes rejects as many as 30 versions of each song generated by Amper from her parameters; once Amper creates something she likes the sound of, she exports it to GarageBand, arranges what the program has come up with and adds lyrics. Southern’s DIY model foretells a future of musicians making music with AI on their personal computers. “As an artist,” she says, “if you have a barrier to entry, like whether costs are prohibiting you to make something or not having a team, you kind of hack your way into figuring it out.” Her persistence in low-cost collaboration opened up partnerships with Samsung, Google and HTC, and she is now working on ways to make AI and VR art more accessible.

AI isn’t just a useful tool, though – it can be used to explore vital questions about the nature of human expression. This self-reflective impulse epitomises the ethic of New York’s art-tech collective the Mill. “The overarching theme of my work,” explains creative director Rama Allen, “is playing with the concept of the ‘ghost in the machine’: the ghost being the human spirit and the machine being whatever advanced technology we try to apply. I’m interested in the collaboration between the two and the unexpected results that can come from it.” This is the idea behind the Mill’s musical AI project See Sound – a highly reactive sound-sculpture program driven by the human voice – which debuted at last year’s SXSW festival in Austin. Hum, sing or rap and See Sound etches a digital sculpture from your vocals on its colourful interface. From there, Allen and his team 3D-print the brand-new shape. The sculptures themselves are, Allen says, “like a fingerprint”.

The project was born when Allen saw London beatboxer Reeps One live and began to think about the reciprocal relationship between his machine mimicry and music. “I thought, what if I could create another piece of technology that allows him to visualise a shape in his mind, and breathe that shape into existence?” See Sound is one of the most intriguing examples of the ways artists and AI can learn from one another to create something new – advancing data-driven input into a genuinely two-way relationship between man and machine.

But while Allen, Southern and the Dadabots exemplify an ideal harmony, the reality is that artificial intelligence is an industry, and the utopian synergy of the experimenters’ projects will undoubtedly give way to manipulation – even outright exploitation – by commerce. Spotify’s brilliant machine-learning program delivers recommendations to its listeners, and the company hired AI savant François Pachet to deepen its entrenchment in the field. Pachet previously worked on Sony’s Flow Machines, a program that uses AI to compose pop songs. Spotify has already been accused of pushing fake artists in its playlists – mere pseudonyms for conveyor-belt pop made by a team of shadowy producers to evade royalty payouts, a charge it denies – so will it begin to produce its own AI-developed music? Amazon’s AI assistant Alexa features a new skill called DeepMusic, which interweaves layers of audio samples to match the vibe set by users. And while the tunes sound rather too much as if a computer drummed them up, the fact that Amazon can already pipe AI-developed music into homes makes for a murky interplay between corporation, in-home AI assistant and creator.

As the Guardian reported last year, startups such as AI Music are working on tools that “shape shift” existing songs to match the context in which they’re played – switching up the rhythm while the listener is driving, say, or increasing the bass while they’re jogging. Patrick Stobbs, co-founder of AI music composer Jukedeck, recently said that the program would move from tweaking music to actually synthesising it. The London-based company, whose customer base requires what has been described as “functional music”, has already produced work for the likes of Coca-Cola and Google. You can imagine a future in which not only hold music but also music for adverts, TV series and unobtrusive dinner party soundtracks is produced by AI; like the blue-collar and, increasingly, white-collar workers whose jobs are being replaced by AI, composers could be under threat too.

But artists who use AI are adamant that their work aims to augment the lives of artists and non-artists alike, not to replace them. “At this point it’s more surprising to us to hear what humans will do with it,” Carr of the Dadabots tells me. “Everything we’re doing is one big scheme to collaborate with bands we love.”

An AI-assisted future raises questions around existing inequalities, corporate domination and artistic integrity: how can we thrive in a world of automation and AI-assisted work without exacerbating the social and economic schisms that have persisted for centuries? It’s likely we won’t. But in the most utopian vision, music will be many people’s first foray into machine learning, allowing collaboration that edifies the listener, the musician and the machine.
