Robot playing piano (Credit: Alamy)

How to make sweet-sounding music with a hard drive

Artificial intelligence is being used by musicians to help compose melodies, write lyrics and even perform. It may only be a matter of time before a computer has a number one hit.

Asked how The Beatles approached songwriting, John Lennon quipped “on the M1 (motorway) – turn right, past London.” His songwriting partner, Paul McCartney, described the process as more of a long and winding road, in which the pair looked for chord shapes and then worked out a melody as if they were “doing a crossword puzzle”.

Their collaborative approach to music-making produced hits that resonate decades after they were written. Their music has carved its influence into rock and pop, shaping many of the bands that have followed them.

But the next era-defining musical partnerships may look very different from the LSD-fuelled creativity of Lennon and McCartney. Songwriters are instead turning to machines to help them come up with chords and even pen lyrics for them. It is likely to change the way music is created forever.

The first computer-generated score, the Illiac Suite, was created in 1957 by two researchers and the Illiac I computer at the University of Illinois at Urbana-Champaign. The ‘electronic brain’ simply generated random integers representing musical elements such as pitch and rhythm, which formed four movements. The piece caused “confusion bordering on hysteria” amongst music aficionados, one of whom, attending the first performance, likened it to the sounds of a barnyard.
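
To give a sense of how bare-bones that approach was, here is a minimal Python sketch of the random-generation idea: random integers mapped onto pitches and note lengths. It is purely illustrative, and the actual Illiac experiments also filtered the raw output through rules of harmony and counterpoint.

```python
import random

# Illustrative only: map random choices onto pitches and note lengths,
# loosely in the spirit of the Illiac Suite's random generation.
# (The real experiments also screened the output against compositional rules.)
PITCHES = ["C4", "D4", "E4", "F4", "G4", "A4", "B4", "C5"]
DURATIONS = [0.25, 0.5, 1.0, 2.0]  # fractions of a whole note

def random_phrase(length=8, seed=None):
    """Return a list of (pitch, duration) pairs chosen at random."""
    rng = random.Random(seed)
    return [(rng.choice(PITCHES), rng.choice(DURATIONS)) for _ in range(length)]

print(random_phrase(seed=1957))
```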

Today, however, a suite of AI software is capable of creating catchy new melodies in a matter of minutes, generating lyrics that tap into certain human emotions and even producing new sounds.

Taking inspiration from The Beatles, Sony CSL Research Laboratory’s Flow Machines project produced the world’s “first structured AI pop song”, Daddy’s Car, in 2016. To write the song, the AI software suggested chords and sounds based on the original music of the Fab Four, but a human composer was required to arrange and produce the final track. The same team went on to release music under the name Skygge, which is Danish for shadow; its Hello World album has notched up five million streams, half of them for the first single, Hello Shadow, featuring the Canadian singer Kiesza. BBC Culture described it earlier this year as possibly the first “good” AI album to be made.

And AI music is also finding its way into the charts. Music producer Alex da Kid’s track Not Easy, featuring X Ambassadors, Elle King and Wiz Khalifa, was a Top 40 hit on the Billboard chart in 2016. It used IBM Watson, a computer system capable of answering questions posed in natural language, to read blogs, news articles and social media to gauge the emotional sentiment around topical themes. It also analysed the lyrics of the top 100 songs for each week over the previous five years.

With this data, Watson “arrived at an emotional fingerprint of culture,” according to IBM, which was used to help create the song’s simple lyrics. Alex da Kid then used Watson Beat – IBM’s AI music-making software – to pick out musical elements that would be pleasing to the listener, meaning the AI partly wrote the hit song, or at least inspired parts of it.

The power of AI algorithms to crunch and analyse large amounts of data and to produce unusual arrangements is giving human artists new ways of making music.

“The opportunities and ideas of how you could integrate this tech are endless,” says singer Taryn Southern, who posts her music on her YouTube channel. She recently created a single called Break Free with the help of AI tools including Amper, IBM Watson Beat and Google Magenta, which aims to use machine learning to create “compelling art and music”.

“In terms of process, I start by making a series of decisions about what BPM, rhythm, key, mood, instrumentation I want and then essentially giving the AI feedback each time it generates a new possibility,” she explains. “This back and forth continues until I’m happy with the overall song. I then download, arrange and mix the stems into a structure.”

For Southern, and others in the music industry, it is only a matter of time before there is a number one hit that has been written by a machine.

In many ways, the use of AI is merely an extension of what has been happening in the music industry for generations. Technology such as multitrack recording, for example, meant songs no longer needed to be recorded in a single take. David Bowie used bespoke software called The Verbasizer, which generated random sentences to help him create lyrics. It is now fairly normal for musicians to use MIDI and virtual instruments on their tracks, while audio processors are used to tinker with vocals.

Some of the earliest uses of AI in music were to impersonate musical styles. Created in 1987 by David Cope, a former professor of music at the University of California, Santa Cruz, an intelligent machine called Experiments in Musical Intelligence analyses a database of pieces from a particular musical style, extracting rules that it then applies to create a unique composition that fits within the genre. The structured, rule-based composition of many types of classical music made it perfect for a computer to replicate. Cope’s software has emulated more than 1,000 pieces of music in the styles of 39 classical composers, some of which have been commercially recorded.

Cope says the works have delighted, angered, provoked, and terrified those who have heard them, but overall, reactions have grown more positive over time.

Since Cope’s early work there have been major advances in artificial intelligence research thanks largely to a field known as machine learning. By creating algorithms that, to some degree, replicate the behaviour of neurons in the brain, it has been possible to create AI networks that can analyse and learn from unstructured sets of data. This has allowed machines to unlock some of the secrets behind other complex forms of music, like folk music.

A “folk machine” created by researchers at KTH Royal Institute of Technology, Stockholm and Kingston University, London, has churned out a staggering 100,000 new folk tunes in just 14 hours. It is an output that dwarfs even the most prolific of human composers, as it takes about half a second to generate one tune. The researchers used an off-the-shelf AI method called a recurrent neural network (RNN), a form of machine learning that essentially predicts what comes next based on what it has previously seen, and fed it 25,000 traditional Celtic and English folk songs collected from a website to train their software.

“The resulting computer models show some ability to repeat and vary patterns in ways that are characteristic of this kind of music,” says Bob Sturm, who led the project at Queen Mary University of London and is now an associate professor of computer science at KTH Royal Institute of Technology in Stockholm, Sweden. “It was not programmed to do this using rules – it learned to do so because these patterns exist in the data we fed it.”
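
For a sense of what “predicting what comes next” looks like in practice, here is a rough Python/PyTorch sketch of a character-level recurrent model over tunes stored as text (the folk machine worked on tunes transcribed in a text format). The model class, hyperparameters and sampling loop below are illustrative assumptions, not the researchers’ actual code.

```python
import torch
import torch.nn as nn

# Sketch of a character-level recurrent model over folk tunes stored as text.
# In training, such a model is fit with cross-entropy loss to predict each
# character of the training tunes from the characters that came before it.
class TuneRNN(nn.Module):
    def __init__(self, vocab_size, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        return self.out(h), state

def sample_tune(model, char_to_id, id_to_char, start="X:1\n", max_len=400):
    """Generate one tune by repeatedly sampling the next character."""
    model.eval()
    ids = [char_to_id[c] for c in start]
    state = None
    with torch.no_grad():
        # Warm up the hidden state on the prompt, then sample one char at a time.
        logits, state = model(torch.tensor([ids]), state)
        for _ in range(max_len):
            probs = torch.softmax(logits[0, -1], dim=-1)
            next_id = torch.multinomial(probs, 1).item()
            ids.append(next_id)
            logits, state = model(torch.tensor([[next_id]]), state)
    return "".join(id_to_char[i] for i in ids)
```

Nothing in this sketch encodes the rules of folk music; any sense of style it produces comes entirely from the patterns in the tunes it is trained on.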

Surprisingly, despite the folk machine’s enormous output, about one in five of the tunes is “actually fairly good”, according to professional musicians who were asked to look at a sample of 3,000 of them. Sturm says that the less rigid nature of folk music – where performers use compositions as a template to elaborate upon – may have been well suited to this sort of AI music generation.

“The musicians found interesting features and some patterns that are unusual but work well within the style,” he says.

Sturm and his colleagues have also challenged a group of traditional musicians to create an album of AI folk music, to test how plausible this approach to folk composition can be. The resulting album can be listened to online, and the researchers have also published a version of their algorithm so that anyone can create their own AI folk songs.

Ultimately, however, Sturm hopes that an ensemble of robots will one day perform the computer-generated tunes all by themselves.

Performances by machines may indeed be closer than many people think. A beatboxer and experimental vocalist called Reeps One, whose real name is Harry Yeff, has been working with CJ Carr, a deep learning expert, to train a computer to perform verbal tricks by feeding it hours of himself beatboxing. The result is what he calls a “second self” that he can interact with and has used to create a ground-breaking composition where he has a beatboxing “conversation” with the machine. The AI, which he created in a collaboration with Nokia Bell Labs, has even produced new sounds that Yeff has then taught himself to replicate.

“We’re able to create an echo chamber that leads to something new,” he says.

Google’s DeepMind team is working to take this concept even further with a project called WaveNet, described as “a deep generative model of raw audio waveforms… able to generate speech which mimics any human voice”. Using a neural network loosely modelled on the human brain, it takes in audio and can then generate new audio in a similar style. It raises the possibility of AI not only writing music and lyrics but also singing them.

Already, developments in a field known as natural language generation are producing machines capable of writing convincing-looking lyrics. Researchers at the University of Antwerp and the Meertens Institute in Amsterdam have created a rap song generator they call Deep Flow. They fed a language-processing algorithm a vast collection of rap and hip hop lyrics to teach it to come up with its own. The results are foul-mouthed but realistic-looking lyrics, so much so that the research team created an online game that challenges rap fans to distinguish between real hip hop lyrics and those spat out by their machine.

Folgert Karsdorp, a researcher at the Meertens Institute who is involved in the project, says it is only currently possible to generate a few lines of convincing lyrics and that repetition is usually the giveaway.

“As soon as you start generating longer text, like entire songs, the coherence gets lost. You could say that these models have the memory of a goldfish,” he says.
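
A toy example makes the “goldfish memory” point concrete. The word-level generator sketched below in Python is far cruder than Deep Flow and only ever looks one word back when choosing the next word, so individual lines can sound plausible while anything longer quickly loses the thread.

```python
import random
from collections import defaultdict

# Toy word-level bigram generator, not Deep Flow itself: it only ever
# looks one word back, which is why longer passages lose coherence.
def train_bigrams(lines):
    table = defaultdict(list)
    for line in lines:
        words = ["<s>"] + line.split() + ["</s>"]
        for a, b in zip(words, words[1:]):
            table[a].append(b)
    return table

def generate_line(table, rng=random):
    word, out = "<s>", []
    while True:
        word = rng.choice(table[word])
        if word == "</s>" or len(out) > 20:
            break
        out.append(word)
    return " ".join(out)

# A tiny stand-in corpus; the real system was trained on a vast lyric collection.
corpus = ["keep it real keep it moving",
          "real talk from the heart",
          "moving through the city at night"]
table = train_bigrams(corpus)
print(generate_line(table, random.Random(7)))
```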

In another dark corner of the musical spectrum, CJ Carr and Zack Zukowski, who together are the Dadabots, are using AI software to generate new black metal tracks. They train their machine on the raw acoustic waveforms of metal albums. As it listens, it tries to guess what will come in the next fraction of a millisecond, playing this ‘game’ millions of times to come up with a tune that the duo sometimes layer into atmospheric compositions.
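
Stripped to its essentials, that guessing game is an autoregressive loop over raw audio samples. The Python sketch below is a stand-in, with a dummy model in place of the Dadabots’ trained network, but it shows the shape of the process: predict a distribution over the next sample, pick one, append it, and repeat tens of thousands of times per second of audio.

```python
import numpy as np

# Schematic next-sample prediction over 8-bit quantised audio at 16 kHz.
# A placeholder model stands in for a trained network; the point is the loop.
SAMPLE_RATE = 16_000
LEVELS = 256  # 8-bit quantisation of the waveform

def generate(model, seconds=1.0, context_len=1024, rng=np.random.default_rng(0)):
    """model(context) -> probability distribution over the next sample's level."""
    audio = list(rng.integers(0, LEVELS, size=context_len))  # seed context
    for _ in range(int(seconds * SAMPLE_RATE)):
        probs = model(np.array(audio[-context_len:]))
        audio.append(int(rng.choice(LEVELS, p=probs)))
    return np.array(audio[context_len:], dtype=np.uint8)

# A stand-in 'model' returning a uniform distribution, so the sketch runs
# without a trained network (and produces noise rather than metal).
uniform_model = lambda context: np.full(LEVELS, 1.0 / LEVELS)
clip = generate(uniform_model, seconds=0.01)
print(clip.shape)  # 160 generated samples
```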

“Different influences from the music fuse together so you get a cluster of sounds blending to create a weird hybrid crossover,” says Zukowski. In the case of metal, this means screaming and guitars, which sound new and yet familiar at the same time. Although the technology is in its infancy, the pair believe the next generation of music bots will be simpler to use and able to create music in real time, mixing influences to produce something unique but on demand. Carr believes this could be the ultimate in machine-human music collaboration and blur the lines between composers and consumers.

It is a sign that something fundamental is about to change in the way we consume music.

“We are entering an era of hyper-personalisation, in which consumers expect services tailored to their own tastes,” says Geoff Taylor, chief executive of the British Phonographic Industry (BPI). Apple Music, Deezer and Spotify all use AI to analyse users’ behaviour and suggest new tracks listeners might like. A new strain of emerging AI technology also makes use of contextual data, according to the BPI’s recent “Music’s Smart Future” report. For example, Google Play can use information like location, activity, and the weather to try to provide the right song at the right time.

Combining this ability to learn about consumers with AI composition is also leading to some worrying trends for many musicians, who are already suffering from the shift to music streaming services. Rather than playing tracks produced by musicians, streaming services could have their own AI music bots churn out music, note by note, tailored to each customer’s taste.

Amazon’s Echo smart speakers already have a DeepMusic function that allows consumers to generate their own tunes and play them instantly in their homes. The technology is arguably blurring the line between artist and listener.

Elsewhere, computers are slowly taking over the composition of background music, or muzak, defined by some as ‘functional’ music, again helping to sidestep the difficulties of navigating rights and royalty payments. Jukedeck has taught its AI the elements of composition so that its software can produce original music note by note.

“We train these neural networks with existing examples of music and they pick up the features of these examples and learn how to make them their own,” says Ed Newton-Rex, founder of the London-based company. Jukedeck doesn’t aim to emulate existing composers; instead, its software generates personalised music, allowing YouTube video makers to shape a melody to their video so they get a unique soundtrack. Similarly, another piece of software called Amper has been built as a collaborative tool for humans to put their own mark on computer-generated tracks for videos, but it also allows them to create music simply by choosing a mood, a style and how long they would like the piece of music to last.

“You can give it feedback – what you liked, didn’t like, what you’d like to change, and you can get a revision of that music that’s been created for you,” says Drew Silverstein, co-founder of Amper Music.

For amateur film and music makers, this technology could help open up the creative industries. The BPI’s report into the future of music found that two of the biggest pain points in any creative process are time and cost, and that AI can help to “significantly” reduce both.

But all is not lost for those who still value the human touch in their music – AI is also speeding up the process of discovering new talent. A British company called Instrumental has developed software that crunches data from Spotify, YouTube, Instagram and other platforms to identify the next generation of hit musicians who are uploading their own music. The company then signs them to development deals to help them grow. Music producer Alex da Kid is also using AI-powered searches to sift through the emotional data of a huge number of artists on Spotify to identify singers he might be able to collaborate with.

Mike Kestemont, an assistant professor at the University of Antwerp who was also involved in the Deep Flow rap machine, believes that humans will remain an essential part of the artistic process partly because of controversies about whether something produced by a machine can be considered art.

“Many people say that isn’t possible because art is social and something that happens between people,” he says. “So if you have to recognise the authorship of a machine, you’d also have to recognise in part machines are a part of human society, which is one leap too far for many people.”

Newton-Rex also doubts that AI will ever be able to emulate the creative genius of some human artists.  

“I think there are elements of Bowie and Bach that may well be untouchable by a computer,” he says. Paul McCartney has also said in the past that the songwriting partnership he had with John Lennon is “impossible” to replicate because of the duo’s close relationship as teenagers.

It is this humanity in the songs of The Beatles and other musicians that many are sceptical machines will ever be able to match. After all, our species is thought to have been embracing music for some 50,000 years. Its power taps into something deep within our brains.

Reassuringly, even if there is an AI equivalent of Mozart or McCartney out there, it is doubtful it would recognise its own genius, says Kestemont. That is for human listeners to do, and while we hold this power, our own creativity is sacred.

But in the words of The Beatles, tomorrow never knows.
