When Pharrell Williams accepted five Grammy Awards this year on behalf of the French group Daft Punk, the duo were dressed as robots. This may have foreshadowed a coming invasion by real music robots from France.
Computer scientists in Paris and the U.S. are working on algorithms enabling computers to make up original fugues in the style of Bach, improvise jazz solos à la John Coltrane, or mash up the two into a hybrid never heard before.
“We are quite close now to [programming computers to] generate nice melodies in the style of pop composers such as Legrand or McCartney,” says Francois Pachet, who heads Sony’s Computer Science Lab in Paris.
The commercial applications of such efforts may include endless streams of original music in shopping malls that can respond to crying babies with soothing harmonies, as well as time-saving tools for busy composers. But the questions raised by computerized composition are more abstract—touching on the nature of music, art, emotion, and, well, humanity.
The music-bots analyze works by flesh-and-blood composers and then synthesize original output with many of the same distinguishing characteristics. “Every work of music contains a set of instructions for creating different but highly related replications of itself,” says David Cope, a computer scientist, composer, and author who began his “Experiments in Musical Intelligence” in 1981 as the result of a composer's block.
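The article doesn't spell out the algorithms behind these systems, but style imitators of this kind are commonly built on statistical models of a corpus: analyze which notes tend to follow which, then generate new sequences from those learned transitions. The toy Markov-chain sketch below illustrates that general idea only; it is an assumption for illustration, not Pachet's or Cope's actual code, and the note names and parameters are invented.

```python
import random

def train_markov(melodies, order=2):
    """Map each run of `order` consecutive notes to the notes seen to follow it."""
    table = {}
    for melody in melodies:
        for i in range(len(melody) - order):
            context = tuple(melody[i:i + order])
            table.setdefault(context, []).append(melody[i + order])
    return table

def generate(table, seed, length=16, order=2):
    """Grow a new melody whose local patterns mimic the training corpus."""
    melody = list(seed)
    while len(melody) < length:
        context = tuple(melody[-order:])
        choices = table.get(context)
        if not choices:
            break  # dead end: this context never occurred in the corpus
        melody.append(random.choice(choices))
    return melody

# Hypothetical usage with a tiny invented "corpus" of one melody:
corpus = [["C", "D", "E", "D", "C", "D", "E"]]
table = train_markov(corpus, order=2)
new_melody = generate(table, seed=["C", "D"], length=5, order=2)
```

Real systems work with far richer representations (durations, harmony, dynamics) and far larger corpora, but the recombination principle — new output stitched from fragments of analyzed input — is the same one Cope's quote describes.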
“It’s truly impressive,” says jazz guitarist Pat Metheny, commenting on a track by a jazz-bot programmed by Pachet’s team to sound like sax legend Charlie “Bird” Parker blended with French composer Pierre Boulez. “I sent it to Chris Potter, the saxophone player in the band I am touring with right now, and asked him who the player was. He immediately started guessing people.”
The French robot that mashes up Parker and Boulez is a lot more advanced than most efforts at computer-penned music. For instance, another jazz-bot emulates Bill Evans with mixed results. Known for his heavenly flights of pianistic virtuosity, often while doped up on heroin, the classically trained Evans defined Cool Jazz on Miles Davis’s “Kind of Blue” outing, the most popular jazz album ever. Sony’s Evans-bot sounds more like it’s doped up on a cocktail of Thorazine and Windows 8. The lush chordings and rush of arpeggios are trademark Evans, but the ham-fisted dynamics and pointless melodies reveal that no one is home.
In 1950, World War II code-breaker and forefather of artificial intelligence Alan Turing introduced a blind test to see whether a computer could fool humans into believing they were communicating with another human. The test would determine, essentially, whether computers can “think.”
But can they swing? “I would submit that you can certainly make a computer swing,” says Brooklyn-based musician and technologist Eric Singer. “You can kind of jitter that swing a bit to make it sound more human.”
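Singer doesn't describe an implementation, but the two ingredients he names can be sketched simply: swing shifts each offbeat eighth note late (classically about two-thirds of the way through the beat, a triplet feel), and a few milliseconds of random jitter on every onset keeps the grid from sounding mechanical. The ratio and jitter width below are illustrative assumptions, not values from any real system.

```python
import random

def swing_onsets(n_beats, beat_ms=500.0, swing=2/3, jitter_ms=10.0):
    """Return onset times (in ms) for swung eighth-note pairs.

    swing=2/3 places the offbeat two-thirds through each beat (triplet swing);
    jitter_ms adds a small uniform random offset to every onset to "humanize" it.
    """
    onsets = []
    for beat in range(n_beats):
        start = beat * beat_ms
        # downbeat, nudged by a little random timing error
        onsets.append(start + random.uniform(-jitter_ms, jitter_ms))
        # swung offbeat, likewise nudged
        onsets.append(start + swing * beat_ms + random.uniform(-jitter_ms, jitter_ms))
    return onsets

# Hypothetical usage: two beats at 120 bpm (500 ms per beat)
times = swing_onsets(2)
```

With `jitter_ms=0` the output is a rigid 2:1 swing grid; the jitter is exactly the "kind of jitter that swing a bit" Singer describes.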
Singer helped devise a computerized band called the “Orchestrion” that Metheny recorded and toured with in lieu of live musicians in 2010. The orchestrion (also called a panharmonicon) was reportedly invented in 1805 by musician (and, some said, swindler) Johann Nepomuk Maelzel. Beethoven, a fan of early music tech, featured Maelzel’s musical automatons—powered by a bellows—in between symphonies at concerts in 1813.
David Cope has designed EMMY, an emulator named for the acronym of Cope’s “Experiments in Musical Intelligence” project at UC Santa Cruz and elsewhere. EMMY spools out miles of convincing music: from Bach chorale to Mozart sonata to Chopin mazurka to Joplin rag, and even a work in the style of her creator, Cope.
Ray Kurzweil, pioneer of music synthesizers and author of “The Age of Spiritual Machines,” is quoted on Cope’s website voicing the question inevitably raised by projects like EMMY: “When Cope’s program writes a delightful turn of musical phrase, who is the artist: the composer being emulated, Cope’s software, or David Cope himself?”
Metheny thinks he has the answer, and it’s flattering to humankind. “Instead of thinking of it as computer-generated music,” he says, “I tend to think more along the lines of ‘computer assisted,’ since whoever writes the code or whichever user sets the parameters is already going to be making many of the decisions about what the result might be like.”
Authorship questions aside, the rise of robots may have less-than-happy results for some humans. When Cope was getting started in the early 1980s, musicians’ unions were decrying another perceived foe of the working musician: the synthesizer. It was fast replacing horn sections and even whole bands in studios and on stages from Hollywood to Hong Kong.
Today the synth is a staple of most genres of recorded music. But the solo synth player who replaced a live band at a lounge in Las Vegas 30 years ago is existentially threatened by a DJ with a pre-recorded set of tracks on a MacBook.
So when the robots come for the DJs, who will speak out for them?
Maybe here's who: the enraged dance music fans who reacted to a parody article that went viral earlier this year about a robot DJ taking over a dance club. One defender of human DJs—who didn’t get the joke—commented “F**k you … a robot will never be able to read a [crowd] and play to a [crowd]… Who ever is creating this needs to be shot or worst … Robots will never fully be able to compare to a true entertainer!!!!”
A DJ’s limited duties at a live show—mostly keeping levels right and matching beat tempos between pre-selected tracks—were fodder for Andy Samberg’s SNL send-up called “When Will The Bass Drop?” “Davinciii,” a lampoon of superstar Avicii, looks to the crowd like he’s working the sounds behind the deck, but he’s really playing with model trains, frying an egg, painting a self-portrait, and doing anything else to stave off the boredom.
In reality, the dance music genre is mediated, if not ruled, by machines. “The power and sheer volume would not be possible without computers,” says Quilla, a composer and “topline” vocalist for Tiësto and other über DJs. “So one could argue that computers are already wildly succeeding in moving people's souls: making them laugh, cry, throw themselves down on the ground and thrash around in pools of other people's sweat.”
If a fist-pumping robot can rock a party at the Las Vegas nightclub Hakkasan, does that mean a computer can compose a symphony that brings tears to human eyes?
“I’m sure there are people who cry to Taylor Swift; I’m sure there are people who would listen to Rachmaninoff and be like, ‘dude, that is so boring,’” Singer says. “We can go deeper on this, like do humans actually have emotions or is it all just chemical and electrical brain impulses that are very complex?”
Read When Robots Write Songs on theatlantic.com