Can computers be creative? That’s a question bordering on the philosophical, but artificial intelligence (AI) can certainly make music and artwork that people find pleasing. Last year, Google launched Magenta, a research project aimed at pushing the limits of what AI can do in the arts. Science spoke with Douglas Eck, the team’s lead in San Francisco, California, about the past, present, and future of creative AI. This interview has been edited for brevity and clarity.
Q: How does Magenta compose music?
A: Learning is the key. We’re not spending any effort on classical AI approaches, which build intelligence using rules. We’ve tried lots of different machine-learning techniques, including recurrent neural networks, convolutional neural networks, variational methods, adversarial training methods, and reinforcement learning. Explaining all of those buzzwords is too much for a short answer. What I can say is that they’re all different techniques for learning by example to generate something new.
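To make "learning by example to generate something new" concrete, here is a toy sketch, far simpler than the neural techniques Eck names: a first-order Markov chain fitted to a melody, which then samples new note sequences. The melody, function names, and parameters are illustrative assumptions, not Magenta code.

```python
import random

def train_markov(notes):
    """Learn by example: record which note tends to follow each note."""
    model = {}
    for a, b in zip(notes, notes[1:]):
        model.setdefault(a, []).append(b)
    return model

def generate(model, start, length, seed=0):
    """Generate something new: repeatedly sample a plausible next note."""
    rng = random.Random(seed)
    seq = [start]
    for _ in range(length - 1):
        choices = model.get(seq[-1]) or [start]  # dead end: restart on start note
        seq.append(rng.choice(choices))
    return seq

# Toy "training corpus": a melody fragment as MIDI pitch numbers
melody = [60, 62, 64, 65, 64, 62, 60, 62, 64, 62, 60]
model = train_markov(melody)
print(generate(model, 60, 8))
```

The recurrent and adversarial methods Eck mentions replace this lookup table with learned neural representations, but the workflow is the same: fit a model to example sequences, then sample from it.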
Q: What examples does Magenta learn from?
A: We trained the NSynth algorithm, which uses neural networks to synthesize new sounds, on notes generated by different instruments. The SketchRNN algorithm was trained on millions of drawings from our Quick, Draw! game. Our most recent music algorithm, Performance RNN, was trained on classical piano performances captured on a modern player piano. I’d like musicians to be able to easily train models on their own musical creations, then have fun with the resulting music, further improving it.
Q: How has computer composition changed over the years?
A: Currently the focus is on algorithms that learn by example, i.e., machine learning, instead of using hard-coded rules. I also think there’s been increased focus on using computers as assistants for human creativity rather than as a replacement technology; examples include our work and Sony’s “Daddy’s Car” [a computer-composed song inspired by The Beatles and fleshed out by a human producer].
Q: Do the results of computer-generated music ever surprise you?
A: Yeah. All the time. I was really surprised at how expressive the short compositions were from Ian Simon and Sageev Oore’s recent Performance RNN algorithm. Because they trained on real performances captured in MIDI on Disklavier pianos, their model was able to generate sequences with realistic timing and dynamics.
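One way a model can capture the timing and dynamics of a captured performance is to serialize it into a vocabulary of discrete events before training. The sketch below encodes notes as NOTE_ON, NOTE_OFF, TIME_SHIFT, and VELOCITY tokens, similar in spirit to Performance RNN's event vocabulary, though the exact encoding, quantization, and function names here are illustrative assumptions rather than Magenta's implementation.

```python
def encode(notes, step=0.01):
    """Serialize notes into performance events.

    notes: list of (pitch, start_sec, end_sec, velocity) tuples.
    Returns a flat token sequence a sequence model could be trained on.
    """
    events = []  # (time, event_string) pairs
    for pitch, start, end, vel in notes:
        events.append((start, f"VELOCITY_{vel}"))   # dynamics
        events.append((start, f"NOTE_ON_{pitch}"))
        events.append((end, f"NOTE_OFF_{pitch}"))
    events.sort(key=lambda e: e[0])  # stable sort keeps same-time order

    tokens, clock = [], 0.0
    for t, ev in events:
        if t > clock:  # advance the clock in discrete time steps
            tokens.append(f"TIME_SHIFT_{round((t - clock) / step)}")
            clock = t
        tokens.append(ev)
    return tokens

# Two notes: middle C held half a second, then E a touch softer
print(encode([(60, 0.0, 0.5, 80), (64, 0.5, 1.0, 64)]))
```

Because time shifts and velocities are tokens like any others, a model trained on such sequences learns expressive timing and loudness along with the notes themselves, which is what made the generated performances sound realistic.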
Q: What else is Magenta doing?
A: We did a summer internship around joke telling, but we didn’t generate any funny jokes. We’re also working on image generation and drawing generation [see example below]. In the future, I’d like to look more at areas related to design. Can we provide tools for architects or web page creators?
Q: How do you respond to art that you know comes from a computer?
A: When I was on the computer science faculty at the University of Montreal [in Canada], I heard some computer music by a music faculty member, Jean Piché. He’d written a program that could generate music somewhat like that of the jazz pianist Keith Jarrett. It wasn’t nearly as engaging as the real Keith Jarrett! But I still really enjoyed it, because programming the algorithm is itself a creative act. I think knowing Jean and attributing this cool program to him made me much more responsive than I would have been otherwise.
Q: If abilities once thought to be uniquely human can be aped by an algorithm, should we think differently about them?
A: I think differently about chess now that machines can play it well. But I don’t see that chess-playing computers have devalued the game. People still love to play! And computers have become great tools for learning chess. Furthermore, I think it’s interesting to compare and contrast how chess masters approach the game versus how computers solve the problem—visualization and experience versus brute-force search, for example.
Q: How might people and machines collaborate to be more creative?
A: I think it’s an iterative process. Every new technology that made a difference in art took some time to figure out. I love to think of Magenta like an electric guitar. Rickenbacker and Gibson electrified guitars with the purpose of being loud enough to compete with other instruments onstage. Jimi Hendrix and Joni Mitchell and Marc Ribot and St. Vincent and a thousand other guitarists who pushed the envelope on how this instrument can be played were all using the instrument the wrong way, some said—retuning, distorting, bending strings, playing upside-down, using effects pedals, etc. No matter how fast machine learning advances in terms of generative models, artists will work faster to push the boundaries of what’s possible there, too.
Posted in Science Mag by Matthew Hutson