Will machines ever be able to replace or replicate human creativity? That is a question that we repeatedly ask ourselves as we continue to innovate and invent new creative tools. The printing press, the gramophone, the camera, the camcorder, the typewriter, the synthesizer, word processors, photo editing software, and many other tools we have invented over the past centuries have brought fundamental changes to creativity and arts.
But what has remained constant throughout history is the human element. Though affected by those inventions, human thought has remained central to creativity.
Will that change with artificial intelligence? I think not.
It was with that mindset that I picked up Arthur I. Miller’s new book The Artist in the Machine: The World of AI-Powered Creativity. And while I can’t say that the book changed my mind—and I don’t think Miller’s goal is to assert that AI will replace human creativity—I have come to better appreciate the changes that AI is bringing to our creative arts.
The role of artificial intelligence in creative arts
The interest in using AI algorithms to create works of art is not new; it dates back decades. But recent advances in neural networks and deep learning have spurred innovation and activity in the field. In The Artist in the Machine, Miller does a great job of taking stock of dozens of projects and initiatives that explore the use of different AI technologies to create visual art, music, poetry, and stories.
He also interviews many people who are at the forefront of creative AI (several of whom I’ve had the pleasure to speak to in the past few years).
The list is too comprehensive to cover in full here, but here are some of the key advances spurred by AI and their implications for creativity.
One of the areas where artificial intelligence has made its greatest contribution is enabling more people to express themselves creatively, regardless of their skill in wielding brushes and pastels. Artists can also find inspiration and new ideas in the eccentric workings of AI algorithms.
One interesting application is style transfer, first proposed by Leon Gatys in a 2015 paper titled “A Neural Algorithm of Artistic Style.” The technique uses convolutional neural networks to map the style of one image onto another. For instance, you can take a photograph and a van Gogh painting and apply the style of the latter to the former.
Style transfer has become popular and has found commercial applications in social media platforms. “I want to have a machine that perceives the world in a similar way as we do, then to use that machine to create something that is exciting to us,” Gatys tells Miller in The Artist in the Machine.
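At the heart of Gatys’s method is a simple statistical summary: the “style” of an image at a given network layer is captured by the Gram matrix of that layer’s feature maps, and style transfer minimizes the mismatch between the Gram matrices of two images. The sketch below shows that computation in plain Python on tiny hand-made “feature maps”; a real implementation would extract activations from a pretrained network such as VGG, and the numbers here are invented purely for illustration.

```python
# Minimal sketch of the style representation in Gatys et al.'s method:
# the style of an image at one CNN layer is summarized by the Gram matrix
# of that layer's feature maps (correlations between channels). The tiny
# hand-made "feature maps" below stand in for real CNN activations.

def gram_matrix(features):
    """features: list of C channels, each a flat list of H*W activations.
    Returns the C x C matrix of normalized channel-to-channel dot products."""
    C = len(features)
    N = len(features[0])
    return [[sum(features[i][k] * features[j][k] for k in range(N)) / N
             for j in range(C)] for i in range(C)]

def style_loss(features_a, features_b):
    """Mean squared difference between the two Gram matrices -- the quantity
    style transfer minimizes, while a separate content loss preserves the
    photograph's layout."""
    ga, gb = gram_matrix(features_a), gram_matrix(features_b)
    C = len(ga)
    return sum((ga[i][j] - gb[i][j]) ** 2
               for i in range(C) for j in range(C)) / C ** 2

# Two 2-channel "images" with 4 activations per channel (hypothetical values)
photo_feats = [[1.0, 0.0, 1.0, 0.0], [0.0, 1.0, 0.0, 1.0]]
paint_feats = [[1.0, 1.0, 0.0, 0.0], [1.0, 0.0, 1.0, 0.0]]

print(style_loss(photo_feats, photo_feats))  # identical styles -> 0.0
print(style_loss(photo_feats, paint_feats))  # differing styles -> positive
```

In the full algorithm, the generated image is iteratively adjusted to shrink this style loss against the painting while keeping its content loss against the photograph low.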
Pix2Pix, another AI algorithm, can convert a rough sketch, say the silhouette of a handbag or a shoe, into a realistic photograph. Pix2Pix uses a specialized form of generative adversarial network (GAN), a type of AI algorithm that has become famous for creating photorealistic fake faces and wild works of art. GANs have been pivotal to many creative AI projects, including a painting that sold for more than $432,000.
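The adversarial game behind GANs can be summarized in a few lines: a discriminator is penalized for misclassifying real and generated samples, while the generator is penalized when the discriminator sees through its output. The toy sketch below shows only that loss bookkeeping, with made-up discriminator scores standing in for a trained network; it is not a trainable model.

```python
import math

# A GAN pits two models against each other: a discriminator D tries to tell
# real samples from generated ones, while the generator G tries to fool it.
# The probabilities passed in below are invented for illustration, not the
# output of an actual trained discriminator.

def discriminator_loss(d_real, d_fake):
    """Binary cross-entropy: reward D for scoring real samples near 1
    and generated samples near 0."""
    return -(math.log(d_real) + math.log(1.0 - d_fake))

def generator_loss(d_fake):
    """Non-saturating generator loss: reward G when D scores its output
    near 1, i.e. mistakes it for real."""
    return -math.log(d_fake)

# Early in training: D easily spots the fake (scores it 0.1)
print(discriminator_loss(d_real=0.9, d_fake=0.1))  # low loss for D
print(generator_loss(d_fake=0.1))                  # high loss for G

# Later: G fools D (fake scored 0.8), so G's loss falls
print(generator_loss(d_fake=0.8))
```

Pix2Pix adds a conditioning signal to this game: the generator sees the input sketch, and the discriminator judges sketch-photo pairs rather than lone images.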
“Pix2Pix empowers people who may not have the requisite motor skills and technical skills to express their creativity,” says Phillip Isola, the creator of Pix2Pix. “It allows mixing of science and art together, offering a means to show data in a way that’s provocative, emotional, and compelling.”
Artist Mario Klingemann used Pix2Pix to transform portraits into eerie, award-winning paintings. He believes that AI can help spur human creativity to a new level. In The Artist in the Machine, he tells Miller that we humans are incapable of true creativity because we only build on what we have learned and what others have done before us. Machines, on the other hand, can create from scratch and will one day liberate us. “I hope machines will have a rather different sort of creativity and open up different doors,” he says.
AI creativity has also found its way into the music industry. There have been several exploratory projects on using AI to compose music. So far, there has been remarkable progress, but for the most part, the algorithms can come up with interesting structures that need to be further developed by a human composer.
One of the notable projects in the field is Flow Machines, led by François Pachet, director of the Spotify Creator Technology Research Lab. Flow Machines uses Markov models to analyze musical patterns and create new ones. In 2016, Pachet used the system to create “Daddy’s Car,” a song inspired by the works of the Beatles. The AI put together the basic tune; human composers then complemented the work with harmonies, instrumentation, and lyrics.
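Pachet’s Markov-model approach can be reduced to a minimal sketch: learn note-to-note transition counts from a corpus, then random-walk the resulting table to produce new melodies. The toy “corpus” of note names below is invented for illustration; the real system models far richer musical structure (harmony, rhythm, stylistic constraints) than this first-order chain.

```python
import random
from collections import defaultdict

# First-order Markov chain over note names: count which note follows which
# in a (toy, invented) corpus, then sample new sequences from those counts.

def train_markov(corpus):
    """Collect observed note-to-note transitions from a list of melodies."""
    transitions = defaultdict(list)
    for melody in corpus:
        for current, nxt in zip(melody, melody[1:]):
            transitions[current].append(nxt)
    return transitions

def generate(transitions, start, length, seed=0):
    """Random-walk the transition table to produce a new melody."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        choices = transitions.get(melody[-1])
        if not choices:          # dead end: no observed continuation
            break
        melody.append(rng.choice(choices))
    return melody

corpus = [
    ["C", "E", "G", "E", "C"],
    ["C", "E", "G", "A", "G", "E"],
    ["E", "G", "A", "G", "C"],
]
table = train_markov(corpus)
print(generate(table, start="C", length=8, seed=42))
```

Every transition in the output was observed somewhere in the corpus, which is exactly why such models sound plausible locally but need a human composer to impose larger-scale form.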
“For me, creativity is pretty much a social thing, not an objective thing, especially in music,” Pachet tells Miller in The Artist in the Machine. “Society will decide whether someone is creative or not.”
Another interesting project (which I had the chance to explore a few years ago) is folk-rnn, a recurrent neural network created by researchers at Kingston University and Queen Mary University of London.
folk-rnn has been trained on a corpus of Irish folk music and can generate sequences of notes strikingly similar to what a listener would expect from Celtic music. Musicians I talked to spoke well of folk-rnn’s results: although in many cases the songs needed to be tuned and adjusted, they contained some very interesting sequences that inspired new ideas.
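The mechanism behind a system like folk-rnn can be sketched in miniature: a recurrent cell carries a hidden state, turns it into a probability distribution over the next musical token, samples from that distribution, and feeds the result back in. The weights below are random rather than trained on Irish transcriptions (the real model is an LSTM trained on thousands of ABC-notation tunes), so the output is noise; only the sample-and-feed-back loop is the point.

```python
import math
import random

VOCAB = ["C", "D", "E", "F", "G", "A", "B", "|"]  # toy token set

rng = random.Random(0)
HID = 4
# Random (untrained) parameters: input->hidden, hidden->hidden, hidden->output
W_in = [[rng.uniform(-1, 1) for _ in range(HID)] for _ in VOCAB]
W_hh = [[rng.uniform(-1, 1) for _ in range(HID)] for _ in range(HID)]
W_out = [[rng.uniform(-1, 1) for _ in VOCAB] for _ in range(HID)]

def step(token_idx, hidden):
    """One recurrent step: mix the input token with the previous hidden
    state, then softmax the output into next-token probabilities."""
    new_h = [math.tanh(W_in[token_idx][j]
                       + sum(hidden[i] * W_hh[i][j] for i in range(HID)))
             for j in range(HID)]
    logits = [sum(new_h[i] * W_out[i][k] for i in range(HID))
              for k in range(len(VOCAB))]
    exps = [math.exp(l) for l in logits]
    total = sum(exps)
    return new_h, [e / total for e in exps]

def sample_tune(length, seed=1):
    """Sample a token, feed it back in, repeat -- the generation loop."""
    srng = random.Random(seed)
    hidden = [0.0] * HID
    idx = srng.randrange(len(VOCAB))
    out = [VOCAB[idx]]
    for _ in range(length - 1):
        hidden, probs = step(idx, hidden)
        idx = srng.choices(range(len(VOCAB)), weights=probs)[0]
        out.append(VOCAB[idx])
    return out

print(" ".join(sample_tune(12)))
```

Training replaces the random weights with ones that make each next-token distribution match the corpus, which is what makes the sampled sequences sound Celtic rather than random.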
In The Artist in the Machine, Dr. Bob Sturm, a digital media lecturer at Queen Mary University and co-creator of folk-rnn, tells Miller that the AI is “really not there yet” and asserts that he doesn’t think the AI is composing. “It’s creating transcriptions that can be used to create music. It’s hard to keep this distinction going because the Daily Mail will write that machines are going to replace composers. We want to avoid any machine versus human message; the computer is meant to enhance our creativity in music-making,” he says.
For other composers, such as David Cope, the key to AI creativity lies in humans changing their perception. Miller writes: “Cope feels computers can be creative and that to enjoy his music listeners need to set aside their prejudices about creativity being unique to people. But he also believes that computers are only machines and lack human qualities such as emotions.”
Others have tried to help people overcome those prejudices by anthropomorphizing artificial intelligence. In The Artist in the Machine, Miller explores Haile, Shimon, and Shimi, three robots created by Gil Weinberg and Mason Bretan from the Georgia Institute of Technology. The robots don’t sport any special technology that would set them apart from other creative artificial intelligence projects.
But what makes them different is that they physically play their songs on musical instruments as opposed to using a media player application. Shimon can also improvise note sequences in real time using artificial intelligence algorithms that analyze the music being played by other people. Shimi taps its feet and nods its head. And Miller describes Haile, created in 2006, as “the first robot to physically make music rather than playing through speakers.”
“Through the power of artificial intelligence, signal processing, and engineering I firmly believe it is possible for machines to be artistic, creative, and inspirational,” Bretan told Miller. Bretan also believes that the robot’s most important quality is not so much its music but how “the robot moves its head when it detects a beat, how it looks at people. There is much more to music-making than what note has been generated. There is physicality and embodiment. That’s what’s cool and makes people buy tickets.”
Understanding the creative process
There’s a lot of literature that compares the human brain and computers as information processing machines. Per Miller: “The definition of creativity as the production of new knowledge from already existing knowledge, accomplished by problem-solving, applies equally to the brain as an information-processing system and to the computer. It takes into account both the final product and the process of producing it. For us, thinking consists of receiving perceptions that the brain acts on and uses to create new knowledge. Similarly, the computer is fed data, which it processes and uses to generate, for example, art, literature, or music.”
And as long as we think of creativity as pure information processing, it’s easy to envision artificial intelligence replacing it entirely. AI algorithms can work wonders when you know exactly what problem you want to solve. This holds for tasks such as classifying images, recognizing speech, and playing games. In each of these cases, you either know the exact rules for solving the problem or have plenty of examples that map inputs (e.g., images) to their corresponding outputs (e.g., labels). Therefore, you can create an AI algorithm that replicates human results with remarkable accuracy.
But human creativity is much more than just mapping inputs to outputs. Miller breaks down creativity into four stages: conscious thought, unconscious thought, illumination, and verification.
“Consciously working on a problem primes the unconscious to continue this work, even when we are no longer consciously thinking about it,” Miller writes. This is why you read all those stories about great inventors having moments of epiphany while in the shower or strolling in the woods.
In this sense, creativity is an intimately human and complicated experience. You can’t provide a recipe for it because there are so many moving parts, and we still don’t understand many of them. Aside from your active thoughts, your past experiences play a very important role in the decisions you make. In this respect, every person and every creative work is unique.
Therefore, it would be virtually impossible to create a rule-based AI system that could imitate the human creative process. Neither could you gather enough examples that can encompass creativity as a whole and be used to train a neural network on creativity.
Our creativity is also very subjective, and you’ll rarely find two people that will totally agree on what is and isn’t creative. Current AI systems can, at best, replicate parts of the creative process, but fail to recreate it in its entirety.
But in The Artist in the Machine, Miller provides an alternative explanation to all the jumble of thoughts and emotions that go into the human creative process. “Essential to the process are information on the problem at hand, background knowledge, and reasoning methods,” he writes. “The brain assesses each of the resulting combinations of facts using aesthetics along with other criteria, depending on the field. We then reject most combinations, sometimes using our intuition. Intuition is a much-misunderstood notion. It is nothing more than the culmination of experience, of having made numerous mistakes and thought deeply about them.”
In a way, he might be right. Many tasks we thought required attributes unique to the human mind have proven to be solvable through pure mathematics. Think of the brute-force search algorithms that powered Deep Blue, or the deep learning and tree-search algorithms that powered AlphaGo. None of those AI systems possessed the common sense and general problem-solving capabilities of the human brain. Yet they mastered complicated games at a level that rivals and exceeds that of human champions.
As Miller says, “Machines can increasingly teach themselves how to perform complex tasks that not long ago were thought to require the unique intelligence of humans.”
For the moment, we don’t have any evidence that human creativity can also be encoded in dumb, information-processing algorithms. But as with previous successes in AI, we might find surprises in our own creations.
This story is republished from TechTalks, the blog that explores how technology is solving problems… and creating new ones.