The way art is made has always evolved with the times, whether through new techniques, new materials and tools, new subject matter or new mediums. These constant changes are not likely to stop; in fact, they will most likely take bigger and bigger leaps as time passes, societies grow and technology improves. This applies not only to painting and sculpture, but also to music. Just as pictorial representation went from cave paintings made with natural pigments and water to digital illustrations made on electronic devices, music has been created in many different ways throughout history, more recently with the growing possibility of relying on technologies like artificial intelligence not only for the recording, mixing and mastering processes, but also for composing and arranging.
Will rapidly evolving technologies take over the future of music? Are these technologies enriching the art of music creation, or is something lost when we rely on them? Opinions on the use of artificial intelligence in music creation are widely mixed, coming from artists, producers and consumers alike; but perhaps the answer lies in neither extreme and is, instead, a matter of balance.
Music is, undeniably, an integral part of our daily lives. It has become something that is present in our day-to-day activities, very much like a companion we want to keep around as much as we can to make our everydayness all the more enjoyable. According to the International Federation of the Phonographic Industry’s Music Consumer Insight 2018 report, people around the world spend around 17.8 hours per week listening to music at all points of the day, through different formats and devices, with technology making it increasingly accessible. The same report also indicates a considerable preference for electronic music, which reached third place in the chart of ‘The World’s Favorite Genres’. Furthermore, the Global Music Report, conducted by the same organization, states that the recorded music market grew by 9.7% in 2018, marking four consecutive years of growth, while total revenues for the same year reached $19.1 billion (IFPI). The industry and the market are expected to continue growing in the coming years. Statistics like these confirm just how much interest, and therefore how large a demand, there is for music. Taking this into consideration, it only makes sense that the way music is made will continue to evolve to meet the needs of fans around the world.
Just as the way music is consumed has changed with time, so has the way it is made. Technology has played a very important part in this evolution, providing increasingly sophisticated tools that contribute to the creation of this particular artistic expression. Going forward, we can only expect technology to work its way deeper into the processes that bring music to life. One of the most significant leaps in terms of tools for composing is artificial intelligence, and it could easily become a convenient tool for artists to adopt.
Taryn Southern, an American songwriter, said: “In the near future, I’m pretty certain we’ll see artists soon using machine learning for a plethora of music applications — to mix and master their songs, help them identify unique chord progressions, alter instrumentation to change style, and determine more interesting melody structures.” (Inc.)
Her statement speaks of things that are perfectly possible with software that already exists and is being used by artists and producers, allowing for a collaboration between creativity and technology. Nevertheless, the implementation of artificial intelligence often sparks debate, and this application is no exception.
Once we start coming across artistic projects that use tools like artificial intelligence to create musical works, it is hard not to wonder just how far such technology could go and how much it could contribute to the process. On the other hand, questions arise about how positive or negative such contributions are, and how much we as humans like or dislike the idea of art no longer being created exclusively by human creativity.
Neil Hannon, an Irish singer and composer, expressed his disapproval of the idea, saying he believed “music only gets worse the further you take people and humanity out of it.” Similarly, Marcus Mumford, an English musician, said, “I don’t know what the future of music is going to look like but if I’m not playing I don’t want no part of it.” (BBC)
These are only two examples of standpoints against the close involvement of AI in the creation of art, but the idea is shared by many others who also believe music loses too much when technology generates the content. For many people, much of the value that music harbors comes from the very humanity behind its creation; therefore, it should remain a matter of human creativity and expression.
There is, of course, a whole other side to the situation. Companies like IBM, Spotify and Google, among others, are working on software that relies on AI to help artists create music, and some options are already being used for that very purpose, offering a whole new way to practice the art of making music. Amper is an example of software that allows people to “express themselves and express their creativity.” According to Columbia Business School, all people need to give Amper to generate musical content in seconds is the mood and style they want to convey.
Drew Silverstein, co-founder and CEO of Amper Music, says: “We think that the future of music will be created through the collaboration between humans and AI. And that in everything we do as a company, we want that collaborative experience to propel the creative process of our work. I think AI moving forward, whether it’s in the creative or non-creative spaces, will help creators be more creative, be more effective, be more efficient.” (Columbia Business School)
The purpose of tools like Amper, then, seems to be to allow a larger number of people to practice this particular form of art with the help of AI, while also giving artists the ability to put content out into the world more efficiently, which could help meet the huge and constant demand for new music.
Music is perhaps one of the forms of art with the strongest ability to resonate with us in the deepest of ways. This is one of the reasons it is so significantly woven into our daily lives: it triggers our emotions like few things can, changes our mood, makes us dance and helps us sleep, takes us back to both happy and sad memories, and tells us stories and speaks of concepts that simply ‘click’ with us. According to an article on why our brains crave music, published on the TIME website, “music may tap into a brain mechanism that was key to our evolutionary progress. The ability to recognize patterns and generalize from experience, to predict what’s likely to happen in the future — in short, the ability to imagine — is something humans do far better than any other animals. It’s what allowed us to take over the world.” (TIME)
The previous statement speaks of imagination, a key element of creativity, which is in turn closely related to artistic expression and to what we understand as art itself. The English Oxford Dictionaries define art as “the expression or application of human creative skill and imagination, […], producing works to be appreciated primarily for their beauty or emotional power.” (Oxford Dictionaries)
Taking all of the above into consideration, it would seem music-making is understood as a very human endeavor, deriving its main value from the use of imagination to express something in a way that appeals to the senses and emotions. AI, so far, is not capable of experiencing human emotions, but it is able to identify patterns that represent human ideas and create with those as a basis.
So is there a place for this kind of technology in the artistic practice of making music, or does it take away artistic value? Until now, technologies like artificial intelligence have been used as tools that collaborate with artists in their creative processes, but the possibilities keep expanding, which in the future could mean less and less human involvement as artificially composed content becomes more common. The response from creators and the public varies greatly: some express concern that music might lose meaning and value as the participation of human creativity decreases in favor of the efficiency and approachability that AI offers, while others encourage artists to rely on these technologies as easier and more effective ways to express themselves through collaboration with AI.
We cannot tell for sure whether the future of music will indeed be all about artificial intelligence, but I do think it is likely that AI will continue to make its way into the composing and producing realms. The demand for new musical content won’t stop growing either, and AI will certainly become a very convenient tool to turn to. I wouldn’t say artists should reject the collaboration proposed by companies like Amper Music, but there certainly needs to be a balance between what AI contributes and what comes from human creativity, which I would say is absolutely irreplaceable. The way we humans translate our ideas, thoughts, unique concepts and emotions into art such as music should always continue to be cultivated, because it will become even more precious and valuable as our world grows increasingly consumed by technology.