Why AI Started With Art
I think it is now widely accepted that AI is here to stay, making its way into our everyday lives in many unexpected ways, slowly or quickly depending on your point of view.
As a music producer, I have to admit I was initially surprised that the creative industries were among the first to become saturated. First came the deepfakes, AI artwork and video generators, and then the music generators appeared. Anyone using Logic Pro will know that AI is already embedded in many of its tools and plug-ins.
Like many people, I assumed AI would first appear in industries such as finance, law, or medicine, professions that deal predominantly with numbers and data. I made that assumption because of my basic understanding of computing.
But it turns out that the companies developing and training AI need to let it evolve in the real world in a way that is safe and does not put people's money, careers, or lives at risk. It stands to reason that allowing AI to try its virtual hand at art, whether music or visual, gives it room to fail repeatedly while improving along the way. We've all seen the evolution of the Will Smith eating spaghetti video, and the improvement over time is undeniable. Music generation is continuing to improve as well.
More and more often these days, clients send me song mock-ups they have generated using online AI music makers. In truth, it is not really that different from sending me a rough recording of their song along with some YouTube references for style and inspiration. My only hope is that we are not led too heavily by the ideas the AI has added.
It is a strange new phenomenon to hear a 30-second track that sounds professionally mixed and mastered, complete with pitch-perfect vocals, only to dissect it and realise just how odd and unnatural the composition is. Sometimes I hear ten different guitar parts appearing throughout the song; chord patterns change between verses, or sections seem disconnected from one another.
Over the years, I have been asked to reproduce many classic songs, and I have become quite proficient at "reverse engineering" a track. I have learned how songs have traditionally been built and uncovered the underlying patterns and logic. There is usually a very satisfying symmetry to those creations.
AI has not quite reached that point yet. At the moment, it is still approximating and parroting musical styles based on the vast amount of material it has access to. I am not saying this will always be the case. I am sure AI will continue to refine its accuracy and attention to nuance.
But I think the reason our industry has been targeted first is that the consequences of getting it almost right, but not quite perfect, seem far less serious in the eyes of the general population.
Sadly for the wider public, AI has already reached a level that sounds real enough. I have stumbled across several AI playlists on Spotify, churning out endless background music that could be left playing through dinner without much scrutiny. It is only musicians like me who notice that the music has no shape or cohesive chord pattern. In some cases there is not even a structure. It is simply a linear stream of musical doodling that fades out after a few minutes.
That music is now contributing to the white noise that real artists have to compete with.
My hope is that there will eventually be a shift in attitude. Just as people are growing tired of fake news and AI slop on social media, they are beginning to crave genuine communication again.
Let us hope that, eventually, this extends to art and music as well.