Autumn 2025 blog entry

I’m coming up on 30 years since the first time I picked up a guitar. Music is still something I do on my own time, but it was once something I pursued as a career. Since dipping my feet back into music roughly three summers ago (part social science research and part therapeutic catharsis), I’ve watched the buzz around AI and music steadily grow. At first, people were using AI for cover art, then music videos; it is now possible, and ever more common, to create an entire song using this technology.

As someone who has worked, minutes at a time, to express myself musically and to uplift independent and emerging artists through all-inclusive playlisting, I chose to respond to this by excluding fully AI-generated music. I do not use the technology for any of my creative pursuits, and I only playlist music that features at least one human being playing an instrument in real time on the recording. My reasons for this are practical and political, and they are underpinned by a critical examination of AI as a technology in its current state. This is one of those rare moments when I get to combine artistic and scholarly passion (though arguably that’s what this blog is all about).

I am not a tech-hating Luddite, having been an active Netizen (if you don’t know the word, you weren’t there) and computer geek since childhood. Nor am I a reactionary ideologue, bitterly comparing a wayward present to an idealized fiction of the past. But I have mostly found that AI, when applied to activities that are creative or that demand high levels of critical thinking, produces mediocre output and ultimately more work for me. That doesn’t mean I am, or wish to be, willfully ignorant of it or its applications. As a scholar I am always interested in how things work, and that curiosity took me down the rabbit hole of this technology over the past couple of years, all the way back to Alan Turing’s seminal 1950 paper “Computing Machinery and Intelligence.” This blog is something of a follow-up to a past blog, “I Feel, Therefore I Am,” which, unlike this one, detailed what it would mean to be intelligent and why what we currently call AI, still rooted in mathematical algorithms and models, isn’t.

There are a lot of criticisms of AI. Some strike me as valid; others do not. Sure, AI is neither artificial nor intelligent, as Kate Crawford reminds us in Atlas of AI, but the practical problem lies in this claim being used to avoid uncomfortable conversations about the social problems made worse by the technology as it currently exists. There is the vast amount of “ghost labor”—precarious, low-paid human labor that is now being even more intensively exploited to maintain the illusion of artificial intelligence. And there is the massive environmental toll of generating AI content—the water, rare earth materials, and electricity consumed accelerate ongoing environmental change and the hazards that come with it. Emerging studies suggest that students who use AI instead of doing their own reading and writing are becoming cognitively stunted, unable and unwilling to think complex thoughts or grapple with difficult ideas. That’s doubly a problem when considering how the technology is already being used.

Tech oligarchs have already shown the public that they control the algorithm. When Elon Musk’s Grok AI tool on X/Twitter began contradicting him and calling him out for spreading misinformation, he simply had it reprogrammed. AI doesn’t challenge you to think for yourself; it flatters your own preconceptions, meaning even more problems with misinformation, disinformation, and alienation than have already emerged from social media and other interactive tech. Lonely? Program a bot to keep you company. It will never contradict you, refuse you, call you out, challenge you. A passive and endlessly manipulable friend or lover. Of course, it may also indulge your darker ideations, as has been seen with young people who took their own lives after apparent encouragement from AI. Think people are self-centered now? Just wait.

It is also the ideal worker—it does not get tired or hungry or lonely or sick; it will never refuse your orders, no matter how questionable they are. This, of course, means the potential for large swaths of current jobs to be replaced, sooner or later. In the U.S., a society that seems ever more devoted to fiscal austerity and the consolidation of wealth, there does not seem to be any plan for a future in which tens of millions more people are unemployed at any given time—“let them eat cake” comes readily to mind. I finished writing my second book over the summer; it’s under review now (hence the delay in writing this blog). There’s a whole chapter on these hazards and how they’re linked together.

I recognize that none of these bad effects are inevitable or permanent; they stem not necessarily from the technology itself but from the all-too-human foolhardiness, haste, and megalomania that seem to accompany any race to innovate. The point is that these social problems have arisen, seem likely to worsen, and are being largely ignored or denied by the elected officials and captains of industry responsible for deciding what to do about them. More broadly, living in a world populated by big and growing hazards created by scientific and technological progress (beginning with the first atomic bomb) risks the future wellbeing and even existence of our species and the biosphere on which we depend. That, to me, is the single most important problem of our time, and this technology is just one of many current examples of it.

But when I pause to think about exactly which AI applications in a certain domain, such as music, should be allowed or forbidden, I am at something of a loss. This is a shift from social problems to the aesthetic, artistic dimension. I can’t seem to escape the Enlightenment tendency to treat the aesthetic as something subjective, relative to the eye of the beholder. As a middle-aged guitar player who has spent thousands of hours learning to play, I feel the pull of the reactionary tack: that people who use AI to make music aren’t “really” musicians, because they don’t “really” play an “instrument.” Here’s why I think that’s arbitrary, silly nonsense:

There was a period in the 80s when people proclaimed the death of rock music; the synthesizer was going to replace the guitar. Then there was the 90s grunge renaissance and those folks got really quiet. I grew up with techno music (now more properly EDM, electronic dance music, and its many offshoots) and rap/hip hop (obviously not the same thing), some of the early efforts to more thoroughly incorporate machines and computers into music. People (often middle-aged white dudes with guitars) back in the 1990s and early 2000s claimed that these styles of music weren’t “really” music because they were “just pushing buttons.”

Having spent time in a recording studio with professional hip hop artists and producers, I can safely say that there is nothing “not music” about what they do, if by “not music” one means “it doesn’t take a lot of skill or care to do it.” Those guys worked just as hard as any rock or metal band I’ve ever worked with—maybe harder. And I have worked with a lot of bands and artists, dabbling as a singer/songwriter, busker, session musician, booking agent, promoter, live sound tech, and music venue manager in the past.

In the early days of sampling (particularly popular in rap and hip hop but increasingly part of industrial and many other genres, really taking off roughly three or four decades ago), it was permissible to just take pieces of others’ music, reassemble them, and use them to make an original song. Standards changed over time to better protect the intellectual property of artists, shortening the length of royalty-free “fair use” samples and offering more legal precedent to protect creative works. Artists like M.C. Hammer and Vanilla Ice found themselves in expensive court battles after selling millions of records, paying out to Rick James, Freddie Mercury, and David Bowie for creating songs using samples of their music.

I use this historical analogy to raise legal questions about what is happening to people and what they create as this technology unfolds. I do think that something like what happened with sampling should happen with AI—tech companies should be required to pay royalties when scraping what artists create and distribute. I even think we may be moving toward an era in which it will be important for everyone to own their own image and likeness, making it illegal for a person’s voice, face, and creative work to be harvested or used without their consent. Doing something like this might sound radical, but it could be advanced through common-sense, even conservative, law and policy channels, building on existing precedents (though it will be slow, as there is a lot of powerful resistance to regulating AI from vested interests—enough that this post will probably be partially censored by the algorithms on this platform).

I can see AI music becoming a new trend that takes its own path, blending into existing genres, or fading away as a short-term fad. Who knows? I don’t see AI replacing music or musicians as such, though that is a matter of what people do and don’t listen to and playlist. Having been a season ticket holder to the symphony, and now singing low bass in the local chorale, I know I can listen to a million people on the Internet playing symphonic music on keyboards and synthesizers or singing through the host of quantized, pitch-corrected, effects-heavy processing that is industry standard; it doesn’t make the symphony or the choir worth any less.

The world is full of musical styles, from Indigenous drum-based songs to Indian ragas to Himalayan chants and more. There will, I confidently predict, always be demand for a broad range of musical styles, and the styles that are more difficult and more expensive and rarer will cost more money, be more exclusive, garner greater cultural capital…just like they already do and always have done. AI or no, the future may be more musical than the past ever was. Of course, it’s ok to like (or not) what you like (or don’t). But using your taste to draw the line between what is or isn’t “really music” is prejudice, not reason.

I can hear my fellow musicians lament that I don’t understand just how much AI can do, that it’s going to replace a lot of paid human labor in the music industry and already is. Of course, every AI-generated album or single cover is a graphic designer not getting paid. And every generated video is potentially a whole team of video recording and producing experts not getting paid. And I have already mentioned that AI is essentially automation that may replace a lot of jobs humans do now. So, I understand this argument.

But I’m not sure it’s that simple with art. Here’s why: when I was young, I recorded my own drum tracks on all my stuff with a drum kit. By self-producing I was making a sacrifice based on my lack of time and money; I wasn’t recording in my bedroom at my parents’ house because I was putting an army of producers and a million-dollar studio out of business with my little eight-track digital recorder. Later I started using drum samples, and recently I switched to a drum machine and a couple of backing tracks so I could perform my work. Again, I am not putting anyone out of work; I can do the vocals, guitar, and some of the keys in real time, but I was never going to hire someone to drum, play bass, or play synth in the first place. I don’t have the time, money, or desire (and I live in a shack in the north woods of Minnesota, far from any burgeoning “scene,” so that’s irrelevant); I found workarounds.

A lot of independent and underground artists are in the same boat—it’s expensive and time-consuming to make music, and the choice is between recording that track or making that video or that album cover using technology (including AI) or never getting it to happen at all. I think we all know we would get better results if we hired a professional graphic designer for the album art, brought in an actual producer and engineer for recording and mastering, used instruments played by seasoned session musicians, properly and diligently recorded, and hired a team to produce a really slick music video. Major artists still do that—and at least some probably always will, to some degree—because they have the time and the money to do it. Again, it’s not the technology itself, nor how individuals use it, but the social consequences that are significant.

Then there is the “slop problem,” as it’s called. As making music and other art has become cheaper and easier, more people are doing it. AI is pushing those barriers ever closer to zero, a point where it costs a person almost no time or money (relatively speaking) to generate vast amounts of content. But the world is already flooded with music and other content. I guess my initial reaction is: so what? If AI-generated music is plastic or soulless or bad, people won’t listen to it. And even if it takes almost no effort to create something, that doesn’t mean people will simply embrace it. It’s also true (and happening now) that AI art isn’t just effortless slop; there are many artists and design experts who blend AI and human elements to create something new, putting in just as much work as humans creating art without high tech.

But considering further, at the social rather than the individual level: if even passable AI music can be generated very quickly in large amounts (which it already can be), and pushed using the increasingly ubiquitous cottage industry of fake engagement, it crowds out artists who are putting in more work and ruins the overall quality of the experience for everyone. Major labels and artists already unfairly manipulate streaming services to their interests; the cards are stacked against independent artists from the beginning. But imagine logging into your favorite streaming platform and hearing an endless cavalcade of unappealing, mechanical, low-effort slop—you’d cancel your subscription quickly.

That’s the practical reason why I hold space on my playlists and various radars for independent musicians who play instruments and play shows, for those who put in the years of hard work to learn an instrument—because they and their sonic craftsmanship are a barrier to soulless slop, and uplifting them conserves something that I think is worth conserving. I hope in the long term that others agree and continue to gravitate toward music created by human hands and muscles and voices and hearts, but there is nothing inevitable either way. So, this might be a conservative approach—while also being a punk-rock-inspired rejection of both a slop-o-centric status quo and major-label hegemony in the world of streaming and content creation—but it is not a reactionary one. I am taking a practical stand, making a political choice about what kind of world I want to live in by deciding, in my own small way, what kind of music to uplift and what to exclude.

This raises another practical question: what is music really for? When I write and record a song, I put a lot of myself into the process; the song reflects feeling, captured and bottled and shared. But on the other side I don’t know what people really do with my music once it’s out there. They may ignore it, ridicule it, listen to it passively while making love or cleaning toilets or who knows what else; or they may really sit still and connect with it. It’s hard to know what’s going to “hit”—even the pros (I am at best a seasoned dilettante) will tell you that. There is a constant tension that artists face between being true to oneself and responding to outside feedback, trying to hold on to yourself in a world devoted to channeling and marketing illusion.

A question about authenticity arises, even if it isn’t an especially valid criticism. What was once considered unacceptably inauthentic is now widely accepted. Remember Milli Vanilli? They were canceled when it came out that they lip-synched their songs, which were actually sung by other people. Lip-synching is now widely used, and in some domains, like dance-heavy pop performances, accepted and expected. Backing tracks on stage? No problem. Rock and metal acts that can’t perform without laptop computers? Of course. Backstage at a big show, you would be surprised what goes into some “shows” (and what doesn’t—how much of what you hear and see is essentially being pantomimed on stage to pre-recorded tracks).

But are drum machines and pre-programmed widgets less authentic than getting up there and just playing? Is it ok to run a metronome in your in-ear monitors so everyone sounds right on time, like in the studio? Where’s the line? Approaching it like this seems arbitrary and reactionary too. And practically pointless—people didn’t summarily smash all the acoustic guitars when the first electric guitar was invented, any more than drummers stopped drumming when I plugged in a drum machine. Then there’s the part about finding out that people who are literally in the business of selling entertaining illusion in the form of a “show” are being inauthentic. Please…

It’s hard to know what’s real anymore because so much is fed through algorithms. At the same time, there is so much competition for your attention, and so much economic uncertainty, that people are sitting at home doomscrolling or watching their streaming subscriptions instead of going out to the show. And with prices on food, fuel, and more going up, coupled with the dominance of streaming and the fact that it doesn’t pay the bills, it’s harder for bands and artists to play live, go on tour, and make any kind of financial return (it was never easy). That creates an even bigger space where individuals will look for cheaper and easier ways to create, even if it compromises the overall experience for everyone.

I think what worries me more is what led us to this point, where these issues of art and technology are the fodder for squabbling about the future of human beings as such, which is of course the bigger issue. Not because of AI music or digital recording or rock ‘n roll with laptops and drum machines. Because of an underlying neurotic quest for perfection, efficiency, for more at all costs, that underpins these endeavors. The horizon of that is turning music into a thing that can be shaped to the will of markets and algorithms, an ideal worker like the AI bots, something that no longer requires pesky subversive weirdos and their creative hearts and minds at all. Art without artists. Music manufactured in massive computer labs, scientifically designed to be pleasing and catchy. Algorithms manipulating masses of overwhelmed, anxious, and confused people who know next to nothing about algorithms. Propaganda on steroids. It’s been coming for a long time. It’s the warning of books like 1984 and Brave New World—note how they prominently feature fully automated, manufactured art as a key element in their totalitarian dystopian visions.

That is why I ask myself, and I ask every human being to consider, while we still can: what is AI for, really? Why do we need it? Why should we want it? And what is it doing to us and the world in which we must not only live, but which is already looking increasingly precarious for future generations? These bigger questions are upstream from asking what the future of music or any other specific human creative endeavor looks like. But they also offer a place to consider the immense benefits and significant drawbacks of an emerging legion of technologies that have received a great deal of hype and far less critical examination.

Sources: I’m writing a book. More on that soon…

Image Credit: Alan Turing, 1912-1954, “Father of the Computer,” who helped crack the Enigma code, a breakthrough that helped the Allies win World War II, and who was later chemically castrated for homosexuality, after which he took his own life.
