Winter 2024–25 blog

René Descartes, the infamous author of “I think, therefore I am,” was wrong; and with him, a whole lot of assumptions that continue to shape the world we’re living in today in some major ways. In fairness to Descartes, his purpose was to systematically challenge his own beliefs, not to unlock the secrets of what it means to be alive. There’s a book I bought off a discount rack at a now-defunct bookstore many years ago, something I thought might be worth the $3.99 I paid for it as a disaffected philosophy student, something I never got around to reading until fairly recently. The book is by a brain scientist named Antonio Damasio, who has spent his life studying how human brains work. Again and again, in study after study, he shows that it is feeling, not thinking, that makes us who we are.

A person with serious damage to the parts of the brain responsible for thinking and calculating can still learn how to get along in life. A person who suffers damage to the parts of the brain responsible for feeling, however, also loses the ability to make decisions. Social psychologists have long recognized the ABCs of how humans work: affect, behavior, and cognition. We experience the world through our bodies; signals travel through the nervous system, and all that input from our senses passes through the more primitive “hindbrain” structures responsible for instinctive, fight-or-flight responses before it reaches the more complex neocortex. We feel, act, and then think about it later. Through the process of experiencing the world emotionally, as beings capable of learning from what brings pleasure and what brings pain, we learn how to navigate it; and this shapes who we become as people. Most of what we do is habitual and routine, and much of what goes on in our heads isn’t deliberate at all. You don’t have to think about breathing to breathe, but it’s possible (and uplifting) to stop and recognize all those hidden parts and processes of being alive that we take for granted most of the time.

The point I’m making here is that a lot of the hype around “thinking machines” and artificial intelligence seems to be Team Descartes, ignoring that he got it really, really wrong when it comes to what it means “to be.” To that end, one of the amazing and disturbing things about living in the world today is that the word “algorithm” has made its way into so many places. Most people know what algorithms do. An algorithm probably decided that you would be reading this post (or not) based on whether it contained keywords suggesting it might be interesting to you, and so it popped up in your social feed or search engine results or whatever. Algorithms decide what you see, watch, and read; they steer you toward what to buy and where to invest; they decide how much to charge you for insurance, how to win baseball games, how to prosecute wars, how to catch welfare and tax cheats; and even whether you’re an efficient enough worker not to get fired.

But an algorithm is just a complicated mathematical model. “Model” is another word people learned about during the pandemic. Models take a bunch of information and find ways to meaningfully simplify it; they are useful fictions that can be used to learn things about a complicated world and make more rational decisions in it. Really fancy models go through what are called “iterations,” where they cycle through different ways to piece the data together until they find the “best fit” (which actually means “least worst fit relative to everything else already tried”). Besides my background in sociology and philosophy, I teach statistics, and I spent enough time in graduate school studying model-building to have the equivalent of a master’s degree in it. I’m not bragging, just noting that what follows rests on a good deal of study. AI is essentially algorithmic. It’s not a person, and it’s not “intelligent”; it’s just a model made up of a lot of math, albeit math done faster and more precisely than any army of humans could manage.
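For readers who want to see what “iterating to a best fit” looks like in practice, here is a minimal sketch in Python, with made-up numbers and the simplest possible search: try a series of candidate models and keep whichever one is least wrong about the data. Real systems automate and scale this search enormously, but the underlying logic is the same.

```python
# A toy illustration of "iterating to a best fit": try a range of candidate
# slopes for a straight-line model and keep whichever one is least wrong
# about the data. Real models automate and scale this search (for example,
# with gradient descent), but the underlying logic is the same.

data = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8)]  # made-up (x, y) observations

def total_error(slope):
    # Sum of squared gaps between what the model predicts and what we saw.
    return sum((slope * x - y) ** 2 for x, y in data)

best_slope, best_error = None, float("inf")
for candidate in [s / 10 for s in range(51)]:  # try slopes 0.0 through 5.0
    err = total_error(candidate)
    if err < best_error:  # "least worst fit relative to everything tried"
        best_slope, best_error = candidate, err

print(f"best slope: {best_slope}, error: {best_error:.2f}")
```

Notice that nothing in this loop “understands” the data; the winning slope is just the survivor of a blind comparison. Scale that up by a few billion parameters and you have, in skeleton form, what the fancier models are doing.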

We are on the verge of creating “Artificial General Intelligence,” or so the techno-optimists tell us. But artificial intelligence is, as AI expert Kate Crawford reminds us, neither artificial nor intelligent to begin with. The abilities of any AI are rooted at every level in human labor and material processes, from the water, energy, and rare earth minerals these systems greedily consume, to the human beings toiling behind the scenes to endlessly correct and perfect the models (not to mention us writers and artists whose labor has been harvested without our consent to train those models). AI is a kind of really complex model, and that’s it. Even if it can convince a human that there’s another human on the other side of the keyboard or the screen, even if it can churn out vaguely human-sounding text or odd-looking images that might be believable to a few, it is not a person or a sentient being, because these abilities aren’t what define a sentient being, a being deserving of rights and capable of being held responsible for wrongs. Right?

At what point, however, does it become sentient? And how different are we from a model? Humans need water and energy and rare earth minerals to live too, after all. We learn how to be people from other people. And harvesting human creativity from the internet looks a bit like “learning,” like a big old nervous system gobbling up sensory input and processing it. AI might meet Descartes’ definition of a human being: it may be able to think, if thinking is in the end just assembling a bunch of information and making sense of it. But Descartes got it all backwards, and the current hoopla around these technologies seems to have eagerly and unquestioningly followed suit.

Here’s why: feeling, not thinking, defines life, not just in the human sense but all the way down to the most basic forms. It’s not just the ability to respond to the environment, but the process that leads to the kind of response that happens. The information of sight, sound, and touch doesn’t just “go into a thinking thing” that reacts based on that information. Yes, humans certainly have more complicated inner lives than walleye pike or amoebas. But that’s not just because of our ability to process more information; it’s because of our capacity to take in a greater range of more nuanced information too. Dogs can absorb a lot more data through hearing and smell than we can; their inner lives are undoubtedly fascinating in ways we can’t even imagine. Similarly, we have an ability to feel that is almost second to none, and that gives us something special. I learn not to hit people because being hit hurts; I learn not to take without asking because I don’t like it when others do this to me. The ability to abstract based on how we feel and use that information to guide our decisions is the basis of empathy. Someone else doesn’t go in and carve out a new subroutine in my brain to create or recreate these decisions. The choices I make and the ways I change my mind based on new information follow (however imperfectly) from the sensory information I receive from experiencing the world, and from how my nervous system makes sense of that information. Living is a dynamic process of ongoing connection, experience, and stimuli, not just dumping a bunch of information into a processor that runs some calculations and regurgitates a response based on it.

Algorithms can’t do the sensitive ethical, political, and other kinds of difficult decision-making work, which is exactly the kind of thing humans are most prone to disagree about and most need help with. Take driving, for example. Why aren’t there self-driving cars? There are. Why aren’t they everywhere on the roads already? Because driving is an ethical activity. Ethics is not an algorithm. It is an ongoing process rooted in real-world experience: feeling. How do I decide whether to risk my life on an icy road to avoid hitting a cat or a squirrel? How do I decide where to angle my car when faced with loss of control? It’s possible after the fact to offer “hard cases” and then try to think about what the right thing to do would be; that’s what philosophers who study ethics do. But driving, and studying what I should or shouldn’t do while driving, aren’t the same kind of activity. Self-driving cars will decide what their manufacturers tell them to decide, and they will react in the ways their manufacturers have programmed them to react. Humans (for better or worse) don’t. Even with iterative self-correcting algorithms, cars will never respond “I shouldn’t have hit that cat; it could be someone’s beloved pet” or “I shouldn’t have crashed the car and severely injured the driver to avoid a squirrel” in anything other than a technical, as opposed to ethical, sense. That’s because the car has no emotional connection to cats or squirrels or drivers. If a really “smart” car were on the road for long enough and were able to gather and process a whole cosmos of information through complicated sensory inputs, it might start to approach that threshold. But what are the ethics of driving a sentient car?

Take another, more close-to-home example that directly confronts Descartes’ slogan: learning. It’s easy to imagine that learning is like training an AI model; if Descartes were right, professors like me would be there to shove a bunch of information at students and hope those students run their iterative mental algorithms and retain at least a small portion of that information. Learning does not work like that. Emotional and physical connection are hallmarks of learning; that’s probably why students who attend courses in person do better (particularly younger students, up through college-age emerging adults). It’s not so much about the teacher (though surely treating students with respect matters) but about the peer group and the ability to have the sensory experiences of being in a room with others who are physically engaged in learning, and the habits of body and mind that come with that. I remember chewing gum when studying for tests and chewing the same kind of gum when taking the tests, because I knew that associative memory emerges from sights and sounds and other input that go through the nervous system and activate emotion first. Connecting new information to old information in a meaningful way (even if the old information isn’t directly intellectually related) is part of the process.

Even the best algorithms don’t teach; they just learn what you know or what you’re interested in, and tailor their feedback to what you respond to. They’re good for selling you products or outflanking you with propaganda, but bad at feeding your mind. It’s confirmation bias on steroids, and learning is not about validating opinions; sometimes it is about challenging them. And yes, chatbots can write mediocre or derivative prose. It’s easy for me to define what counts as learning in my classes when it comes to these tools. They are presumably useful tools, after all, and students will find out about them sooner or later (though I personally have yet to see AI do anything other than create more work for me). Copying and pasting chatbot output is not learning, because the sensory experience is so shallow. It’s also plagiarism, because it’s passing off work that isn’t yours as yours. Using a chatbot prompt as a start, then building on it and restructuring it to make sense of a subject in a specific context, means engaging the senses: sight, touch, emotion. That is learning.

I’m not afraid of AI as a technology or as a tool. I recognize it’s quite possible that it will one day really become sentient, perhaps without also having the capacity for emotion and empathy and the other things I’ve talked about here. That would be really, really bad. But I don’t think we’re anywhere near that right now, and a lot of the hype is just that. What I do worry about is that AI as it currently exists will make a handful of people extraordinarily wealthy and powerful, people who will use that wealth and power to force the rest of us out of work and into some kind of techno-servitude. That wouldn’t be because of AI, but because of the flaws built into human nature when confronted by powerful technology. And it is not some inevitable process that will lead there, but the deliberate decisions of people, including you and me, who could choose otherwise.

It is possible, perhaps likely, that there will be people who decide that my arguments here about feelings and empathy are quaint and should be ignored, or that a complicated statistical model is human in all the ways that count, because literally all that “counts” is increasing someone else’s net worth (that’s what the phrase “economically useful activity,” a current hallmark of definitions of Artificial General Intelligence, means, after all). It is possible all my blog entries and writings have been, or will be, used as “scrapings” to train some other chatbot that will be used to try to cheat through my future classes or to get “content” without having to put up with artists and scholars and these kinds of arguments about why feelings matter. But I will write so long as I can and hope for the best. That’s a feeling too, and it keeps me going forward.

I don’t hate AI either; and though I have yet to find it useful or beneficial for my own purposes, I know there are a lot of purposes out there for which others find it quite useful. I do not use AI to write or create. I never have, and I have no immediate plans to do so. I like feeling my way through life in all its wonder and complexity, and writing and creating, teaching and learning, are essential parts of that for me. The only long-term question is whether we will continue to live in a society that values these things. And that isn’t up to any artificial intelligence, no matter how general. It is up to you.

 

Sources/further reading:

Kate Crawford, Atlas of AI

Antonio Damasio, Looking for Spinoza: Joy, Sorrow, and the Feeling Brain

René Descartes, Meditations on First Philosophy

Cathy O’Neil, Weapons of Math Destruction

 

Image: Descartes, portrait, Frans Hals (public domain)
