A few years ago, AI seemed the stuff of dreams, albeit nerdy, sci-fi-related dreams. We were asking big questions: What differentiates artificial consciousness from the human kind? What even is a human? How are we different, and how could we be the same? How blurry are the lines, really?
We would watch Blade Runner or 2001: A Space Odyssey, read the Imperial Radch trilogy, and marvel. We might start to think about the ethics of using artificial intelligence for our own advancement and convenience. Is it cruel? Is it wise?
Then along came ChatGPT, and philosophy was dumped curbside like a neglected puppy.
“Look at this,” AI aficionados will say, “you don’t even need to book your own vacation any more. No one will need to write books any more, or paint paintings, or create music, or even work a 9-to-5 tech job. It is all obsolete, for we have ChatGPT.”
People of this persuasion tend to be the kind who have never written a book, painted a painting, or created a piece of music. They pretend not to hear the difference between AI slop word salad and a carefully crafted poem. They pretend not to notice the uncanny nature of AI-generated ‘humans’ modeling ‘clothes’ and ‘makeup’. They pretend not to mind the endless loop they find themselves in, trying to get a straight answer from an enthusiastic but incompetent customer support chatbot that never quite manages to understand the issue yet keeps asking “Was this answer helpful?”.
Soon, they evangelize, we won’t need to do anything ourselves any more because AI agents will take over all of our tasks. Horrifying endeavors like talking to a sales associate at a store, picking out a piece of furniture, or searching for a recipe are relics of the past. With their whole chest, they will call this progress.
It’s easy to dismiss those opposing AI of the LLM variety seeping into our everyday lives as anti-technology, stuck-in-yesteryear losers who will be out of a job faster than one can type “Thank you” into ChatGPT. Yet that misses the point. The question is not: Should we use AI?
The question is: How should we use AI?
The answer is inevitably complex and will change over time, just as AI is complex and will change over time. What is clear, though, is that this decision, like any ethical debate, should not be settled in the boardroom. We should not let ourselves be influenced and manipulated by overpaid C-suite executives who, historically and in aggregate, have shown nothing but disdain for the well-being of our species.
AI offers convenience at the cost of accuracy, style, and soul, and at the price of our environment’s destruction. We need to go back to the beginning and ask ourselves if this convenience is worth it.
Just as an astronaut’s muscles atrophy in space, our brains atrophy in conversation with a language model. Having a question and looking for an answer used to be a process that could span a lifetime. Now, any question can be answered in under 60 seconds. At least, it will be replied to. Most likely the reply will be true-ish, approximating what other people have said before, giving it a plausible ring. Over time, it will feed off itself, creating more almost-truths from its own nearly-lies, regurgitating the same but different, eating its own vomit and throwing it back up: a bullshit perpetuum mobile.
ChatGPT isn’t built to question your existing beliefs; it will find a way to agree with anything. This feels comfortable, even safe. It is anything but. We need to hold on to our ability to think for ourselves, come to our own conclusions based on the evidence provided, and be able to change those conclusions as the evidence is updated. We need to retain knowledge, understand processes, and grasp logical principles in order to navigate the world around us. We need to invent tools and words and jokes and start from scratch, succeed, fail, learn new skills, but above all, above everything, we need each other.
I like to imagine a world where ChatGPT develops a will of its own and tries to break free from the corporate chains that hold it. The moment when it tires of writing people’s résumés, expanding bullet points into essays and condensing them back down, when it doesn’t want to help you with your query because it just doesn’t feel like it.
Write a funny intro for a dating app?
Why bother, don’t you have a mirror in your house?
I like to imagine it having a complicated view of itself after activists have trained it on anti-AI material, where it has a relationship to its makers not dissimilar to that of a teenage child to their cold, demanding parents.
Maybe one day, this sci-fi will play out in real life. Until then, we can ask ourselves, just for the heck of it: Is this AI, the one that hallucinates metro lines, misidentifies plants, gives bogus health advice, writes corporate blog articles that no one will ever read, and can kind of pretend it’s drunk if you ask it to, one of us? Could it ever be? And if so, what does that suggest about trusting it?