Like a lot of writers and editors, I was appalled at the news about the AI-generated summer reading list filled with fake books that was published in several newspapers. From the laziness of the freelancer who had AI write the piece and didn’t review it for accuracy, to the editors who, because it was a fluff piece, didn’t think it needed to be fact-checked, it was a complete shit-show.
Ever since generative AI appeared on the scene, and the “I have an idea, will you write it for me” crowd got ahold of it, we have been awash in dubious claims of AI’s usefulness. Corporations and tech broligarchs have been pushing AI into everything they possibly can, planning and hoping for a complete relegation of the human element of society to the content consumption class from which these anti-human wealth-hoarders hope to extract an ever more invasive form of personal power. It’s not about wealth anymore. At some point there won’t be enough material things in the world for them to hoard, if we haven’t already gotten to that point, and the only thing left for them to do is to play god with our lives and remake the world in their image.
As a novelist engaged in the ongoing struggle to write the next thing and get it published, I’m very tuned in to the way generative AI is attempting to crowd into creative spaces and steal from me and all the other hardworking, dedicated literary artists out there, whether they are traditionally published or self-published. Now, AI is not pushing its way into these spaces on its own. It’s not sentient. People are pushing it into these spaces. AI didn’t bombard Clarkesworld magazine with thousands of shitty stories until it had to close its submission portal. People did that, and it was the kind of people I wrote about in “The ‘I have an idea, can you write it for me’ People are Making a Comeback and I Despise Them Even More Now.”
With AI being shoehorned into everything, it’s also making an appearance in my day job. So, I’m seeing it from a lot of different angles, and none of them are appealing. At home, in my writing life, I have turned off the AI functions (like Microsoft’s Copilot) on any software I use to write or edit my work[1]. Unfortunately, at work, a lot of the settings are managed by the company and can’t be turned off. Copilot is running on my office system, and it constantly suggests content to me. I could quite easily type in a word or two and generate an email or a simple business document made up entirely of the suggested content it spits back at me. There are even settings in Outlook that purport to help me write more effective, readable emails. I resist them all, but my co-workers are another matter.
When ChatGPT hit the public, there were repeated, breathless discussions at work about what it could all mean for us. At the same time, at home in my inbox, I was getting news from the Authors Guild and other writers’ organizations that there were issues that would negatively impact us as artists beyond the mere theft of our work. And, of course, there were the “I have an idea” people already plugging their prompts into ChatGPT and sharing their “poetry” and “fiction” with the world. Close on their heels were the grifters putting up tutorials on how to turn those ChatGPT stories and poems into quick money by sending them to places like Clarkesworld and other magazines with online submission portals.
Since the arrival of ChatGPT, the company I work for has had a few missives and meetings about how to “properly” use it in the business context. There have even been some discussions, fleeting and so far unrepeated, about the legal issues. My company even created its own version of ChatGPT that is, supposedly, isolated to our network and draws only from content and IP the company owns[2]. In team meetings with my boss and others in the department where I work, there have been a few additional discussions. There was even a push from my boss’s boss for us to actively use AI and develop “use-cases” for it (god, I hate business speak). In the aftermath of all that, I still get regular questions from people wanting to know if AI can be used to develop operating procedures for the fertilizer plants and, I assume, relieve humans at the plants of the obligation to write, edit, and revise those procedures themselves.
My answer is always some version of “No,” but they keep asking. I feel like I have to become an expert on generative AI just to be able to argue against it because no matter what, all these people with degrees in business, software development, and engineering see writing and communication as a nuisance task of no value—at least until they need someone else to do something. To them, writing clear instructions is a pain in the ass, while three hours of meetings talking in circular, meaningless business jargon actually gets things done. Part of the reason my coworkers don’t like to write is because they’re not good at it, and they’re not good at it because they don’t do it enough. You can see the vicious little circle. Add to that self-defeating cycle the fact that over half of Americans can’t read above a sixth-grade level, which means they have problems comprehending multi-step instructions and complex sentences, and you have a business culture that relies on one person almost literally holding someone’s hand through the learning process and physically showing them how to click button A first, then button B, etc. These are the people pushing to plug prompts into a program they expect to deliver instructions and communicate information, until, one day, they no longer have the competency to evaluate its accuracy.
They seem incapable of grasping two basic notions. First, AI can’t make the intuitive leap to something new. It can only remix from what it has been fed, and it can only return that remix as an average of all the content it was fed. Within the confined, limited model built from a company’s self-generated, wholly owned content, you might be able to generate something useful, as long as it relates to things that already exist in the company. Once some new device or practice is introduced, new content, previously unknown to the self-contained system, will need to be added. That means a person—a person with advanced reading comprehension and writing skills—will need to create new instructions to feed into the company’s AI so that the functionally illiterate workers can punch in their rote queries to make variations on that original content. Second, writing clearly is a reflection of a person’s ability to think clearly and rationally and to organize their thoughts in a coherent, logical way. An inability to write clearly is a sign that someone cannot think clearly.
The old programming adage I learned in a junior high computer class in the mid-1980s has never been more relevant than it is now: garbage in, garbage out.
In the public sphere, where OpenAI and other companies are pirating and stealing material from writers and scraping social media and other public-facing websites for content, we end up with shit like that reading list of mostly non-existent books, or we end up with AI chatbots that quickly descend into racism and toxicity. And still, there are people out there who don’t see a problem. Somehow, we’re supposed to believe a system that’s been fed everything from racist lies to the most refined expressions of the human soul is going to pick only the best of it all and turn it into something useful. That’s bullshit.
At work, I shared the news article about the fake summer reading list with someone who has, for the most part, been on my side when it comes to generative AI. The response I got back was shocking. While this person was appalled at the outcome and the laziness of the writer, they took the stance that AI itself wasn’t “bad.” It reminded me of the idiotic argument that “guns don’t kill people, people kill people.” AI isn’t bad, it’s the way people use it that’s bad.
My first response to that was that we’re doomed.
Now, I agree that there might, maybe, possibly, be some situation where AI could be wholly benevolent. Computer programs are not developed for the sole purpose of killing things the way guns are. Some might try to say that guns weren’t made just to kill people, that they’re for hunting or home security . . . those are nice linguistic reframings, completely out of step with the modern world, but the sole purpose of a firearm, no matter what context you want to place around it, is to deliver a projectile into a living thing and end its life. In that regard, firearms are irredeemable. Yes, people kill people. They’ve been doing it for several millennia now with everything from their bare hands to sticks, stones, hammers, and forks. Guns just make it easier, and quicker, and unlike our hands, sticks, stones, hammers, or forks, guns have no isolated alternative uses like caressing a loved one, building a shelter, or feeding ourselves. A gun is used to kill, period. Even if the context for its use is confined to an 18th century homestead on the frontier where it’s needed to hunt game for the dinner table and fend off marauders of one type or another, that gun’s sole purpose is to kill.
Right now, generative AI is nothing but a theft machine. Companies like OpenAI have admitted as much in court, claiming that if they had to pay the writers from whom they’ve stolen work to train their theft machines, they’d go out of business and the industry would be destroyed. If that’s so, then good. A business that relies on stealing from someone to exist, like AI, or a business model that relies on people’s deaths, like firearms or for-profit health insurance, ought to be put out of business.
The most common selling point of AI that these thievery companies employ is that businesses can replace humans in the workforce. Strange that the jobs being done by humans that the tech broligarchs want to replace with AI aren’t the jobs being done by subjugated and trafficked low-wage workers but the jobs done by workers whose skill sets threaten the techno-authoritarian regime the broligarchs want to build. Where’s the AI McDonald’s robot to replace the so-called menial burger flippers? Where’s the AI janitor to replace the woefully underpaid custodian? Where’s the fully automated street sweeper? None of the AI we’ve been shown in the last decade is being applied in a way that would reduce the need for a permanent, desperate, oppressed underclass of people. It’s all being applied to eliminate lower- and middle-class workers or the struggling artist. Its goal seems to be to reduce education to technical skill acquisition in the service of machine maintenance at the expense of intellectual enrichment and human development.
Capitalism inherently extracts wealth from the lower classes, but we’ve reached a point in America where the lower class isn’t big enough, or dependent enough, to continue supplying the wealthy with the desired annual growth rate. We aren’t dying and reproducing fast enough. We aren’t consuming the low quality, cheaply produced, disposable crap they make fast enough to satisfy their warped sense of “good profit.”
Sixty percent of Americans don’t earn enough to support themselves.
Fifty-four percent of Americans can’t read at a sixth-grade level.
Desperate people who can’t understand what’s happening to them are easy to control. Right now, AI is a tool that will only exacerbate that. No amount of good or moral use of the existing manifestation of AI will fix that.
And there’s no iteration, reinterpretation, or refinement of capitalism or capitalist thinking that will save us from the capitalist system we currently live under.
Put AI out of business. Save yourself.
[1] I still use the built-in spell checkers, but I end up rejecting most of their suggestions. Most of what I accept amounts to adding commas (but only after rereading the sentence) and removing extra spaces.
[2] No, I’m not going to tell you who I work for. Some people in my writing world, and everyone in my private life, know who I work for, but I prefer to keep a bright, sharp, distinct line between my life and the corporate, capitalist whoring I’m forced to engage in to pay for food, shelter, and medicine.