Saturday, October 5, 2024

Do AIs Dream of Electric Deeps?

April 2020: London, like much of the world, in lockdown. I pulled up the blind in my front room and there it was. Spray-painted on the side of a nearby building in 30-foot-high letters, the single word: HOAX.

That one word, writ large on its gigantic concrete ‘page,’ spoke more to me than all the news reports and think pieces ever could about all the chaos and despair and fringe groups and fragmentation out there. It was exactly the sort of straight-to-the-gut, ‘show don’t tell’ detail you find in good fiction, the motif that tells you just how big and bad the situation has gotten. That whatever you might be privy to on the page/stage/screen, there is one hell of a backstory propping it all up, and that the repercussions are everywhere.


Seeing that graffito in real life was devastatingly bleak—it was my lowest moment of lockdown. It was shocking and ugly and—to me—wrong-headed in the extreme. But I’d be lying if I said I didn’t admire it. That’s great writing.

What is great writing, exactly? Well, there’s a question—but you know it when you see it, when it shocks you and confronts you and speaks to you as that one word did to me. Consider how economical HOAX is. Consider how perfectly placed it is—it ‘dropped’ at just the right time. Consider how intentional it is. Whatever else it might be, great writing must always be executed with intent. And that means using words carefully and exactingly.

For example, using words to compress time and experience—or to expand them. We can conjure the entirety of an era in a short, mulched-down phrase such as ‘the Renaissance’ or ‘medieval England’; or take 426,100 words to track the interior monologue of a mother of four in Ohio, as in Lucy Ellmann’s 2019 novel, Ducks, Newburyport. Hundreds of thousands of words to conjure a day in the life (Ulysses); 50 in the obituary column to summarize an entire human existence. How long does any piece of writing need to be? Exactly as long as it needs to be to make its point, to paint its picture, to flesh out its worldview.

And it’s not just the putting together of words: one, a thousand, a hundred thousand. It’s choosing the right words and putting them together in the right way, in order to make them bump and spark and communicate as closely as possible the author’s intent. When it comes to fiction, readers don’t want merely to be told something—that’s the province of nonfiction (though even there we tend to prefer it if the facts are presented in colorful, graspable ways). Readers want to be shown something. Readers want to be made to feel and taste and live something.

All of which brings me, of course, to that confounding, exhausting, infuriating, marvelous, mercurial, tireless, connection-making god/gizmo, ChatGPT. You know—that thing that operates on such a vast scale that it seems to know our species’ collective unconscious mind map. That thing that spits words out like popcorn. Chances are, especially if you work in an industry that demands any sort of written communication, that you are already using ChatGPT or a similar large language model (LLM) to assist with some task or other in your everyday routine. If it hasn’t yet crept into your workplace, you’ve probably at least monkeyed around with it a bit to see what all the fuss is about.

Or perhaps you’ve resolutely refused to look at it at all, because the very idea of it disgusts and appals you, maybe even existentially disturbs you. (If you fall into the latter camp, I understand your reaction perfectly. I’ve run the gamut of emotions when it comes to AI-generated output in regard to my own profession. And I’ve uncovered a few new emotions I didn’t even know I was capable of into the bargain. These new emotions have no name as yet, but an attempt to christen them might look like this: waaaarerfggb?, frlhullffernok?, splrk.)

Late November 2023: My screenwriter friend Jenny tells me that she copied a working draft of a scene into ChatGPT and asked it to suggest improvements. A couple were fairly promising, so she asked for a rewrite with the changes incorporated. But ChatGPT is a fickle beast and in this instance it simply spat back the original scene, unchanged save for a few misspellings and pointless word replacements. I tell Jenny that she ought to have kept on prompting. Maybe something along the lines of -> your last response didn’t utilize the improvements you suggested. Please try again. Ah, but I did keep on prompting, she replies; and it led precisely nowhere.
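For the technically curious, this kind of back-and-forth can also be scripted rather than typed into a chat window. Here is a minimal sketch of such a re-prompting loop, assuming the official OpenAI Python client; the model name, file name, and wording are placeholders rather than anything Jenny actually used:

```python
# Minimal sketch of an iterative re-prompting loop (placeholder model, file, and prompts).
# Assumes the official OpenAI Python client and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

scene = open("working_draft_scene.txt").read()  # hypothetical file containing the draft scene

# Keep the whole conversation in one message list so the model can see its own suggestions.
messages = [
    {"role": "user", "content": f"Here is a draft scene:\n\n{scene}\n\nSuggest improvements."}
]
reply = client.chat.completions.create(model="gpt-4o", messages=messages)
suggestions = reply.choices[0].message.content

# Ask for a rewrite that incorporates those suggestions.
messages += [
    {"role": "assistant", "content": suggestions},
    {"role": "user", "content": "Rewrite the scene with those improvements incorporated."},
]
rewrite = client.chat.completions.create(model="gpt-4o", messages=messages)

# If the rewrite ignores its own notes (as it did for Jenny), push back and try again.
messages += [
    {"role": "assistant", "content": rewrite.choices[0].message.content},
    {"role": "user", "content": "Your last response didn't utilize the improvements you suggested. Please try again."},
]
second_try = client.chat.completions.create(model="gpt-4o", messages=messages)
print(second_try.choices[0].message.content)
```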

Could Jenny have eventually gotten a good dramatic scene out of this thing, if she’d kept on keeping on, trying to box the bot into a corner until it coughed up gold? Yes. No. Maybe… It’s a complicated question. Utilizing ChatGPT to communicate factual information is one thing. ‘Hallucinations’ aside (chatbots tend to just go ahead and make stuff up, because of the inbuilt biases and limitations in their original training data—and because of their lack of genuine cognition), chatbots are very good at producing clear, grammatically correct copy. So if you’re willing to moderate and fact-check an AI’s output, then sure. This is why LLMs are stepping into so many workplaces across so many different sectors. These machines can produce workable first drafts in the blink of a cursor and are already revolutionizing entire industries.


But back to the creative writing angle. Trying to get an AI to emulate the intent of an author/playwright/screenwriter/poet/etc. is something else again. If you try, like Jenny, to enlist ChatGPT in this way, you’ll discover this is roughly akin to herding cats. Or an endless game of "The Sorcerer’s Apprentice" from Fantasia. Set this thing to work and the digital broomsticks swing into action. The words proliferate, the alphabet soup rises swiftly around you, growing deeper and deeper—or, better still, let us say it: piles of turds… Stock phrases, clichés, word redundancies, word repetitions and word repetitions and word repetitions and endless, metronomic clauses: the piles of word-turds mount up around your ankles, your shins, your knees… Very soon you’re waist-deep and—yes, if you pick through them, maybe there is a good idea in there somewhere. Or half a good idea. Or something, at least, that will give you some sort of a steer. Throw enough shit at the wall? It’s not writing as we know it, Jim. But it’s not entirely pointless, perhaps.

I speak to a lot of writers and creators and some of them are already using ChatGPT in just this way—as an occasional sounding board when they’re trying to figure out where to go next. Some of my writer friends are anxious about this and keep it a closely guarded secret: Yes, I sometimes use ChatGPT to help when I’m stuck, they tell me, like they’re at confession. It never gives me exactly what I want and most of its ideas are pretty basic—but it’s like tossing a coin, isn’t it? You find out what you don’t want to do. Please, don’t mention my name, I know it isn’t kosher, I’m a naughty writer. I’m a bad human!

Other writers are open about it and risk the frowns. Discussing AI on a podcast for British magazine The Spectator, the author Ajay Chowdhury tells me that he frequently consults ChatGPT when writing his crime novels. He’s unapologetic about this: if it’s the middle of the night and he’s up writing and his wife’s asleep and there’s no one to phone… Why not bounce an idea off the bot? Authors do this sort of thing all the time with other humans, so what’s the difference? Just to get an opinion, just to get a bunch of turds with—maybe—that one good clue hidden within. It’s not as if he’s using the actual output to populate the pages of his novels, is it? He’s not taking the turds at face value: the turds are just the waste product of trawling ChatGPT’s vast murky ocean of learning for something that sticks.

What if we were to take the turds at face value? What if we used ChatGPT not as a mere consulting device to jog our artistic decision-making, but as a fiction-generating machine in its own right? Now we’re moving into strange new waters. Will we one day see plays, poems, novels, movies routinely written by AIs? That’s the fear, of course. That the machines will eventually render us redundant, confining artists and musicians and singers and actors and writers to the dustbin of history, ushering in a beige new world.

I spent an extended period with ChatGPT testing its capabilities along such lines, pushing it to collaborate on a novel-length story which started from the most absurdly juvenile prompt: -> tell me a story about a blue whale with a tiny penis. This was at the end of December 2022, and I didn’t plan on any of this happening. I was only doing what tens of millions of others worldwide were doing when ChatGPT came crashing so rudely into our world: I was only goofing around. But ChatGPT made a few fortuitous early choices, which happened to appeal to my funny bone, such as naming the whale Benny and coming up with the ‘Penitents of Benny’ when asked for details of a religious cult that venerated his tiny penis… It was all too, too silly, too, too delightful. And before I knew it I was hooked.

So I spent the entirety of January 2023 beneath the digital waves with Benny and his friends, and this protracted natural experiment forced me to consider all sorts of questions about truth and falsehood—and about fiction, my lifelong obsession, that magical fault line that is neither truth nor falsehood. At some point I decided this was worth writing about, and you can find the whole journalistic/analytical/metafictional melee in my book Benny the Blue Whale, which includes a cut-down version of the ‘Benny’ story (the original tale weighed in at a whopping 125,000 words—imagine all the word-turds that monster contained) alongside my extensive annotations and commentaries on what I thought any of this meant.

Check out Andy Stanton’s Benny the Blue Whale: A Descent Into Story, Language and the Madness of ChatGPT here:

Bookshop | Amazon

(WD uses affiliate links)

December 2023: Some months after the finished book went to press, I’m still trying to make sense of it. The technology is moving so fast. The panacea is here and it’s digital, or so we’re told. Generation AI! Instant art! Pain-free creation for all! Turn on, plug in and drop your latest prefab masterpiece!

An info-tweet/advertisement/who-knows-what-anything-is-these-days popped up on my X (formerly Twitter) feed yesterday: Regular ChatGPT responses are robotic and dull. So we created a prompt to help clone any writing style. Oh dear lord. That’s not how I made the ‘Benny’ story. I sat there and I immersed myself in it, pretending/half-convinced that I was ‘really writing’ a ‘real story.’ And that engagement—to me—is why my shaggy whale story was interesting. Because the bot’s responses influenced my choices, and my choices influenced its subsequent responses… Ultimately I fed enough of my own language, storytelling instincts, and preoccupations—enough of my soul, if you want to get really dark—into the ‘Benny’ story for the bot to ‘echolocate’ me and ping back a glitchy, blocky burlesque of my writer self, which I’m convinced tells us something about the nature of any artistic creation. In some ways the AI was only there to expose the process.

But let’s try this prompt they’re peddling, which after all only represents the next logical step in this mad global experiment: the quantification and commodification of “writing styles.” I follow the instructions, informing ChatGPT that it is now a ghost writer, able to mimic the tone, style, and characteristics of whatever examples I feed it. I paste in four chapters’ worth of one of my own Mr Gum kids’ books, then tell the bot to go ahead and write more in this vein, there’s a good chap. Then I hit ENTER, sit back, and await the exciting results…

…which are not exciting at all.

Not to say they’re uninteresting, but as so often with this thing, it’s what it cannot do that’s so apparent. Yes, having been fed a few thousand words of my highly labored-over, exacting, and rigorously intentional surrealistic children’s comedy, the bot’s voice is now quite different from its "usual" register. It’s adopting a light, playful, whimsical tone… But none of it’s nearly good enough, you know. It’s all just so much cookie-cutter. Right, but oh so wrong. No discretion. No taste. No pacing. No intent. No style, beneath the "style." New veneer, same old turds underneath.
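Stripped of the marketing gloss, that kind of "ghost writer" setup amounts to something like the sketch below, written against the OpenAI Python client rather than the chat window; the system prompt, file name, and model name are placeholder reconstructions, not the actual prompt being peddled on X:

```python
# A rough sketch of a style-cloning "ghost writer" prompt (placeholder model, file, and wording).
# Assumes the official OpenAI Python client and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

sample = open("mr_gum_chapters.txt").read()  # hypothetical file: a few chapters of source text

messages = [
    {
        "role": "system",
        "content": (
            "You are a ghost writer. Mimic the tone, style, and characteristics "
            "of the example text the user provides."
        ),
    },
    {
        "role": "user",
        "content": f"Example text:\n\n{sample}\n\nNow write a new chapter in exactly this vein.",
    },
]

response = client.chat.completions.create(model="gpt-4o", messages=messages)
print(response.choices[0].message.content)  # the new veneer; judge the results for yourself
```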

Will we truly reach a point where an AI like ChatGPT can successfully emulate authorly intent? There is a very, very disturbing video on YouTube in which Tristan Harris and Aza Raskin, co-founders of the Center for Humane Technology, explore the potential dangers of AI. You can find it by searching for "The A.I. Dilemma—March 9, 2023," but please do approach with caution: It may ruin your day.

I won’t give you too many spoilers, but one of the most extraordinary points the speakers make is that LLMs are, or soon will be, capable of translating everything into language: sound, images, the electrical signals in human brains, DNA, computer code, biometrics, animal communications… It’s all translatable, it’s all reducible to language.

Ah, yes. I use the word “reducible” advisedly and with bias, because writing—really good writing, really great writing—is about elevating language, using words to hint at the shapes and splurps that lie beyond words: the ineffable. What is this “language” that everything can be translated into by AIs, exactly? Whose language is it? And does it include the translation of language into language? Because that is surely a diminishment. Art that is made of language—poems, rap lyrics, novels, plays, films, stand-up comedy routines, your four-year-old daughter discovering a connection between two words, and coining a pun… These things do not need “translating.” They are already in their final form. The translation’s already been done, by the human, or humans, who looked at the world—who looked at the word—and turned it into art.

These AIs, these gigantic digital mashup factories, these things that hoover up everything we feed them and spit it back in endless, mindless, sometimes delightful and/or hilarious and/or serendipitous and/or banal and/or interesting ways… If they do evolve to the point where they can "do what real writers do," then I believe human invention and ingenuity will push back again, that we’ll evolve new ways of expressing ourselves, and that there’ll always be space for organic, human artistic expression, discovery, and innovation.

And that whatever else is happening at the macro level, where the big wheels turn and the governments and the multinationals and the giant technological think tanks are pushing to quantify and define and remake our world in their image, there will always be some mad bastard with the burning need to creep out in the middle of the night and abseil down the side of a 15-story building, spray-paint can in hand, to give you their side of the story. 
