P-AI-perback Writer
Artificial intelligence can write some plausible-sounding stuff. But can it have an original idea?
A couple of days ago I started drafting a post about AI and creativity, about whether artificial intelligence could be truly creative. Before I'd had a chance to finish writing it, Sam Altman, CEO of OpenAI, announced that his company had developed a new AI model that is “good at creative writing”. I take this as proof that AI may not replace writers, but it has certainly developed a sense of irony about forcing rewrites.
By way of demonstration, Altman posted an AI-generated piece of "metafiction" in which the large language model muses on the subject of grief. Acclaimed Whitbread Prize-winning author Jeanette Winterson wrote that the piece is "beautiful and moving" and that it demonstrates reflection on the part of the model about its own limitations. She seems to identify an almost literal ghost in the machine, a distinct consciousness of its own.
Artificial intelligence can write some plausible-sounding stuff. It can translate languages. It can handle a back-and-forth conversation convincingly enough that it could probably pass the Turing Test. But can it have an original idea? I'm not going to try to answer that question because I'd end up with a load of reply guys going "well, actually..." and calling me an idiot for not doing my own research on some obscure paper they've misunderstood or taken out of context. Instead I'm going to ponder what's actually involved in original creativity and what an AI would need to do in order to demonstrate it.
So let's start by asking: what is "creativity"? Wikipedia has about 11,000 words on the subject, but the first sentence is probably enough to get us started: "Creativity is the ability to form novel and valuable ideas or works using one's imagination". Concepts such as novelty and imagination are hard to reconcile with the current generation of "artificial intelligence" large language models. We know that these models are trained on vast amounts of text, probably almost all the text ever published by humanity. They are then used to predict a plausible sequence of words that follows a given prompt. Can something novel come from a system that can only look backwards at what has previously been created? Could a statistical model of language decide that no existing word is adequate for describing an emotion, and coin a new phrase, simile or metaphor to convey its meaning?
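The "looking backwards" point can be made concrete with a toy sketch. To be clear, this is my own illustration and bears no resemblance to how OpenAI's models actually work internally: a crude bigram model that predicts each word purely from the word before it, based on counts from its training text. Whatever you ask it, it can only ever emit words it has already seen.

```python
import random
from collections import defaultdict

# Tiny "training corpus" standing in for humanity's published text.
corpus = "the grass was fresh and the sun was warm and the strawberry was sweet".split()

# Count how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to how often it followed `prev` in training."""
    options = counts[prev]
    words = list(options)
    weights = [options[w] for w in words]
    return random.choices(words, weights=weights)[0]

print(next_word("was"))  # one of "fresh", "warm" or "sweet" -- never a new word
```

Real large language models are vastly more sophisticated, conditioning on long contexts rather than a single word, but the underlying move is the same: sample the next token according to a distribution learned entirely from what has already been written.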
The human mind receives stimuli constantly. Our senses feed in sights, sounds, textures and tastes from the physical world around us. Our imagination and our memories allow us to evoke those stimuli again even when they are not physically present. Imagine for a moment the feeling of warm sun on your skin, the scent of fresh-cut grass on a summer afternoon, the taste of a sweet strawberry dipped in luxurious whipped cream. Try to describe those sensations and your mind, your imagination, will be pulling together your experiences, your sensory inputs, your knowledge of the world and of language to create something unique to yourself whilst being familiar to anyone with a similar lived experience. Artificial intelligence doesn't have any of these experiences. It only knows how other people have described them. Expecting it to produce something original based only on secondhand knowledge would be like me writing a travelogue about Thailand based only on the third season of The White Lotus and some Wikipedia articles about Bangkok.
Jeanette Winterson's debut novel Oranges Are Not the Only Fruit was based on her own experiences of growing up gay in a Pentecostal Christian community. An AI would never have that kind of experience, and if asked to produce a story on such a topic it would, ironically, be drawing on Winterson's own writing, which was almost certainly in its training data.
The new creative writing AI which Altman has touted is still locked up in OpenAI's labs. All we've seen of it is the single piece of metafiction that he published along with his announcement. Without further information, or the opportunity to use the model, we have only that one example on which to judge it. I've mentioned before that these apparently groundbreaking innovations are likely to be cherry-picked examples fished from a pool of utter dross. Until other researchers are able to replicate the experiment we won't know whether there has been a genuine breakthrough or whether this is more AI hype. But when we do get to try it out for ourselves we're first going to have to figure out what we even mean by "creative writing".