Won't somebody please think of the AI?
Spare a moment to think of the poor, downtrodden artificial intelligence. According to a recently published report, AI systems could be ‘caused to suffer’ if consciousness is achieved.
Yeah, and they might go lame if they ever grew legs.
We're so far into the land of theoreticals here that we might as well be discussing what we'd do if the Atlantic Ocean turned into a giant milkshake. The headline signature on the open letter is that of Stephen Fry. Now, Sir Stephen may be a great writer, actor, comedian and all-round raconteur, but I don't believe he is an expert on artificial intelligence.
But let's step back a little. In November 2022, OpenAI released ChatGPT into an unsuspecting world and, to use a technical term, all manner of shit hit all manner of fans. ChatGPT appeared to be almost sentient. You could ask it questions and get coherent answers. You could ask it to summarize large blocks of text and it would oblige in seconds. Ask it to write a poem in the style of Wordsworth on the topic of the 1980s Ford Fiesta range and that's what you'd get.
The media collectively lost their minds and started predicting that general AI on a par with human intellect was just around the corner and would make every worker on Earth redundant faster than you could say "clear your desk".
But ChatGPT, in reality, has all the intelligence of an Excel spreadsheet and is as close to a conscious entity as Clippy the Microsoft paperclip was in 1997. To paraphrase Samuel Johnson, ChatGPT's display of apparent intelligence is "like a dog walking on his hind legs. It is not done well; but you are surprised to find it done at all."
Large language models (LLMs), such as ChatGPT, China's DeepSeek and LLaMa from Facebook owner Meta, are essentially just great big tables of numbers. They're programmed, at great expense, by feeding in unimaginably huge amounts of text and allowing an algorithm to work out the relationships between different words. Those big tables of numbers hold "weights" which reflect how likely each possible next word is, given the words that came before it. When your AI assistant seems to be answering your question, all it is really doing is spitting out what the model says is the most statistically likely answer, drawn from the vast trove of documents it has seen previously.
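If that all sounds a bit abstract, here's a toy sketch in Python of what "pick the statistically likely next word" amounts to. It is purely for illustration: a hand-rolled bigram table built from a dozen made-up words, nothing like a real LLM's billions of learned weights and long contexts, but the principle is the same.

from collections import Counter, defaultdict

# A tiny invented "training corpus". Real models ingest trillions of words.
corpus = "the cat sat on the mat the dog sat on the rug".split()

# Count how often each word follows each other word - our "big table of numbers".
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if the word is unseen."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("sat"))  # -> 'on', because 'on' always followed 'sat'
print(predict_next("the"))  # -> 'cat' (ties broken by first appearance)

Swap the dozen words for most of the written output of humanity, and the simple counts for billions of trained weights, and you have the gist of what your "almost sentient" chatbot is doing.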
The most recent iterations of LLMs have focused on something called "reasoning". These are designed to solve problems by taking a step-by-step approach: breaking the task down into smaller subtasks and reasoning about them in an apparently human way. But they are really still only regurgitating their training data. They're not going to come up with new approaches they've never seen before, because that's just not what they do. No amount of anthropomorphising bits of code is going to turn them into real, thinking organisms.
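To see why the "reasoning" is less magical than it sounds, here's the same toy idea taken one step further. This is a deliberately silly sketch with a hard-coded, invented lookup table standing in for the model's weights, not anything resembling a production system, but it shows that a chain of thought is just next-token prediction run in a loop.

# An invented transition table standing in for the model's learned weights.
transitions = {
    "Q:": "Let's", "Let's": "think", "think": "step", "step": "by",
    "by": "step.", "step.": "Answer:", "Answer:": "42",
}

def generate(start, max_tokens=10):
    """Emit the most likely next token, over and over, until the table runs dry."""
    tokens = [start]
    while len(tokens) < max_tokens and tokens[-1] in transitions:
        tokens.append(transitions[tokens[-1]])
    return " ".join(tokens)

print(generate("Q:"))  # -> Q: Let's think step by step. Answer: 42

The output looks like deliberation; the mechanism is the same table lookup, repeated.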
The fascination with artificial consciousness is evident in the history of science fiction. Back in 1968, Arthur C Clarke and Stanley Kubrick's 2001: A Space Odyssey introduced us to the malevolent self-aware computer HAL 9000. In 1983's WarGames, Matthew Broderick saved the world from a nuclear apocalypse triggered by a supercomputer that could only be talked off the ledge with a nice game of tic-tac-toe. Not to mention the likes of Wall-E, Ex Machina and I, Robot.
In the academic world, a quick Google Scholar search turned up articles discussing artificial consciousness from at least a quarter of a century ago, and a more comprehensive search would doubtless unearth far earlier ones.
So, let's go back to the recent report. The source for the news article is an open letter which itself refers to an academic article, Principles for Responsible AI Consciousness Research (Butlin and Lappas, 2025). The first thing that jumped out at me was that the lead author is a philosopher, not a computer scientist or neuroscientist. I'm not trying to make an ad hominem attack here, but it's apparent that we're going to be approaching the issue from a certain theoretical angle rather than a practical one.
Butlin and Lappas open with a sweeping statement that underpins the rest of their article, namely:
"Whether any AI system could be conscious is a matter of great uncertainty. However, if AI consciousness is possible, it may be near at hand."
The words "if" and "may be near" are doing a lot of heavy lifting. Just how likely is this? In my opinion, not very likely at all. Since the 1990s the great hopes of AI have been pinned on neural networks - software simulations of the neurons in mammalian brains and the electrical and chemical signals that flit between them. Remember those big tables of numerical weights I mentioned earlier? That's the distillation of the neural network, the artificial brain of ChatGPT.
Human beings have something like 80 billion neurons. The latest AI model to light up the world is DeepSeek, with up to 671 billion parameters. With all those neural connections you'd think the AI could outwit any human, but the truth is that generative AIs are being trained to do just one thing: associate words with other words. Yes, there are models that generate images and multimodal ones that can work in several different domains, but my point is that they can only do what we train them to do. Anything that comes out is a function of what has gone in. Humans are infinitely adaptable. We can learn new skills. We can create new concepts and inventions that have never existed before. We can even reshape our own brains to recover functions lost to brain damage.
The rate of progress with generative AI over the past few years has led to speculation that we can keep going at the same rate and eventually build true thinking machines. But all we've managed to do is throw huge amounts of computing resources at the problem. Even if we could keep increasing processing capacity (and keep up with the obscene amounts of electricity and water consumed by the training process), we've now fed basically everything ever written by humans into these artificial brains. We're no longer constrained by compute capacity but by data availability.
Let's leave the final word to the authority on the subject, ChatGPT itself:
