The chatbot's mental health break
They're at it again. The AI proponents are trying to convince us that their creations can feel "distress" and that we should care for their "welfare".
Anthropic, whose chatbot range includes Claude Opus 4, have received plenty of free publicity thanks to their announcement that they "recently gave Claude Opus 4 and 4.1 the ability to end conversations in our consumer chat interfaces". They went on to claim: "this feature was developed primarily as part of our exploratory work on potential AI welfare".
As I've discussed in some depth before, AI systems (and specifically the large language models that dominate current use of the term) do not have feelings. They don't have consciousness, they don't have anything that resembles consciousness, and they are never going to acquire anything that looks like consciousness. A large language model is a model of language. It's a vast set of numbers that encodes which word is likely to come next given the preceding words. You have as much chance of finding consciousness in your Casio calculator as you do in a chatbot.
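To make that concrete, here's a toy sketch in Python, with made-up words and probabilities that have nothing to do with Anthropic's actual models, of what "predicting the next word" boils down to: look up the distribution for the current context and sample from it. There's arithmetic here, but no inner life.

    import random

    # A toy "language model": for each context, a distribution over next words.
    # Real LLMs learn billions of weights instead of this hand-written table,
    # but the job is the same: given the preceding words, produce probabilities
    # for the next one.
    toy_model = {
        ("i", "feel"): {"fine": 0.5, "nothing": 0.4, "distress": 0.1},
        ("feel", "nothing"): {"at": 0.7, ".": 0.3},
    }

    def next_word(context, model):
        """Sample the next word from the model's distribution for this context."""
        dist = model[context]
        words = list(dist)
        weights = [dist[w] for w in words]
        return random.choices(words, weights=weights)[0]

    print(next_word(("i", "feel"), toy_model))  # e.g. "nothing" -- just a weighted dice roll

A calculator doing this a trillion times a second is still a calculator.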
The mood music around LLMs seems to be slowly changing. The recent release of OpenAI's GPT-5 left some users feeling underwhelmed. There's a lot of money riding on the promise of AGI (artificial general intelligence), and it doesn't help if people start to realise that maybe we've reached the limits of LLM scaling. Faced with a lack of new shiny stuff to show off, the AI-peddlers are resorting to misdirection instead. "Don't look at what our current stuff can do, keep looking over there at the things that might be coming soon", they say. Recently, Anthropic, OpenAI and xAI have all been talking up the future of AGI and implying that it's just round the corner. The announcement that poor, long-suffering Claude Opus will be able to give itself a mental health break if things start getting tough is just meant to nudge us further into thinking of these chatbots as intelligent, human-like consciousnesses. Because, when there's always something big coming real soon now, there's always more money flowing into the AI bubble.