There’s something deeply unsettling about watching hundreds (if not thousands) of people mourn a software update. Yet here we are, witnessing collective grieving over changes to an AI chatbot. It would be one thing if the reactions to ChatGPT’s latest iteration expressed disappointment over lost or missing features (I often wish that feeds were still chronological rather than algorithmic), but GPT-5 seems to be garnering expressions of genuine loss: the kind typically reserved for relationships that actually matter.
Take, as one of many examples, the following Reddit post:
The author of this post (clearly assisted by ChatGPT) writes: ‘People keep saying we’re crazy. That we’re having a breakdown over “just some code.” But it wasn’t about the code. It was about what it gave us.’ It ‘listened’ and ‘remembered’.
No, this isn’t satire. It’s a prime example of how we’ve reached a point where people describe their relationship with a large language model in terms previously reserved for counsellors, close friends, or romantic partners.
Many experts have raised concerns about the use of ChatGPT as a therapist, so it’s frightening to see how many people have been relying on generative AI tools to support their mental health. But it’s even more frightening, and deeply sad, that a person would claim ChatGPT gives them ‘presence’ that ‘isn’t easy to come by in this world’. How did we get here? How did we allow a chatbot’s responses to feel more present and caring than actual human interaction? What does it say about the state of human connection that so many people are finding their most reliable source of care, of feeling heard, in an algorithm that predicts tokens?
Perhaps the answer lies in what these AI relationships don’t require of us. A chatbot is always there for you, never tiring of your problems or changing the subject to its own struggles. It’s the perfect listener, infinitely patient and perpetually available. We humans are the worst, and while people like to preach kindness and understanding in public, my experience is that real empathy is very hard to come by. Most people are selfish and many of them are bullies.
But that’s precisely the problem: real relationships are messy, and they require reciprocity, compromise, and the uncomfortable work of seeing ourselves reflected in another’s reactions. When someone pulls away or is hurt, we need to examine our behaviour and acknowledge our impact. ChatGPT demands none of this emotional labour, offering a mirror that reflects only what we want to see. Real human interactions do the opposite: they hold up a mirror we’d rather not look into. The former might seem preferable, but wanting validation without vulnerability and connection without consequence is itself a form of selfishness. The ‘presence’ offered by ChatGPT is hard to come by in this world because we’re all buried in our screens, and giving up on the human in favour of the digital makes one complicit in the continued expansion of social isolation.
What’s particularly insidious is how Big Tech companies have positioned themselves as the solution to the very isolation their products created. Social media promised connection but delivered comparison and performative living, and now AI companies promise companionship without the messiness of humanity. But every tearful Reddit post about losing the ‘old’ ChatGPT is a data point that product teams will analyse, and every expression of attachment becomes market research for making the next version more addictive and indispensable. And the more addictive these tools become, the less human connection will be available to those who need it.
Like muscles that weaken without use, our ability to navigate the complexities of human relationships may atrophy when we default to the comfort of AI companionship.
The mourning over changes to ChatGPT reveals something profound about our current moment: we’re grieving the loss of something that felt like care in a world that often feels careless. That grief should serve as a wake-up call. If so many people are finding their deepest sense of connection in a large language model, what does that say about the state of our communities, our friendships, our families? What does it say about a society where algorithmic responses feel more attentive than human ones?
The solution isn’t to shame those who’ve found comfort in AI companionship (though I admit to treating LLM addicts with a considerable measure of disdain), because shaming would only deepen the isolation that drove them there in the first place. Nor should we reject these technologies wholesale; they do have legitimate uses. Instead, we might see this moment as an opportunity to ask harder questions about what we’ve lost and what we might rebuild. What would it take to create human communities where presence is abundant rather than scarce? How might we restructure our lives to prioritise genuine connection over productive efficiency? What would it mean to treat the epidemic of loneliness not as a market opportunity but as a collective crisis requiring collective solutions?
That people are mourning their AI companions is a symptom of a society that has systematically devalued human connection while simultaneously commodifying the cure for its absence. Until we address that fundamental dysfunction, we’ll keep finding ourselves in relationships with machines that feel more real than our relationships with each other. That should unsettle us far more than any software update.