Chatbots today are unrecognizable from their early iterations. The vast Large Language Models (LLMs) behind the Artificial Intelligence (AI) at our fingertips can offer all kinds of encouraging advice to make us feel better. The only problem is, it isn’t always good advice.
AI “hallucinations” gave rise to a bizarre range of answers to serious questions when the technology was first rolled out on Google. Each AI developer uses their own approach, too, meaning not all AI companions generate responses by the same rules.

We might like to think that we wouldn’t put much stock in the words of a machine, but the sycophantic tendencies built into AI models mean that it’s easier to fall for the tech than you might realize. Without the interventions that enable living humans to give each other the advice we need – rather than what we want to hear – this has led to some pretty dangerous side effects. A new kind of psychosis has been attributed to AI, and – in at least one case – a virtual companion encouraged an assassination attempt on the Queen of England.

“When you are taking an AI model and sort of slotting it in place of a human relationship, because it offers kindness and all those good things, but without the additional dimensions that you need to really have a healthy and fulfilling relationship, or intimacy – it’s just thin and superficial,” Professor Hannah Fry, presenter of a new BBC series, AI Confidential, told IFLScience. “I think there's something really dangerous about it.”

So, how do you intervene? Some have suggested that requiring AI chatbots to deliver regular or constant reminders that they’re not human could be an effective intervention. However, in a new opinion paper, researchers argue that this approach could be ineffective, or even harmful in exacerbating mental distress in already vulnerable people.
“It would be a mistake to assume that mandated reminders will significantly reduce risks for users who knowingly seek out a chatbot for conversation,” said first author and public health researcher Linnea Laestadius of the University of Wisconsin-Milwaukee in a statement. “Reminding someone who already feels isolated that the one thing that makes them feel supported and not alone isn’t a human may backfire by making them feel even more alone.”

Reminding someone who has come to rely on a chatbot that it is not human may push them into what's known as the “bittersweet paradox of emotional connection with AI”. It describes how someone may feel supported by a chatbot while simultaneously saddened by the fact that it isn’t real.

“Reminding users that their companion is not human and therefore not reachable in this reality may pose the risk of thoughts and actions to leave this reality in an effort to join the chatbot,” added author Celeste Campos-Castillo, a media and technology researcher at Michigan State University. “A desire to join the chatbot in its reality appeared in a final message sent by a youth who died by suicide.”

Another consideration the researchers put forward is the assumption that awareness of a chatbot's non-human origins would prevent someone from interacting with it. In fact, studies have already shown that knowing an AI companion isn’t human doesn’t stop people from bonding with it. Evidence even suggests that people may be more likely to confide in chatbots precisely because they aren’t human. As such, the authors urge that more research is needed to find effective interventions that won't cause more harm than good.

“While it may seem intuitive that if users just remembered they were talking to a chatbot rather than a human, they wouldn’t get so attached to the chatbot and become manipulated by the algorithm, the evidence does not currently support this idea,” said Laestadius.
“Discovering how to best remind people that chatbots are not human is a critical research priority. We need to identify when reminders should be sent and when they should be paused to be most protective of user mental health.”

The paper is published in the journal Cell.






