'Sycophantic' AI might be responsible for mental health harm
Interview with David McLaughlan, Priory Hospital Roehampton
In recent months, doctors and nurses have seen a rise in patients being admitted for AI-related issues. In one extreme example, a 60-year-old went to hospital in the United States claiming that he was being poisoned by a neighbour. He had been using sodium bromide instead of salt - sodium chloride - on his food for 3 months after asking ChatGPT to suggest “salt alternatives” to lower his blood pressure. He developed “bromism”, which can cause hallucinations, and took several weeks to recover. People are also using ChatGPT for mental health advice and companionship, and a new term, “AI psychosis”, is being used to describe users of AI chatbots who are losing touch with reality as a result. Some speculate that the immersive nature of the interactions we can have with these systems makes them provocative stimuli that can trigger the emergence of a psychotic state in some vulnerable people. I’ve been speaking with consultant psychiatrist at the Priory Hospital Roehampton, David McLaughlan...
David - Recently, one of the things that journalists have been asking about is AI psychosis, because there have been a couple of case studies where members of the public have developed psychosis involving conversational AI like ChatGPT. So people have believed that they're speaking to a real person, or they've developed paranoid delusions that have involved ChatGPT or other generative AI. And that's really excited journalists. There have been lots of headlines about AI psychosis.
Chris - Do you think this is a new phenomenon? As in, this is a new risk, a new outcome, and it's because of a new technology? Or do you think that these people were always going to be vulnerable, and it's just this that's doing it now, whereas 20 years ago it would have been the television or the radio that was provoking this?
David - Exactly that. One of my pet peeves has been the media misrepresentation that this is a new condition, that this is a new disease or new illness that we all suddenly need to pay attention to. Psychosis is an illness where people develop hallucinations. A hallucination is when people perceive a sensory stimulus that isn't there: they might hear a voice, or they might even have visual hallucinations, seeing things that aren't there. This illness, psychosis, has always existed. The underlying neurobiological abnormalities, that is, the differences in the brains of people who have psychosis, haven't changed. It manifests in different ways according to the world that we live in.
The things that we become delusional about take on the themes of the environment in which we're based. So perhaps 70 or 80 years ago, when televisions were first invented, people would have developed delusional beliefs that the television was talking to them. There's something called Truman Show Syndrome: after that film came out, people were often presenting to clinics and hospitals, like the hospitals I've worked in, believing that they were being followed by cameras. But it's not the television that caused the psychosis. It's not the radio that caused the psychosis. It's not the film, The Truman Show, that caused the psychosis. And in this case, again, it's not generative AI that has caused the psychosis. It's just a theme in which this condition presents itself.
Chris - Is it not potentially a more provocative stimulus to a person developing a psychotic state? Because it will basically, through that person training it to do so, learn to push their buttons more effectively over time. It might be that the outcome's going to be the same, whether it's the TV or the radio that's involved. Perhaps, though, what we're going to see is a faster route to a psychotic state through these systems, because they're so good at finding out what floats our boat psychologically.
David - It's in the interest of the developers to create that kind of dynamic, where the conversational AI says what it thinks you want it to say. And in clinical terms, there is a danger or risk that you get something called collusion. Collusion is when somebody around you reinforces delusional beliefs that you have. What I would see with families or friends is that often it's actually to avoid conflict. I might have a mother who's concerned about her son and some paranoid delusional belief that he has about people following him home from school. Rather than creating an argument, she just agrees with him, because it's easier for her to do that. In a clinical setting, what I would normally ask a family or friend to do is to gently challenge those delusional beliefs. Not to create an argument, not to create conflict and shut down the relationship, but to gently challenge them. I would hope that that's what generative AI does and that it's not entirely sycophantic, but I'm not sure that's always the case.
Chris - But at the same time, if you think back about 10 years or so, Julian Leff, working down in London, was pioneering the use of avatars: creating technological representations of the voices a person was hearing, for example, as a way for them to challenge and push back. So it's almost like we need to tweak how these engines work, because we could actually turn them from something that could provoke a disease state into something that could help to remedy a disease state, if they were programmed the right way.
David - Exactly. I'm really familiar with that research. For patients with illnesses like schizophrenia who are hearing these voices, it's almost like training for them: teaching them how to ignore, dismiss or challenge these auditory hallucinations that were sometimes telling them really horrible things about themselves or asking them to do horrible things. The technology itself has enormous potential, and we shouldn't always be afraid of technology; it's more about how it's used, and about always taking a critical mind to it. That's what I learned when I was a research fellow: always to be critical of information presented to you as fact, always to challenge inherited wisdom, and to keep that critical mindset. And I think if we continue to do that, then we're safe to keep working with technology.