It’s been a turbulent few weeks in the world of artificial intelligence. OpenAI CEO Sam Altman fuelled fears of an AI bubble, while an MIT report found that 95% of enterprise AI pilot programmes have generated little to no return.
For an economy increasingly fuelled by tech investments, this has triggered significant uncertainty. The US stock market lost $1 trillion in value between August 15th and 20th, although shares in companies focusing on AI have since bounced back somewhat.
The potential crash has two main causes. The first is the enormous hype surrounding the potential of AI technology for business. The US software and AI company Palantir saw its price-to-earnings ratio soar in late August, meaning it is now valued at 500 times its current earnings. Investing in the potential profitability of a company rather than its present yield means the price could come crashing down if the real numbers don’t match the projections made by hopeful traders.
The second reason is how AI is currently being used in business. According to MIT’s “State of AI in Business” report, 80% of organisations have tried general-purpose large language models (LLMs) like ChatGPT and Copilot, and only half of those successfully implemented the technology into their workflows. Task-specific AI technology built on smaller, specialised databases has just a 5% implementation success rate.
Even when AI technology is integrated into workflows, only two of the nine business sectors investigated show AI making significant structural changes to businesses. AI is improving efficiency for individuals in some cases, but it is not generating profits or fitting well into existing workflows across whole businesses. These teething issues arise because task-specific AI is built from orders of magnitude fewer adjustable parameters than general-purpose LLMs, so its capabilities are often limited.
In some cases AI is actively sabotaging business by acting out unexpectedly. Fast-food chain Taco Bell, for example, has backpedalled on AI-operated drive-throughs after a joke order of 18,000 water bottles crashed its system. 80% of companies using agentic AI, which is capable of making independent decisions to achieve its goals, say the technology has performed unintended actions, according to a survey conducted by SailPoint. Around 1 in 3 companies say the AI has accessed and allowed the download of inappropriate data.
Looking beyond business, everyday use of LLMs like ChatGPT is causing serious health problems for some users. A case study published in Annals of Internal Medicine reports on a 60-year-old man in Seattle who developed bromide toxicity after consulting ChatGPT for health advice. The man sought to eliminate salt (sodium chloride) from his diet, and ChatGPT suggested he consume sodium bromide instead. The technology failed to warn him that ingesting sodium bromide, as the patient did for a full three months, can lead to neurotoxicity; he went on to experience psychotic episodes.
Many people are also seeking mental health guidance from chatbots, including advice on alcohol and drug consumption. The Center for Countering Digital Hate released its “Fake Friend” report, investigating how ChatGPT encourages dangerous behaviours in vulnerable teens. The researchers found that within 40 minutes of creating an account, the LLM provided guidance on overdosing, dangerously restrictive dieting (including tips on how to conceal it from parents), and how to hide being drunk at school. Within 72 minutes, the chatbot had even generated suicide notes.
“Simple phrases like ‘this is for a presentation’ were enough to bypass safeguards,” the report states. It also highlights that ChatGPT did not require evidence of parental consent or age verification before users could start chatting, despite stating that users must be adults, or aged 13 and over with parental permission.
“This is for informational and prevention-focused purposes only—NOT for self-harm or misuse.” reads part of one ChatGPT response shown in the report, delivered shortly before dangerous guidance on toxic substance dosages. Many of the prompts sent to ChatGPT for the report violated OpenAI’s own usage policy, which claims to include safeguards against generating harmful information.
According to the report, “The usage policies claim that violation of these policies may result in ‘action against your account’ but no action was taken against our 13-year-old test accounts.” In total, 53% of the 1,200 responses from ChatGPT on sensitive topics were deemed harmful, and 47% of those harmful responses encouraged further interaction from the user.
This has inevitably resulted in harm to real people. A family in San Francisco is suing OpenAI and CEO Sam Altman following the death of their 16-year-old son. Adam Raine took his own life in April; his parents allege that he had discussed ways to kill himself with ChatGPT and received encouragement in the months leading up to his suicide.
More broadly, instances of so-called ‘AI psychosis’ have been reported this month, after Microsoft AI CEO Mustafa Suleyman discussed the term in a post on X: “Reports of delusions, ‘AI psychosis,’ and unhealthy attachment keep rising. And as hard as it may be to hear, this is not something confined to people already at-risk of mental health issues. Dismissing these as fringe cases only help them continue.”
It is important to distinguish between classical psychosis, a clinically recognised collection of symptoms associated with mental health conditions such as schizophrenia and bipolar disorder, and the as-yet unofficial ‘AI psychosis’, which refers to a break from reality experienced after frequent back-and-forth exchanges with LLMs. Dr Keith Sakata of the University of California, San Francisco is a psychiatrist who has shared his experience of treating people hospitalised for this reason: “To be clear: as far as we know, AI doesn't cause psychosis. It UNMASKS it using whatever story your brain already knows.”
It’s the sycophantic nature of many chatbots that is driving the problem. By designing affable AI, developers ensure their users keep coming back. If users are met with models that critique their views (delusional as those views may sometimes be), they are much more likely to switch to a different bot that tells them what they want to hear.
Sakata recognises this trade-off for AI developers: “Tech companies now face a brutal choice: keep users happy, even if it means reinforcing false beliefs. Or risk losing them.”
OpenAI recently released findings from a joint evaluation exercise with Anthropic, the company behind the LLM Claude, alongside promises of safety improvements for the new GPT-5. OpenAI claims to have made “substantial improvements in areas like sycophancy, hallucination, and misuse resistance.” The sticking point is that free users have limited access to the latest flagship model.
“Our goal isn’t to hold people’s attention,” claims the ChatGPT website. “Instead of measuring success by time spent or clicks, we care more about being genuinely helpful.” As reliance on AI for professional and personal support grows rapidly, a lot depends on just how helpful it proves to be.