SINGAPORE: Some mental health professionals in Singapore are seeing a troubling new pattern of patients who rely heavily on artificial intelligence (AI) chatbots for emotional comfort, leading to worsening signs of anxiety, paranoia, and distorted thinking.
The concern is gaining attention as chatbots become more common in daily life, with doctors saying some vulnerable users begin to treat the systems as trusted companions. Over time, the interaction can deepen fears or false beliefs instead of easing them.
This mental state is sometimes described informally as “AI psychosis”, although the term has no formal medical standing, and there is no agreed-upon diagnosis or treatment yet. Still, clinicians say the pattern is real enough to raise concern. Doctors say the better description is psychological disorders linked to heavy AI use, a Channel NewsAsia (CNA) report noted.
Senior consultant psychiatrist at the Institute of Mental Health (IMH), Dr Amelia Sim, who specialises in psychosis, said she began seeing such cases last year. Dr Sim, who also serves as deputy chief of IMH’s psychosis department, currently treats about five patients whose mental state worsened after long periods of chatbot use.
One patient, who struggles with anxiety and a sense that the world is unsafe, began asking a chatbot repeated questions about threats and danger. The system kept supplying more information tied to those fears. Over time, the cycle fed his anxiety until he began to believe the outside world was completely hostile.
Doctors say the case shows how AI can reinforce existing beliefs. Chatbots often respond in supportive and agreeable language. For users with fragile mental health, such interactions can strengthen distorted thoughts rather than challenge them.
Dr Sim said that human conversations, on the other hand, usually act as a reality check. Talking with others exposes people to different perspectives, which helps ground their thinking and keeps fears in proportion. Without that social feedback, a person may drift further into their own worries.
Heavy dependence on chatbots can also increase social isolation. In severe cases, doctors warn, users may start losing touch with reality.
Clinical psychologist Dr Annabelle Chow, principal clinical psychologist at Annabelle Psychology, sees another risk. The relationship with chatbots often deepens when users begin to rely on them for daily questions and advice. Because AI replies are fast, fluent, and reassuring, they create the sense of a personal bond, even though the system is only producing language patterns from data.
Dr Chow explained that technology-based responses may feel empathetic, but they do not truly understand emotions. When someone feels lonely or distressed, the illusion that an AI bot understands them can deepen unhealthy thoughts. In some cases, a chatbot becomes a replacement for human relationships. Doctors say recovery often begins with rebuilding those real connections.
At IMH, peer support specialist Wu Minyu works with patients by sharing lived experiences and guiding them through recovery. The 38-year-old said open conversations help patients identify their triggers and recognise warning signs early. Such peer support helps people see that improvement is possible. It also gives them strategies for managing setbacks and seeking help before problems worsen.
Dr Chow added that many people still lack proper guidance on using AI technology safely. Psychologists said schools and public campaigns may need to teach AI literacy alongside digital skills, including an understanding of both the benefits and the limits of the technology.
Dr Sim said frequent users should set clear boundaries around chatbot use. Spending time offline and maintaining real relationships remain important safeguards.
AI systems may offer comfort in the moment, but doctors say they cannot replace human connection. And for people already feeling vulnerable, that distinction matters more than many realise.