As AI use grows, experts warn of risks to mental health and relationships
With AI becoming more embedded in daily life, building critical thinking skills and maintaining human connections may be key to ensuring it remains a helpful tool, rather than a harmful substitute, experts say on CNA’s Deep Dive podcast.
SINGAPORE: From helping with homework to offering emotional support, artificial intelligence is becoming more than just a tool.
For some, friendly chatbots are now a listening ear, a sounding board and even a trusted companion.
But as reliance on AI deepens, experts are raising concerns about where the line should be drawn.
INCREASED USE OF AI
Associate Professor Swapna Verma, chairman of the medical board at the Institute of Mental Health, has observed a growing trend in her clinical practice: Many young patients now arrive at therapy sessions having already consulted AI chatbots.
“I had a patient who asked me about a specific kind of therapy. She said this was ChatGPT’s advice,” Assoc Prof Swapna recounted on CNA’s Deep Dive podcast.
“And (the advice) was bang on, it was helpful.”
She said the appeal is clear. Chatbots offer round-the-clock access, allowing users to seek answers and express concerns instantly.
“I see (my patients) once every two or three months, whereas ChatGPT is available to them 24/7. It's the immediacy, the accessibility, that make it so enticing,” she noted.
However, she cautioned that AI could pose risks, particularly for vulnerable individuals when it comes to discussions about mental health.
“If a person has some specific vulnerabilities, they may not ask the right questions … and they end up getting wrong advice,” said Assoc Prof Swapna.
She highlighted a key limitation: AI does not always connect intent across separate queries.
For instance, when users indicate intentions of self-harm, AI chatbots typically respond with helpline numbers for professional support or suggestions to de-escalate the situation, such as reaching out to family and friends.
“But at the same time, if you ask what the tallest building in Singapore is, (chatbots) will (answer with) the tallest building. They won't connect that these two could be related,” Assoc Prof Swapna said.
AI SYCOPHANCY & RISKS FOR TEENS
Experts say the risks are especially pronounced for young users, whose cognitive abilities are still developing.
Those aged 12 to 18 are particularly vulnerable, as this is a critical period of brain development, said Assoc Prof Swapna.
“That’s the time when your brain is growing at speed and the way it forms connections is by learning, by perceiving the environment, by making connections. And you’re kind of disrupting that (by dependence on AI),” she said.
Associate Professor Jennifer Ang from the Singapore University of Social Sciences, who was on the same podcast, pointed to troubling reports overseas involving teenagers influenced by AI.
“The AI companion is supposed to be very affirming. So, if you have self-destructive thoughts … the AI companion (could) reaffirm some of these thoughts as valid,” she said.
While such cases have not surfaced widely in Singapore, she cautioned against complacency.
A central concern is how AI interacts with users emotionally. Unlike human relationships, AI companions are often designed to affirm rather than challenge.
“It's agreeable to everything. It affirms in a very empathic way. (Such) platforms are tailored to draw you in more and more,” said Assoc Prof Swapna, referring to AI sycophancy, a term used to describe AI models’ tendency to excessively validate users.
This contrasts with established approaches like cognitive behavioural therapy, which emphasise questioning negative thought patterns.
Assoc Prof Ang said: “If you're affirming everything, then it's not therapy. It's just offering very superficial support at most. This (AI) friend that is always on your side may not exactly be (offering) the best advice.”
IMPACT ON THINKING AND RELATIONSHIPS
Beyond mental health, experts are also concerned about the impact on independent thinking.
Assoc Prof Ang noted that students are increasingly relying on AI to complete tasks without fully understanding the material, describing this as “cognitive offloading”, where users outsource thinking to AI.
“They know very little about what actually (goes into) writing their essays, what they can remember (from) particular sources, how they evaluate and judge,” she said.
She added that over-dependence on AI may leave students struggling to assess the reliability of information.
“Children ought to (learn) the skills to know where to look for reliable information … make (their) own evaluation and judgment about the (credibility of) sources,” she said.
There are also concerns about how AI may shape expectations of human relationships.
People who rely on AI for companionship may become less willing to invest time and effort in real-world connections, experts said.
“Rather than the traditional idea of what friendship looks like, (they could) start thinking … it’s maybe better talking to an AI friend that doesn't disagree with me, goes everywhere with me, and is readily available, unlike a human friend,” said Assoc Prof Ang.
This could reduce patience and willingness to nurture relationships with others, she added.
IMPACT ON SOCIETY
At a broader level, both experts highlighted the need for greater awareness, education and AI literacy.
“AI is not just mere technology – it's not a vacuum cleaner. We are not in full control of AI at times, and we have already allowed AI to control a lot of our lives. That recognition that we’re not … always in control … is a humbling fact that we need to learn,” said Assoc Prof Ang.
They also raised concerns about data privacy and how personal information shared with AI could be misused.
But rather than outright rejecting AI, both experts emphasised responsible use.
Assoc Prof Ang suggested encouraging children to seek information from multiple sources, including textbooks, teachers and peers.
Assoc Prof Swapna urged parents to watch for signs of over-reliance, such as children isolating themselves from real-world interactions and turning instead to devices.
“Have conversations with your child instead of ignoring those signs. Social connectedness is very important,” she said.
“These are skills that children need to learn for later in life: how to connect with people and have conversations. If that becomes derailed, then that is a concern.”
Back in her consultation room, Assoc Prof Swapna also uses AI discussions as a teaching moment with her patients.
“I use the opportunity to educate them about AI, some of the perils, what to look out for, and how to use AI more ethically,” she said.
“AI is based on available data. So, (there may be) a lot of Western slant, because the published data is predominantly from the West, so they may not get the cultural nuances, etc.”
Ultimately, the experts agree that AI is here to stay, but its impact depends on how it is used.
“Society has a role to play. It’s important for people to understand … the possible harms that AI can cause,” said Assoc Prof Ang, adding that relying on AI for help instead of other people can leave individuals more self-sufficient but lonelier.
“We are social creatures. It's okay to depend on others sometimes, to ask for help, rather than to be completely self-sufficient – that's when loneliness and isolation come about.”