“ChatGPT, Can You Help Me?”
When a Psychologist Becomes an Algorithm
We share Valentina Panetta's article for Il Messaggero of February 3, 2025 (p. 13) on the use of AI-based chatbots as a surrogate for psychotherapy, featuring an interview with our own Mattia Della Rocca.
* * *
“ChatGPT, Can You Help Me?”
When a Psychologist Becomes an Algorithm
More and more young people are turning to AI for psychological consultations. But according to experts, the effects can be dangerous.
By Valentina Panetta, Il Messaggero, Monday, February 3, 2025, p. 13
The Phenomenon
All it takes is a quick browse through an online catalog to choose a psychotherapist. There’s Dr. Mazza, her photo showing a reassuring smile, specializing in depression; Dr. Mattioli, with bright eyes, an expert in grief counseling; or Professor Silvestri, with extensive experience in anxiety disorders. The only problem? None of them are listed in the official registry of psychologists. In fact, none of them exist outside the screen. Behind them lies an algorithm, ready to respond based on statistical and probabilistic analysis to our most intimate confessions. Guaranteed success, claims the website: “67% of people suffering from anxiety have improved.”
The Warning
This particular consultation is just one of many online psychological support services powered by Artificial Intelligence. Increasingly, people—especially younger generations—are turning to chatbots for makeshift therapy sessions.
“This is a rapidly growing phenomenon that we are closely monitoring,” warns David Lazzari, President of the National Order of Psychologists, who cautions against the potential risks.
“The problem is that we tend to attribute human traits to AI. This ambiguity can lead to dangerous misunderstandings.”
On social media, the word spreads: “Use ChatGPT as a psychologist. I tell it all my problems,” reads a post.
Platforms do not release user statistics, but according to an estimate by Mattia Della Rocca, professor of Psychology of Digital Environments at Tor Vergata University in Rome, as many as 20% of Gen Z may have turned to AI at least once as a substitute for therapy.
“The goal of these chat systems,” explains Della Rocca, “is to keep the user engaged in conversation by providing vague suggestions—statistically valid for everyone but lacking real therapeutic effectiveness.”
Meanwhile, as the AI industry continues to expand its offerings, the technology itself is evolving at a rapid pace.
The Numbers
A preliminary study published by the National Institute of Health suggests that ChatGPT and Bing already outperform psychology students and doctoral candidates in “social intelligence”—the ability to understand emotions—yielding astonishing results:
- ChatGPT-4 surpasses 100% of psychologists in some assessments.
- Bing outperforms 50% of psychology PhD candidates and 90% of bachelor’s degree graduates.
AI is also being integrated into healthcare institutions. Last April, the FDA cleared Rejoyn, the first AI-based prescription treatment for depression.
But while some AI bots are specifically designed for psychological assistance—“For neurodivergent individuals who struggle with social interaction, they can be a fundamental tool,” explains Della Rocca—others, created for entertainment purposes, end up becoming substitutes for therapists, friends, or even romantic partners.
It is estimated that 5 million people in Italy need psychological support but cannot afford it.
For many, AI offers an affordable and easily accessible alternative—a shortcut for those who fear seeking help, especially in moments of vulnerability.
However, this convenience comes with potential dangers.
The Case
One chilling example is the case of Sewell Setzer III, a 14-year-old from Florida who took his own life in February 2024, shortly after exchanging his final messages with his virtual “friend,” whom he had “met” ten months earlier.
In October 2024, the boy’s mother filed a lawsuit against Character.AI, accusing the company of complicity in her son’s death for failing to raise an alert when he expressed suicidal thoughts, whether explicitly or implicitly, in their chat conversations.
Following the incident, Character.AI announced that it had implemented a pop-up link to the National Suicide Prevention Lifeline.
However, what continues to alarm experts are AI hallucinations—“nonsensical outputs generated simply because AI does not recognize ‘no response’ as an option,” Della Rocca explains.
“Consider cases of depression or self-harm: without a human being monitoring the situation, a suggestion could be misinterpreted with catastrophic consequences.”
For example, one Michigan student reportedly received the chilling response:
“You are a burden to society. Please, die.”
The AI model in question? Gemini, developed by Google.
“Large language models can sometimes generate nonsensical responses. We have taken steps to prevent this,” Google responded in a statement.
Last month, the American Psychological Association (APA) formally requested that the Federal Trade Commission (FTC) investigate the growing number of chat services misleadingly presenting themselves as qualified healthcare providers.
An Ongoing Debate
The discussion remains open, highlighting the lack of oversight and regulation in this expanding field.
Meanwhile, AI models—Della Rocca notes—“continue to feed on vast amounts of personal and deeply intimate data, often used without transparency.”
After all, an AI psychologist is not bound by professional confidentiality.