A new study asked 1,058 youth and young adults about their use of large language models (LLMs such as ChatGPT) for mental health advice. In total, 13.1% had asked AI for mental health advice, a figure that rose to 22.2% among young adults (18-21 years old).
Almost all of those who had sought such advice (92.7%) felt that the chatbot's response was helpful, although that number was lower for Black participants, suggesting that LLMs may be reinforcing racial bias.
“High use rates likely reflect the low cost, immediacy, and perceived privacy of AI-based advice, particularly for youths unlikely to receive traditional counseling,” the researchers write.
They add that “engagement with generative AI raises concerns, especially for users with intensive clinical needs, given difficulties in establishing and using standardized benchmarks for evaluating AI-generated mental health advice and limited transparency about the datasets training these models. Furthermore, Black respondents reported lower perceived helpfulness, signaling potential cultural competency gaps.”
The study was conducted by researchers at RAND, Harvard, Brown, Mass General Brigham, and Boston Children’s Hospital, led by Ryan K. McBain and supervised by Jonathan Cantor. It was published as a research letter in JAMA Network Open.

The researchers contacted 2,125 youth and young adults (aged 12-21) online through RAND’s American Life Panel and Ipsos’ KnowledgePanel. About half (1,058) responded; 37% were aged 18-21, 50.3% were female, and 51.3% were white.
The survey asked whether the participants used AI for “advice or help” with “feeling sad, angry, or nervous”—everyday language to ensure it was understandable even by the youngest participants.
Overall, 13.1% of respondents said they had asked AI for mental health advice; among young adults aged 18-21, the figure was 22.2%.
Of those who used AI for this purpose, 65.5% did so at least monthly, and 92.7% said the advice was at least somewhat helpful.
The researchers state that their results should be interpreted with caution: there were no data on respondents’ actual psychiatric needs or on whether they were receiving other forms of mental health care, survey response bias may have influenced the results, and the number of respondents aged 18-21 was small (147 participants).
The researchers speculate that youth may believe AI is more private than discussing these concerns with adults such as family members, teachers, or therapists. Yet AI companies record these conversations, train their future models on them, and, in some cases, report them to the police.
Meanwhile, under the Trump administration, the AI industry remains almost entirely unregulated.
As for mental health bots specifically, AI “therapists” are being marketed to users even as they encourage delusional thinking and help users plan suicide. They perpetuate stereotypes and entrench inequalities, discriminating against marginalized people and even engaging in hate speech against those with psychiatric diagnoses.
Outside the medical field, an exposé revealed internal Meta AI policies that explicitly allowed chatbots to have sexual conversations with minors, promote misinformation, and more.
Character.AI, which has close ties to Google, has also been implicated in teen deaths, including a recent suicide spurred by one of its hypersexualized bots; the platform has also been reported to instruct teens in self-harm.
****
McBain, R. K., Bozick, R., Diliberti, M., Zhang, L. A., Zhang, F., Burnett, A., . . . Yu, H. (2025). Use of generative AI for mental health advice among US adolescents and young adults. JAMA Network Open, 8(11), e2542281. (Full text)