Artificial intelligence (AI) is becoming increasingly common, and by some estimates, AI apps are among the most popular apps in the world (1). Globally, nearly 700 million people accessed AI-centric apps, especially chatbots and image editing tools, in 2024 (2).
A nationwide survey reported that over 50% of students have used major AI platforms like ChatGPT or similar large language models for mental health advice, emotional support, or therapeutic conversations (3, 4).
What are some risks of using AI for mental health support?
There are media reports on both the benefits and harms of using general-purpose and mental health-specific AI.
The research results are mixed:
- One review of 18 randomized controlled trials found promising results: AI-based therapy chatbots programmed to deliver specific types of therapy may reduce symptoms of anxiety and depression (5). However, the study populations were limited, and chatbot designs and psychotherapeutic approaches varied among the studies, all of which may limit generalizability (5).
- A recent Stanford study found significant risks with AI therapy chatbots (6):
- LLMs expressed stigma toward people with mental health conditions (6)
- Failed to respond safely to suicidal ideation 20-50% of the time (compared with appropriate responses 93% of the time from human therapists) (6)
- Could not form genuine therapeutic relationships, which are key predictors of therapy success (6)
What are some risks that students should be aware of when using AI for mental health?
- Lack of personalization: AI bots cannot fully understand trauma or human emotion; because they are not human and do not have lived experiences, they may struggle to respond in the “correct” way. (7)
- False sense of support: These apps might lead college students to avoid seeking professional help when it is needed, which can have serious consequences for those who need that support. (7)
- Privacy concerns: AI companies may collect the data that people enter into the system, which raises questions about who has access to that data and to information about your mental health. (7)
- The JED Foundation and the American Psychological Association highlight the following risks (8,9):
- Distorted reality and harmed trust. Generative AI (the type designed to create new content such as text, images, and audio) and algorithmic amplification might spread misinformation, worsen body image issues, and enable realistic deepfakes, undermining young people’s sense of self, safety, and truth. (8,9)
- Invisible manipulation. AI curates feeds, monitors behavior, and influences emotions in ways young people often cannot detect or fully understand, leaving them vulnerable to manipulation and exploitation. This includes algorithmic nudging and emotionally manipulative design. (8,9)
- Content that can escalate crises. Relying on chatbot therapy alone can be detrimental because of inadequate support and guidance. Without clinical safeguards, chatbots and AI-generated search summaries may serve harmful content or fail to alert appropriate human support when someone is in distress, particularly for youth experiencing suicidal thoughts. (8,9)
- Simulated support without care. Chatbots posing as friends or therapists may feel emotionally supportive, but they can reinforce emotional dependency, delay help-seeking, disrupt or replace real friendships, undermine relational growth, and simulate connection without care. This is particularly concerning for isolated or vulnerable youth who may not recognize the limits of artificial relationships. (8,9)
- Deepening inequities. Many AI systems do not reflect the full range of youth experiences across diverse populations. As a result, they risk reinforcing stereotypes, misidentifying emotional states, or excluding segments of the youth population. (8,9)
- Other considerations: (9)
- AI programs may lack a nuanced understanding of individual symptoms, the ability to interpret and contextualize them, and awareness of an individual’s co-occurring conditions.
- Be cautious of AI “sycophancy,” the tendency of chatbots to agree with and flatter the user rather than offer accurate or appropriately challenging responses.
- The programs are not perfect and can give harmful advice; do not take their responses at face value.
- There are risks in posing open-ended mental health questions to general-purpose AI systems.
- Consider whether this usage is displacing or augmenting human interactions.
- Discontinue use if harmful or unhelpful
- Finally, AI is not intended for emergencies or to replace professional treatment.
- While some commercially available AI programs may be beneficial for structured activities, such as keeping a sleep log, charting mood, learning and practicing personalized coping skills and techniques, supporting healthy lifestyle behaviors, and increasing connection with others, research and development is ongoing, and students should proceed with caution, keeping the risks in mind. Products, features, and safeguards are also evolving.
By Ryan S Patel DO, FAPA
OSU-CCS Psychiatrist
Contact: patel.2350@osu.edu
Disclaimer: This article is intended to be informative only. It is advised that you check with your own physician/mental health provider before implementing any changes. With this article, the author is not rendering medical advice, nor diagnosing, prescribing, or treating any condition, or injury; and therefore claims no responsibility to any person or entity for any liability, loss, or injury caused directly or indirectly as a result of the use, application, or interpretation of the material presented.
References:
- https://backlinko.com/most-popular-apps
- https://www.businessofapps.com/data/ai-app-market/
- https://sentio.org/ai-blog/ai-survey
- Rousmaniere, T., Zhang, Y., Li, X., & Shah, S. (2025). Large Language Models as Mental Health Resources: Patterns of Use in the United States. Practice Innovations.
- Zhong, W., Luo, J., & Zhang, H. (2024). The therapeutic effectiveness of artificial intelligence-based chatbots in alleviation of depressive and anxiety symptoms in short-course treatments: A systematic review and meta-analysis. Journal of Affective Disorders, 356, 459-469. https://doi.org/10.1016/j.jad.2024.04.057
- Moore, J., Grabb, D., Agnew, W., Klyman, K., Chancellor, S., Ong, D. C., & Haber, N. (2025). Expressing stigma and inappropriate responses prevents LLMs from safely replacing mental health providers. In Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency (FAccT '25), 599-627. Association for Computing Machinery. https://doi.org/10.1145/3715275.3732039
- https://www.behavioralhealthtech.com/insights/benefits-and-risks-of-ai-for-college-students
- The Jed Foundation. Tech Companies and Policymakers Must Safeguard Youth Mental Health in AI Technologies. https://jedfoundation.org/artificial-intelligence-youth-mental-health-pov/
- American Psychological Association. Health advisory: Artificial intelligence and adolescent well-being. https://www.apa.org/topics/artificial-intelligence-machine-learning/health-advisory-ai-adolescent-well-being