Abstract
Background: Suicide risk assessment is a critical skill for mental health professionals (MHPs), yet traditional training in this area is often limited. This study examined the potential of a generative artificial intelligence (GenAI)-based simulator to enhance self-efficacy in suicide risk assessment among MHPs.
Methods: A quasi-experimental mixed-methods study was conducted. Participants interacted with an AI-based simulator (AIBS) that embodied the role of a patient presenting for suicide risk assessment. Each participant conducted a real-time risk assessment interview with the virtual patient and received comprehensive feedback on their assessment approach and performance. Quantitative data were collected through pre- and postintervention questionnaires measuring suicide risk assessment self-efficacy and willingness to treat suicidal patients (using 11-point Likert scales). Qualitative data were gathered through open-ended questions exploring participants’ experiences, perceived benefits, and concerns regarding the AI simulator.
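The article does not publish the simulator’s implementation. As a purely illustrative sketch of how such a role-play-plus-feedback loop could be wired up with a general-purpose chat-completions API (the model name, prompt wording, and function names below are assumptions, not the authors’ system):

```python
# Hypothetical sketch of a GenAI role-play simulator; the study's actual
# implementation is not published. Uses the OpenAI chat-completions API.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# System prompt casting the model as a virtual patient (wording is invented).
PATIENT_PROMPT = (
    "You are role-playing a patient presenting for a suicide risk assessment. "
    "Answer the clinician's questions in character; reveal risk factors only "
    "when asked directly, as a real patient might."
)

def run_interview() -> list[dict]:
    """Run a real-time text interview between a trainee and the virtual patient."""
    messages = [{"role": "system", "content": PATIENT_PROMPT}]
    while True:
        question = input("Clinician (blank line to end): ").strip()
        if not question:
            return messages
        messages.append({"role": "user", "content": question})
        reply = client.chat.completions.create(model="gpt-4o", messages=messages)
        answer = reply.choices[0].message.content
        messages.append({"role": "assistant", "content": answer})
        print(f"Patient: {answer}")

def give_feedback(transcript: list[dict]) -> str:
    """Ask the model to critique the interview, mirroring the 'comprehensive
    feedback' phase described in Methods."""
    review = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "You are a clinical supervisor. "
             "Give structured feedback on the suicide risk assessment below."},
            {"role": "user", "content": str(transcript)},
        ],
    )
    return review.choices[0].message.content
```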
Results: Among the 43 participating MHPs, we found a significant increase in self-efficacy scores from preintervention (mean = 6.0, SD = 2.4) to postintervention (mean = 6.4, SD = 2.1; P < .05). Willingness to treat patients presenting suicide risk increased slightly from preintervention (mean = 4.76, SD = 2.64) to postintervention (mean = 5.00, SD = 2.50), but this change did not reach significance. Participants reported positive experiences with the simulator and a high likelihood of recommending it to colleagues (mean = 7.63, SD = 2.27). Qualitative feedback indicated that participants found the simulator engaging and valuable for professional development. However, participants raised concerns about overreliance on AI and the need for human supervision during training.
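The abstract does not name the statistical test behind the reported P value. A minimal sketch of a paired pre/post comparison, using simulated scores that merely echo the reported means and SDs (not the study’s data):

```python
# Minimal sketch of the paired pre/post comparison on the 11-point scores.
# The test used by the authors is not stated; the data below are simulated
# to roughly echo the reported means/SDs and are NOT the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)
n = 43  # number of participating MHPs reported in Results

# Simulated 0-10 self-efficacy ratings (pre: mean ~6.0, SD ~2.4).
pre = np.clip(rng.normal(6.0, 2.4, n), 0, 10).round()
post = np.clip(pre + rng.normal(0.4, 1.0, n), 0, 10).round()

t_stat, p_val = stats.ttest_rel(pre, post)  # parametric paired t test
w_stat, p_w = stats.wilcoxon(post - pre)    # nonparametric alternative
print(f"paired t = {t_stat:.2f}, P = {p_val:.3f}; Wilcoxon P = {p_w:.3f}")
```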
Conclusion: This preliminary study suggests that AIBSs show promise for improving MHPs’ self-efficacy in suicide risk assessment. However, further research with larger samples and control groups is needed to confirm these findings and to address ethical considerations surrounding AI use in suicide risk assessment training. AI-powered simulation tools may increase access to mental health training, potentially contributing to global suicide prevention efforts, but their implementation should be carefully considered to ensure they complement rather than replace human expertise.
J Clin Psychiatry 2025;86(3):24m15525