Hello everyone! I’m new to this forum and would love to contribute to the group, particularly around the intersection of AI and mental health.
Every year, the Norwegian Refugee Council (NRC) publishes a list identifying the world’s most neglected displacement crises. Although the 2025 report has not yet been released, this series can still serve as a strong evidence base and an entry point for discussions around mental health needs among underserved and displaced populations. For reference, the 2024 report can be found here: https://www.nrc.no/feature/2025/the-worlds-most-neglected-displacement-crises-in-2024. I'd love to hear the group's thoughts on this.
In addition, I previously conducted a small research project through the AI & Equality platform focusing on AI and misinformation (paper: https://www.academia.edu/129284014/Re_visions_of_Now_and_Future_III), particularly examining how harmful AI responses may affect individuals with mental health support needs or suicidal ideation. If this is relevant and access to the texts is possible, we could analyze harmful ChatGPT responses and explore how they could be replaced with safer, more appropriate alternatives.
There are also well-documented limitations of AI in therapeutic contexts. For example, AI struggles with one of the most critical components of therapy: the use of silence. It also tends to reinforce confirmation bias and has difficulty producing nuanced, non-directive responses. Building on therapeutic literature around the use of confrontation in clinical settings, it may be worth exploring when and how confrontation can be used safely, and whether this concept can be meaningfully translated or taught to AI systems.
I'm looking forward to meeting you all soon,