

Context: AI’s potential impacts on mental health seem underdiscussed. The few things I’ve read tend to focus on the same issues, like ‘AI psychosis’. I have no background in this area and have only done a shallow dive, so this seemed like a good fit for a draft amnesty post.

Why AI could be great for mental health

  • Round-the-clock affordable therapy. LLMs can provide therapy at any hour to people who otherwise couldn’t afford it or would feel embarrassed about seeing a therapist. The quality of this therapy is likely to improve as the models improve. Soon it probably won’t be limited to chat interfaces, but delivered by human-identical avatars with advanced capabilities in emotional analysis based on facial expressions, vocal features, etc. In the meantime, AI could also just do mundane admin jobs for therapists, freeing them up to focus more fully on their clients.
  • AI-boosted meditation and mindfulness. AI could make meditation far more effective and engaging, making it much more likely that people would actually stick with it. This could be achieved through adaptive guided sessions that respond in real time to heart rate, breathing patterns, and other biometric feedback and continually adjust the difficulty, duration, background music, etc.
  • AI-boosted psychedelic therapy retreats. Many people report major mental health benefits from mushroom, ayahuasca, and other psychedelic retreats. AI could help monitor such retreats (e.g., keeping an eye on people’s behaviours and biometrics) and adjust the environment to ensure a positive experience.
  • Accelerated drug discovery and personalized medication. AI is accelerating the rate of drug development and enabling the creation of tailored drugs that are targeted to specific individuals based on their demographics, medical background, and other traits. As this continues, we might soon have access to highly affordable, effective, and side-effect-free medications specifically designed for our unique mental health issues.
  • Early detection and prediction of mental health crises. AI systems analysing things like speech patterns, typing behaviour, social media activity, and health data from wearables could identify the early signs of mental health crises long before a person or their clinician would notice, allowing for more preventive treatments.
  • Exposure therapy and skill-building. AI simulations could provide safe virtual environments for people to gradually confront their phobias, like public speaking, and develop ways to deal with them.
  • Real-time lifestyle monitoring and tailored advice. Wearables combined with AI could track users’ mental health and how this relates to their lifestyle, coming up with tailored guidance for sleep habits, nutrition, and other lifestyle changes that would boost mental health.
  • AI companions for the isolated and elderly. For people who are chronically isolated (like elderly people living alone, people with disabilities that limit social contact, or people in remote areas), AI companions could supplement limited human interaction.
  • …lots of things that we couldn’t currently predict. If AI accelerates all kinds of scientific discovery, we could soon have a far better understanding of neurology and how to manipulate it directly, such as through brain-computer interfaces.

Why AI could be terrible for mental health

  • AI psychosis. There have been incidents of LLMs exacerbating users’ mental health problems, sometimes leading to murder or suicide. As AI continues to advance, such incidents of ‘AI psychosis’ might become more common (due to models becoming more persuasive and human-like in their interactions) or less common (due to companies working out how to flag warning cases early). 
  • Loss of purpose. It seems highly likely that AI will lead to major job losses and rising unemployment. More broadly, if AI can do most of what humans currently find meaningful (like creative work, caregiving, problem-solving, and teaching) the resulting loss of purpose could become a major driver of depression and existential crisis.
  • Chaos and uncertainty. Major AI-driven social disruption seems likely, and with it pervasive feelings of confusion, anxiety, and existential dread.
  • Addiction and dependence. People may become increasingly addicted to AI tools, leaving them feeling a serious lack of human connection while reducing their ability to successfully seek it out.
  • Deepfakes and the loss of trust in reality. AI-generated footage could erode people’s trust in what’s real, contributing to widespread paranoia and social fragmentation.
  • Increasingly probable global crises. Nuclear war and engineered pandemics are famously not great for mental health.
  • Exploitation of mental health data. Governments, employers, and insurers could require the tracking and reporting of mental health data using the kinds of tools outlined in the ‘Why AI could be great for mental health’ section above, opening the door to many new opportunities for discrimination or targeted manipulation. Data leaks could open up opportunities for blackmail.
  • New forms of harassment and bullying. AI tools make it trivially easy to create targeted harassment campaigns (e.g., deepfake revenge porn, impersonation, mass-generated hate messages, coordinated trolling).
  • Outsourcing emotional processing. If people routinely turn to AI to process their emotions rather than sitting with discomfort, talking to friends, or developing their own emotional regulation skills, we might see a spread of emotional learned helplessness.
  • Concentration of power over mental health norms. A handful of tech companies could end up defining what counts as “healthy” thinking and “normal” behaviour for billions of people. This seems risky.
  • …lots of things that we couldn’t currently predict. In an increasingly weird world, there will be increasingly weird things that could make us stressed or miserable.

Other considerations

  • Mental health inequalities. AI mental health tools require smartphones, internet access, digital literacy, and often English proficiency. The people who need mental health support most — those in poverty, in low-income countries, elderly people, refugees — are often the least likely to have access. Without deliberate effort, AI could dramatically improve mental health care for the privileged while leaving everyone else behind, or, even worse, divert funding away from traditional services.
  • Harmful vs helpful contentment. Greatly improving mental health could be risky. If dissatisfaction with the state of the world and sadness at others’ suffering are major drivers for us to help other humans and animals, perhaps eliminating those negative emotions could lead to complacency with the suffering of others. Or perhaps the opposite is true. Perhaps contentment will overall free people up to stop worrying about their own problems and take action to help others, as well as increasing the likelihood of collaboration between groups who are currently at loggerheads.

Further reading

The new ‘Effective Mental Health’ group includes an ‘AI Mental Health Initiative’ Working Group so if you’re interested, they’re probably useful people to follow. If you have any other suggestions for resources on this, I’d be interested to hear them. Thanks!
