
Content warning: extensive discussion of suicide.

Introduction

Worldwide, about 1 million people commit suicide every year, more than die of malaria. As far as I can tell from a cursory search, there haven't been many efforts by effective altruists directed at this issue[1], probably because, at least in first-world countries, it already receives a lot of attention, making it less likely that there are unknown promising interventions. However, I recently realized that there may be very inexpensive methods of automated online suicide prevention available.

The purpose of this post is to present what such a system might look like and solicit criticism, as there's a good chance I'm missing some legal / technical / ethical reason why it's infeasible or a bad idea. If this idea is promising, I hope to get the idea out there among EAs so that someone with more technical chops and organizational experience can make it happen, although if no one is available (and the idea is sound) I plan to keep working on it myself.

How an automated suicide prevention system might work

An automated suicide prevention system would need to do three things: (1) find online content expressing suicidal intent, (2) acquire enough identifying information to make an intervention possible, and (3) carry out an intervention. I'll discuss each of these steps in turn.

To find posts expressing suicidal thoughts, one fairly simple method would be to access recent posts on a fixed list of social media sites and use an ML model to identify posts indicating risk of suicide. Some sites, like Twitter, have an API available for bots, while others do not. To use most existing ML methods, we would need a training set of posts labeled for whether or not they express suicidal intent. Such a data set is definitely obtainable: part of the paper "Natural Language Processing of Social Media as Screening for Suicide Risk" describes the creation of a data set for a similar purpose by finding people who had self-reported the dates of past suicide attempts in the mental health database OurDataHelps.org and scraping their social media posts from before the attempt.[2]
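
As a rough illustration of what this first step might involve, here is a minimal sketch of training such a classifier. It assumes a hypothetical labeled data set (a CSV with "text" and "label" columns) and uses an off-the-shelf bag-of-words model rather than whatever architecture the cited paper actually used:

```python
# Minimal sketch of training a post-level risk classifier on a labeled data set.
# Assumes a hypothetical CSV with columns "text" and "label" (1 = expresses
# suicidal intent, 0 = does not); a production system would need something far
# more careful than a bag-of-words model.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

posts = pd.read_csv("labeled_posts.csv")  # hypothetical training set
train_X, test_X, train_y, test_y = train_test_split(
    posts["text"], posts["label"], test_size=0.2, stratify=posts["label"])

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2),
    LogisticRegression(max_iter=1000, class_weight="balanced"))
model.fit(train_X, train_y)

# Report precision/recall on held-out posts; recall on the "at risk" class is
# the number to watch for an application like this.
print(classification_report(test_y, model.predict(test_X)))
```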

Next, the information that can be acquired about a user varies a lot from site to site. Some sites encourage users to use their real names. Others have opt-in location-sharing features; Twitter, for instance, allows users to include GPS data with each tweet (moreover, this paper concludes that automated processing of Twitter users' profiles yields a city of residence in 17% of cases, and these inferred cities correlate well with GPS data when the latter are present). An automated search through a user's past posts may turn up additional information. It's also possible that even if some sites do not make certain pieces of user information publicly available, they would be willing to supply them to a service like this.
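
To illustrate this second step, here is a small sketch of pulling a location signal out of a single tweet. It assumes a payload shaped like the Twitter API v2 `geo` object; the exact field names are my assumption and should be checked against current API documentation:

```python
# Sketch of extracting a location signal from a single tweet payload.
# Field names follow my understanding of the Twitter v2 tweet object's "geo"
# field (an optional GeoJSON "coordinates" point and/or a "place_id"); treat
# them as assumptions to verify against the API docs.
from typing import Optional

def extract_location(tweet: dict, places_by_id: dict) -> Optional[str]:
    geo = tweet.get("geo") or {}
    point = geo.get("coordinates")
    if point and point.get("coordinates"):
        lon, lat = point["coordinates"]  # GeoJSON order: [longitude, latitude]
        return f"GPS point: lat={lat}, lon={lon}"
    place = places_by_id.get(geo.get("place_id"))
    if place:
        return f"Tagged place: {place.get('full_name')}"
    return None  # fall back to profile text, past posts, etc.

# Usage with a hypothetical payload:
tweet = {"geo": {"place_id": "abc123"}}
places = {"abc123": {"full_name": "Columbus, OH"}}
print(extract_location(tweet, places))  # Tagged place: Columbus, OH
```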

Finally, we come to the intervention. There are many possible interventions, which vary in the user information they require, their efficacy, and their potential for negative side effects (the side effects are discussed below in the "Possible Pitfalls" section). A sketch of how a system might choose among them appears after the list:

  • Sending a message to the individual who made the post: a message including a hotline number could be sent to the person. Some sites already do something like this: for instance, Quora includes a hotline number before answers to questions deemed related to suicide.
  • Sending messages to contacts of the person: Access to people's contacts is available on many platforms. A message could be sent to several contacts (perhaps prioritizing those with the same last name to reach family members) alerting them to the concerning message and urging them to reach out to the person or contact authorities if they deem it serious.
  • (somewhat less feasible) Alerting emergency responders: This requires knowledge of the person's address or at least city of residence, and some automated means of contacting authorities.
  • (somewhat less feasible) Alerting mental health providers to contact the person: This intervention only makes sense if we're searching for content that indicates potential for future suicidality rather than imminent risk. It would presumably require coordination with large telehealth providers.
  • (less feasible) Asking a hotline to call the person: This would require coordination with hotlines and access to the user's phone number. Many people don't answer calls from unknown numbers, so this would probably need to be paired with a message informing them that a hotline will call them. This might seem intrusive or creepy to people, and as far as I know there is no precedent for hotlines doing this.
  • (less feasible) Interfacing with an existing system:  Facebook has an existing system for sending emergency responders to help suicidal individuals, and perhaps they would be willing to extend it to cases where someone posts on another platform but there's enough information (such as their name, profile picture, or phone number) to link them to a Facebook account. However, it seems unlikely that they would want to handle the new influx of reports, since in their system a human vets each one.
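
Here is the sketch mentioned above: a hypothetical escalation rule that picks among these interventions based on what information is available. The field names, the ordering, and the notion of an "imminent" flag are all my own assumptions for illustration, not a settled design:

```python
# Hypothetical escalation logic: pick an intervention the available user
# information actually supports. Field names and ordering are illustrative
# assumptions only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FlaggedUser:
    platform_handle: str
    phone_number: Optional[str] = None
    city: Optional[str] = None
    contacts: tuple = ()

def choose_intervention(user: FlaggedUser, imminent: bool) -> str:
    if not imminent:
        return "send_direct_message_with_hotline_info"
    if user.city:
        return "alert_emergency_responders"         # needs at least a city/address
    if user.contacts:
        return "message_trusted_contacts"
    if user.phone_number:
        return "request_hotline_callback"           # would require hotline cooperation
    return "send_direct_message_with_hotline_info"  # always possible via the platform

print(choose_intervention(FlaggedUser("@example", city="Columbus"), imminent=True))
```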

Impact

To assess the impact of such a system, I think there are two main considerations: how well someone's likelihood of attempting suicide can be determined from their online content by an automated system, and how effective the possible interventions are. (Note: there's a ton of existing literature on these questions, and what's below is only the start of a review; I'm mostly just trying to dump some relevant data here rather than make a tight argument. I plan to look into the literature more and make a more thorough post on this if doing so appears useful.)

One paper related to the first question, bearing more on gauging long-term risk than predicting an imminent crisis, is the above-mentioned "Natural Language Processing of Social Media as Screening for Suicide Risk." The authors identified 547 individuals who had attempted suicide in the past, some from the mental-health database OurDataHelps.org (now defunct) and others who made reference to past attempts on Twitter, matched them with demographically similar controls, and trained a classifier to distinguish the two groups based on their posts in the 6 months leading up to the attempt[3]. They conclude that one version of their model, if given the same amount of data for other users and asked to flag those likely to attempt suicide, would successfully flag 24% of the users who would eventually attempt suicide, with only 67% as many false positives as true positives (the model can be tuned to flag more at-risk users at the cost of a higher false-alarm rate).
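
To make these figures concrete, here is my own back-of-envelope reading of what they imply about precision and recall (an interpretation of the reported numbers, not an analysis taken from the paper):

```python
# Back-of-envelope interpretation of the reported figures: recall = 24%, and
# false positives = 0.67 x true positives, which pins down precision
# regardless of the population size.
recall = 0.24                    # fraction of eventual attempters flagged
fp_per_tp = 0.67                 # false positives per true positive

precision = 1 / (1 + fp_per_tp)  # TP / (TP + FP) = 1 / (1 + FP/TP)
print(f"Implied precision: {precision:.0%}")  # ~60%
print(f"Reported recall:   {recall:.0%}")     # 24%
```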

There are numerous other academic papers that are relevant to the above questions, a few of which I'll summarize here briefly:

  • Belfort et al. surveyed 1350 adolescents admitted to a hospital for suicidality, finding that 36 had communicated their intent to commit suicide electronically (though not necessarily publicly).
  • Braithwaite et al. compared suicide risk surveys of Mechanical Turk participants with a risk rating derived from ML analysis of their tweets: their algorithm correctly classified 53% of the suicidal individuals and 97% of the non-suicidal ones.
  • O'Dea et al. used the Twitter API to search for tweets containing phrases potentially linked to suicide, hand-classified a sample of them by seriousness, and then trained an ML model on that data: they concluded that the ML model was able to "replicate the accuracy of the human coders," and they estimated that (as of 2015) there were about 32 tweets per day expressing a 'strongly concerning' level of suicidality (which is much less than I was expecting; I might be misunderstanding this).

Now, I'll briefly summarize some papers I found on how effective various interventions are at preventing suicide:

  • Pil et al. estimated that a suicide helpline in Belgium prevented about 36% of callers from attempting suicide and, on average, gained male callers 0.063 QALYs and female callers 0.019 QALYs.
  • Brown et al. concluded that CBT can reduce suicide risk by 50% when compared to standard care. If we assume that standard care is not actively harmful, this implies that at least some forms of intervention by medical practitioners are highly effective at preventing suicide.
  • Neimeyer and Pfeiffer reviewed the literature in 1994 and concluded that most existing studies detected no impact of crisis centers on suicide rates except among white females aged 24 or younger, for whom they were highly effective.
  • Player et al. presented a qualitative analysis of interviews with 35 men deemed at risk of suicide, as well as 47 families of men who had recently committed suicide. In general, despite some complaints, those interviewed affirmed the value of the medical system in helping them stay alive, also mentioning the usefulness of having trusted contacts to talk to.

Possible Pitfalls

There are several potential problems with this proposal, and I'll list the ones I've thought of here:

Potential for public criticism

This system would be likely to receive criticism for violating privacy. The most famous past analogue of the sort of system discussed here was Samaritans Radar, a service from the Samaritans organization that users could add to Twitter and which scanned the text of all followed individuals' tweets and alerted the user if any seemed to express suicidal intent. It was shut down after only nine days due to the volume of criticism directed at it (privacy concerns, worry that bad actors could exploit the system to identify vulnerable people, and the possibility that it violated Britain's Data Protection Act).

Additionally, due to the Copenhagen interpretation of ethics, any automated online suicide prevention system would likely be blamed for the deaths it fails to prevent, regardless of its positive effects.

Causing harm to suicidal people / their contacts

If the system involves sending messages to contacts, it has the potential to place a lot of stress on contacts or, in the worst case, make them feel partially responsible for the user's death. In some (or perhaps many) cases, having one's friends / family become aware of one's mental health issues / suicidality could itself be psychologically damaging. If such an automated suicide prevention system became widely known to exist, people might avoid talking about their plans to commit suicide online and so fail to receive support they would have otherwise.

Overburdening existing systems

Depending on how selective the system is at flagging concerning messages, the intervention mentioned above of asking hotlines to call people might be infeasible simply because hotlines couldn't handle the volume of requests the system would generate (hotlines already appear to be short on personnel). If (as seems likely) hotlines would be unable to expand to meet the new demand, and if making a post deemed concerning by the automated system is less predictive of suicidal intent than calling a hotline, such a system would just dilute the positive impact of hotlines.
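
As a crude illustration of the capacity problem, the following arithmetic combines the ~32 strongly concerning tweets per day estimated by O'Dea et al. (cited above) with some explicitly made-up multipliers for additional platforms, looser flagging thresholds, and false positives:

```python
# Crude capacity arithmetic. The 32/day figure is from O'Dea et al. (Twitter,
# 2015, "strongly concerning" posts only); every other number is a made-up
# placeholder to show how quickly the load on hotlines could grow.
strongly_concerning_tweets_per_day = 32  # O'Dea et al. estimate (see above)
other_platform_multiplier = 5            # hypothetical: coverage beyond Twitter
lower_threshold_multiplier = 10          # hypothetical: flagging milder posts too
false_positive_multiplier = 1.67         # e.g. ~0.67 false positives per true positive

alerts_per_day = (strongly_concerning_tweets_per_day
                  * other_platform_multiplier
                  * lower_threshold_multiplier
                  * false_positive_multiplier)
print(f"~{alerts_per_day:.0f} alerts/day")  # ~2672/day with these placeholders
```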

Additionally, if an automated suicide prevention system became well-known, troublemakers might intentionally provoke a response from it via insincere posts, thereby diverting resources.

Conclusion

Again, I welcome criticism. If it turns out the idea is not critically flawed, I'd also like to know if anyone else (especially anyone with ML experience) has an interest in working on this.

  1. ^

    The Center for Pesticide Suicide Prevention is one notable and very successful exception.

  2. ^

    It's unclear whether it would be most effective to target interventions at users expressing short-term suicidal intent or whose post history indicates they are at risk for attempting suicide in the future. The authors of "Natural Language Processing of Social Media as Screening for Suicide Risk" argued in favor of the latter, and designed their model for identifying concerning posts accordingly. I'll discuss both possibilities here.

  3. ^

    The researchers also tested excluding the data within the three months preceding the attempts and found that this did not substantially affect the capability of their classifier, suggesting that it wasn't relying on posts indicating imminent suicide attempts. I guess this also suggests either that such posts are uncommon, that they are usually preceded by enough other indications that they aren't uniquely informative to a classifier, or that they were present but the model was not sophisticated enough to take them into account.

Comments

I am inclined to think that interventions that stem from people's statements on social media should have a light touch that provides the suicidal individual the means to reach out and get help.

A heavier hand, although perhaps more immediately effective in suicide prevention, may very quickly lose its value because it will likely deter people from reaching out and expressing their thoughts. If people are concerned that their communications can be used to confine or institutionalize them, they probably will just keep to themselves and their state of misery will worsen.

I am inclined to think similarly regarding confidentiality in the psychotherapist context even with a suicidal/homicidal patient. Attempts to use these contexts to disempower people, even when doing so might seem to make sense, will ultimately just dilute these tools and make us less effective at helping.

This is a good point. The Belfort et al. paper mentioned above implies that, among adolescents admitted to a certain psychiatric emergency room due to suicide concerns in 2012, at least 1% presented because a peer with whom they had communicated electronically "shared information with an adult or encouraged access to care," which suggests there's a fair bit of informal suicide prevention being done online that could potentially be disrupted by knowledge of an automated service (though I guess texts also fall under "electronic means").

Thank you for writing this. I think the concept has potential. I don't think the content you've written here is sufficient to make the case (not that there should be an expectation that something is fully thought through before it appears on the Forum).

Context in case it's useful: I have volunteered with Samaritans for c20 years, and I founded the Talk It Over chatbot (forum post from a couple of years ago here)

I'm less excited by your work on how to find/identify suicidal people.

  • In our work at TIO, we've found that it's very easy to reach people with google ads
  • This is consistent with indications/evidence that people are much more willing to be open and honest in their google searches than they are in other contexts. So they might have high willingness to google "I hate myself", "I want to die", etc
  • Not to be too negative; the research you allude to may also be helpful, but the strength of this proposal doesn't live or die on your ability to reach suicidal people

I'm more interested in interventions which actually effectively reduce suicide -- I'm not convinced we have any?

  • I wasn't familiar with Pil et al, thank you for sharing that study; interesting that it found a suicide helpline to be effective. I haven't read the study, was it a good quality study? Was it based on experimental evidence of the effectiveness of the intervention, or did it take effectiveness for granted? If experimental, how good was the study design?
  • I haven't done a careful review of suicide interventions, but my impression, based on casual conversations with academic suicide experts speaking at relevant conferences, is that:
  • ... suicide helplines don't seem to have a good evidence base supporting them
  • ... systemic policy interventions (putting a limit on the number of packets of paracetamol you can buy, catalytic converters, changes to fertilisers) do have a much better evidence base
  • ... I don't know about CBT, but I'm sympathetic to the possibility that it might be effective (at least some of the time)
  • TIO's impact model doesn't mention suicide prevention, precisely because I didn't have confidence that suicide helplines achieved this, and hence didn't have confidence that TIO achieves this (although I hope it does)

A full impact analysis needs more on how good it is to prevent a suicide

  • Some might argue that taking actions to prevent suicides fails to consider the subject's wishes; their own perspective is that their life is net negative and they would be better off dead. I'm not saying that this perspective is correct, rather that there is a burden of proof on us to show that it doesn't necessarily hold
  • Even if it doesn't hold, a weaker version could be argued for: maybe those who are prevented from dying by suicide have a higher propensity to suffer from depression, and hence the in-expectation DALYs averted are lower than for someone else of the same age

Liability risks may make suicide prevention apps neglected

  • It seems, contrary to my impression when reading your title, that you didn't envisage creating an app; I'll comment briefly on this nonetheless
  • As far as I'm aware, several mental health apps and non-tech-based service providers are very nervous about suicide, and are keen to direct users to a suicide helpline as soon as suicide is mentioned
  • There is some risk here: if a user used an app or service and their text was stored, and that user died, then the app provider could be accused of being liable for their death
  • It seems that the method for managing this risk is surprisingly simple: at Samaritans we simply explain to service users that we can't trace their call, and if they need help they should seek help themselves

Thanks for your comment. I feel very grateful to have received such a thorough reply (especially from someone with so much experience in the area).

To be honest, I haven't looked carefully at most of the papers I mentioned here concerning intervention effectiveness, including Pil et al. As I mentioned in the post, I still plan to do a more extensive literature review. It's interesting to hear your perception of how academic experts feel about intervention effectiveness; I tried a bit to find a recent thorough review article on this, but didn't have much luck.

Regarding the question of whether suicide prevention is net-positive in the first place, as I mentioned in another reply below, I felt pretty convinced of this after casually reading this blog post (whose main argument is that most suicides are the result of impulsive decisions or treatable conditions / temporary circumstances), but I think it would definitely be worthwhile to go through the argument more critically.

I hadn't considered liability risks, and, though I guess what I was describing is more like a bot than an app, it's possible they would still be relevant, so thanks for drawing my attention to that.

Epistemic status: I have no idea what I'm talking about and this is probably wrong and please totally correct me.

So,

I imagine that if people want to suicide, maybe we should let them?

[No idea what I'm talking about - please correct me]

[upvoted btw since I think this is anyway an interesting relevant discussion]

I once thought similarly.

Check out this 80k podcast for a good argument that suicidal people will live net positive lives if thwarted. It persuaded me.

https://podcasts.google.com/feed/aHR0cHM6Ly9wY3IuYXBwbGUuY29tL2lkMTI0NTAwMjk4OA/episode/dGFnOnNvdW5kY2xvdWQsMjAxMDp0cmFja3MvNDA5ODk2ODMx?ep=14

Thanks for bringing this up! I found this blog post by Scott Alexander pretty convincing as an argument that the vast majority of suicides are the result of temporary or treatable circumstances / psychological issues, but I haven't gone through the argument very critically.

There are multiple problems with this post. 
Suicide prevention runs into similar issues as drug/smoking prevention and pro-life issues. It aims at preventing one action at the end of a series of problems rather than addressing the events that lead up to it.

Sending information to contacts is a terrible idea. There would be no way for an automated system to identify the difference between supportive friends, recent exes, abusive or violent partners, incesty/abusive family, pimps, or even employers. The percentage of people who are suicidal and have a strong support system is pretty small.

This post also assumes that authorities and healthcare providers are helpful. In the US there have been many incidents of law enforcement being violent towards people with mental illness, and much of what happens today in the psychiatric hospital system is going to be seen as cruel and inhumane 50 years from now. There are many people who develop PTSD and related trauma diagnoses after going to a hospital.

So how could these things better be addressed? 
When I was in high school, most suicidal teens felt that way because they had parents who encouraged them to commit suicide. This is a felony. Collecting evidence against abusers is extremely difficult to do safely, and much more difficult for minors, who have significantly less power in the situation. Phones already record our conversations for the purpose of advertising, so why not use that as an easy way for victims to collect evidence? That way, when they decide they want to press charges, they don't have to go through the laborious task of trying to prove events that happened in the past.

Honestly, any other suggestions beyond that are systemic. Efforts to prevent suicide aren't useful in failing economies, broken healthcare systems, corrupt legal systems, and non-existent domestic violence services, child protective services, and homelessness services. Any interaction with authorities, especially CPS, exponentially worsens the situation because it's a potentially traumatic interaction with zero follow-through or support. The only time I have reported someone is when I believed the abuse they were going through was worse than the worst abuse they would get from the government.

Most people who commit suicide don't do so as a mistake; they do so after repeated calls for help are ignored. Stories of people who "wouldn't have done it otherwise" are not representative of the majority of people who die this way. It's also not representative of "grieving family members" who participated in the abuse.

Thanks for your reply, especially the part regarding sending messages to contacts; I hadn't appreciated on a deep level how bad that could be. Prior to writing this post, I hadn't realized how varied people's perspectives are on the topic of suicide prevention, and your comment (and others) made me realize that, if I choose to keep looking into this, I need to devote more thought and research toward the big picture stuff and talk to / read things by people with more direct experience before (potentially) speculating on possible interventions.

Doing research is always a great way to go. Getting feedback is important, because that's when you'll hear things you might not see within the internet search algorithm matrix. For some people it can be difficult to unlearn after they've already been exposed to too much information.
