Content warning: extensive discussion of suicide.
Introduction
Worldwide, about 1 million people commit suicide every year, more than die of malaria. As far as I can tell from a cursory search, there haven't been many efforts by effective altruists directed at this issue[1], probably because, at least in first-world countries, it already receives a lot of attention, making it less likely that there are unknown promising interventions. However, I recently realized that there may be very inexpensive methods of automated online suicide prevention available.
The purpose of this post is to present what such a system might look like and solicit criticism, as there's a good chance I'm missing some legal / technical / ethical reason why it's infeasible or a bad idea. If this idea is promising, I hope to get the idea out there among EAs so that someone with more technical chops and organizational experience can make it happen, although if no one is available (and the idea is sound) I plan to keep working on it myself.
How an automated suicide prevention system might work
An automated suicide prevention system would need to do three things: (1) find online content expressing suicidal intent, (2) acquire enough identifying information to make an intervention possible, and (3) carry out an intervention. I'll discuss each of these steps in turn.
To find posts expressing suicidal thoughts, one fairly simple method would be to access recent posts on a fixed list of social media sites and use an ML model to identify posts indicating suicide risk. Some sites, like Twitter, have an API available for bots, while others do not. To use most existing ML methods, we would need a training set of posts labeled for whether or not they express suicidal intent. This is definitely obtainable: part of the paper "Natural Language Processing of Social Media as Screening for Suicide Risk" describes the creation of a data set for a similar purpose by finding people who had self-reported the dates of past suicide attempts in the mental health database OurDataHelps.org and scraping their social media posts prior to the attempt.[2]
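To make the scanning step concrete, here is a minimal sketch, assuming a labeled training set like the one described above already exists. The classifier (TF-IDF plus logistic regression) is just a placeholder for whatever model turns out to work best, and the training examples are placeholders rather than real data.

```python
# Minimal sketch: train a text classifier on posts labeled for suicidal intent,
# then score a batch of recent posts and keep those above a risk threshold.
# The two training examples are placeholders, not real data, and the classifier
# (TF-IDF + logistic regression) is a stand-in for whatever model works best.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "placeholder example of a post expressing suicidal intent",
    "placeholder example of an ordinary post about daily life",
]
train_labels = [1, 0]  # 1 = expresses suicidal intent, 0 = does not

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

def flag_concerning_posts(posts, threshold=0.9):
    """Return the posts whose predicted probability of suicidal intent exceeds the threshold."""
    texts = [post["text"] for post in posts]
    risk_scores = model.predict_proba(texts)[:, 1]  # probability of the "suicidal intent" class
    return [post for post, score in zip(posts, risk_scores) if score >= threshold]
```

In practice the posts would come from a platform API (e.g. a bot account polling recent tweets), and the threshold would need to be tuned against the kind of precision/recall trade-off discussed in the Impact section below.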
Next, the information that can be acquired about a user varies a lot from site to site. Some sites encourage users to use their real names. Others have location-sharing features that users can opt into; for instance, Twitter allows users to include GPS data with each tweet (moreover, this paper concludes that automated processing of Twitter users' profiles yields a city of residence in 17% of cases, and these inferred cities correlate well with GPS data when the latter are present). An automated search through a user's past posts may allow additional information to be acquired. It's also possible that even if some sites do not make certain pieces of user information publicly available, they would be willing to supply them to a service like this.
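As a toy illustration of the profile-based approach (my own sketch of the general idea, not the linked paper's method), the free-text location field of a profile can be matched against a gazetteer of known city names:

```python
# Toy illustration of inferring a city of residence from a profile's free-text
# location field. KNOWN_CITIES is a stand-in for a real gazetteer such as GeoNames.
from typing import Optional

KNOWN_CITIES = {"london", "new york", "chicago", "sydney", "toronto"}

def infer_city(profile_location: Optional[str]) -> Optional[str]:
    """Return a matched city name, or None if the field is empty or unresolvable."""
    text = (profile_location or "").lower()
    for city in KNOWN_CITIES:
        if city in text:
            return city
    return None

print(infer_city("New York, NY"))                # -> "new york"
print(infer_city("somewhere over the rainbow"))  # -> None
```

The 17% figure above suggests that even a much more sophisticated version of this would only resolve a location for a minority of users.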
Finally, we come to the intervention. There are many possible interventions, which vary in the user information they require, their efficacy, and their potential for negative side effects (the last of these is discussed below in the "Possible Pitfalls" section); a rough sketch of how a flagged user might be routed among these options follows the list:
- Sending a message to the individual who made the post: a message including a hotline number could be sent to the person. Some sites already do something like this: for instance, Quora includes a hotline number before answers to questions deemed related to suicide.
- Sending messages to contacts of the person: Access to people's contacts is available on many platforms. A message could be sent to several contacts (perhaps prioritizing those with the same last name to reach family members) alerting them to the concerning message and urging them to reach out to the person or contact authorities if they deem it serious.
- (somewhat less feasible) Alerting emergency responders: This requires knowledge of the person's address or at least city of residence, and some automated means of contacting authorities.
- (somewhat less feasible) Alerting mental health providers to contact the person: This intervention only makes sense if we're searching for content that indicates potential for future suicidality rather than imminent risk. It would presumably require coordination with large telehealth providers.
- (less feasible) Asking a hotline to call the person: This would require coordination with hotlines and access to the user's phone number. Many people don't answer calls from unknown numbers, so this would probably need to be paired with a message informing them that a hotline will call them. This might seem intrusive or creepy to people, and as far as I know there is no precedent for hotlines doing this.
- (less feasible) Interfacing with an existing system: Facebook has an existing system for sending emergency responders to help suicidal individuals, and perhaps they would be willing to extend it to cases where someone posts on another platform but there's enough information (such as their name, profile picture, or phone number) to link them to a Facebook account. However, it seems unlikely that they would want to handle the new influx of reports, since in their system a human vets each one.
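As promised above, here is a rough sketch of how a flagged user might be routed among these options depending on what identifying information is available. None of the intervention names correspond to anything that currently exists; they are placeholders for the integrations described in the list.

```python
# Rough sketch of routing a flagged user to whichever of the above interventions
# the available identifying information supports. The intervention names are
# hypothetical placeholders for integrations that would need to be built.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class FlaggedUser:
    handle: str
    contacts: List[str] = field(default_factory=list)
    city: Optional[str] = None

def plan_interventions(user: FlaggedUser) -> List[str]:
    """List the interventions that can be attempted, lightest-touch first."""
    plan = ["send_platform_message"]      # always possible: reply with a hotline number
    if user.contacts:
        plan.append("notify_contacts")    # requires access to a contact list
    if user.city is not None:
        plan.append("alert_responders")   # requires at least a city of residence
    return plan

print(plan_interventions(FlaggedUser(handle="example_user", city="new york")))
# -> ['send_platform_message', 'alert_responders']
```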
Impact
To assess the impact of such a system, I think there are two main considerations: how well someone's likelihood of attempting suicide can be determined from their online content by an automated system, and how effective the possible interventions are. (Note: there's a ton of existing literature on these questions, and what's below is only the start of a review; I'm mostly just trying to dump some relevant data here rather than make a tight argument. I plan to look into the literature more and make a more thorough post on this if doing so appears useful.)
One paper related to the first question, bearing more on gauging long-term risk than predicting an imminent crisis, is the above-mentioned "Natural Language Processing of Social Media as Screening for Suicide Risk." The authors identified 547 individuals who had attempted suicide in the past, some from the mental-health database OurDataHelps.org (now defunct) and others who made reference to past attempts on Twitter, matched them with demographically similar controls, and trained a classifier to distinguish them based on their posts in the 6 months leading up to the attempt[3]. They conclude that one version of their model, if given the same amount of data for other users and asked to flag those likely to attempt suicide, would successfully flag 24% of the users who would eventually attempt suicide while producing only 67% as many false positives as true positives (the model can be tweaked to flag more at-risk users at the expense of a higher false-alarm rate).
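To translate those figures into more standard terms (this is my own arithmetic, not a number reported in the paper), a false-positive count equal to 67% of the true-positive count corresponds to a precision of roughly

$$\text{precision} = \frac{TP}{TP + FP} = \frac{TP}{TP + 0.67\,TP} = \frac{1}{1.67} \approx 0.60,$$

so at that operating point the model catches about 24% of the people who go on to attempt suicide (recall), and about 60% of the accounts it flags are true positives.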
There are numerous other academic papers that are relevant to the above questions, a few of which I'll summarize here briefly:
- Belfort et al. surveyed 1350 adolescents admitted to a hospital for suicidality, finding that 36 of them had communicated their intent to commit suicide electronically (though not necessarily publicly).
- Braithwaite et al. compared suicide risk surveys of Mechanical Turk participants with a risk rating derived from ML analysis of their tweets: their algorithm correctly classified 53% of the suicidal individuals and 97% of the non-suicidal ones.
- O'Dea et al. used the Twitter API to search for tweets containing phrases potentially linked to suicide, hand-classified a sample of them by seriousness, and then trained an ML model on that data: they concluded that the ML model was able to "replicate the accuracy of the human coders," and they estimated that (as of 2015) there were about 32 tweets per day expressing a 'strongly concerning' level of suicidality (which is much less than I was expecting; I might be misunderstanding this).
Now, I'll briefly summarize some papers I found on how effective various interventions are at preventing suicide:
- Pil et al. estimated that a suicide helpline in Belgium prevented about 36% of callers from attempting suicide and, on average, gained male callers 0.063 QALYs and female callers 0.019 QALYs.
- Brown et al. concluded that CBT can reduce suicide risk by 50% when compared to standard care. If we assume that standard care is not actively harmful, this implies that at least some forms of intervention by medical practitioners are highly effective at preventing suicide.
- Neimeyer and Pfeiffer reviewed the literature in 1994 and concluded that most existing studies detect no impact of crisis centers on suicide rates except among white females aged 24 or younger, for whom they were highly effective.
- Player et al. presented a qualitative analysis of interviews with 35 men deemed at risk for suicide, as well as 47 families of men who had recently committed suicide. In general, despite some complaints, those interviewed affirmed the value of the medical system in helping them stay alive, and also mentioned the usefulness of having trusted contacts to talk to.
Possible Pitfalls
There are several potential problems with this proposal, and I'll list the ones I've thought of here:
Potential for public criticism
This system would be likely to receive criticism for violating privacy. The most famous past analogue of the sort of system discussed here was Samaritans Radar, a service from the Samaritans organization that Twitter users could activate; it scanned the text of tweets from everyone the user followed and alerted the user if any seemed to express suicidal intent. It was shut down after only nine days due to the volume of criticism directed at it (privacy concerns, worry that bad actors could exploit the system to identify vulnerable people, and the possibility that it violated Britain's Data Protection Act).
Additionally, due to the Copenhagen interpretation of ethics, it seems likely that any automated online suicide prevention system would be blamed for the deaths it fails to prevent, regardless of its positive effects.
Causing harm to suicidal people / their contacts
If the system involves sending messages to contacts, it has the potential to place a lot of stress on those contacts or, in the worst case, make them feel partially responsible for the user's death. In some (or perhaps many) cases, having one's friends / family become aware of one's mental health issues / suicidality could itself be psychologically damaging. And if such an automated suicide prevention system became widely known to exist, people might avoid talking about their plans to commit suicide online and so miss out on support they would otherwise have received.
Overburdening existing systems
Depending on how selective a system is at flagging concerning messages, the possible intervention mentioned above of asking hotlines to call people might be infeasible because hotlines wouldn't be able to handle the volume of requests the system would generate (hotlines already appear to be short on personnel). If (as seems likely) hotlines were unable to expand to meet the new demand, and if making a post deemed concerning by the automated system is less predictive of suicidal intent than calling a hotline, such a system would just dilute the positive impact of hotlines.
Additionally, if an automated suicide prevention system became well-known, troublemakers might intentionally provoke a response from it via insincere posts, thereby diverting resources.
Conclusion
Again, I welcome criticism. If it turns out the idea is not critically flawed, I'd also like to know if anyone else (especially anyone with ML experience) has an interest in working on this.
- ^
The Center for Pesticide Suicide Prevention is one notable and very successful exception.
- ^
It's unclear whether it would be more effective to target interventions at users expressing short-term suicidal intent or at users whose post history indicates they are at risk of attempting suicide in the future. The authors of "Natural Language Processing of Social Media as Screening for Suicide Risk" argued in favor of the latter, and designed their model for identifying concerning posts accordingly. I'll discuss both possibilities here.
- ^
The researchers also tested excluding the data from the three months preceding the attempts and found that this did not substantially affect the capability of their classifier, suggesting that it wasn't relying on posts indicating imminent suicide attempts. I guess this also suggests that such posts are uncommon, that they are usually preceded by enough other indications that they aren't uniquely informative to a classifier, or that they were present but the model was not sophisticated enough to take them into account.
I am inclined to think that interventions that stem from people's statements on social media should have a light touch that provides the suicidal individual the means to reach out and get help.
A heavier hand, although perhaps more immediately effective in suicide prevention, may very quickly lose its value because it will likely deter people from reaching out and expressing their thoughts. If people are concerned that their communications can be used to confine or institutionalize them, they probably will just keep to themselves and their state of misery will worsen.
I am inclined to think similarly regarding confidentiality in the psychotherapist context even with a suicidal/homicidal patient. Attempts to use these contexts to disempower people, even when doing so might seem to make sense, will ultimately just dilute these tools and make us less effective at helping.
This is a good point. The Belfort et al. paper mentioned above implies that, among adolescents admitted to a certain psychiatric emergency room due to suicide concerns in 2012, at least 1% presented to the emergency room because, after communicating with a peer electronically, that peer "shared information with an adult or encouraged access to care," which suggests there's a fair bit of informal suicide prevention being done online that could potentially be disrupted by the knowledge of an automated service (though I guess texts also fall under "electronic means").
Thank you for writing this. I think the concept has potential. I don't think the content you've written here is sufficient to make the case (not that there should be an expectation that something is fully thought through before it appears on the Forum).
Context in case it's useful: I have volunteered with Samaritans for c. 20 years, and I founded the Talk It Over chatbot (forum post from a couple of years ago here).
- I'm less excited by your work on how to find/identify suicidal people.
- I'm more interested in interventions which actually effectively reduce suicide -- I'm not convinced we have any?
- A full impact analysis needs more on how good it is to prevent a suicide.
- Liability risks may make suicide prevention apps neglected.
Thanks for your comment. I feel very grateful to have received such a thorough reply (especially from someone with so much experience in the area).
To be honest, I haven't looked carefully at most of the papers I mentioned here concerning intervention effectiveness, including Pil et al. As I mentioned in the post, I still plan to do a more extensive literature review. It's interesting to hear your perception of how academic experts view intervention effectiveness; I tried a bit to find a recent thorough review article on this, but didn't have much luck.
Regarding the question of whether suicide prevention is net-positive in the first place, as I mentioned in another reply below, I felt pretty convinced of this after casually reading this blog post (whose main argument is that most suicides are the result of impulsive decisions or treatable conditions / temporary circumstances), but I think it would definitely be worthwhile to go through the argument more critically.
I hadn't considered liability risks, and, though I guess what I was describing is more like a bot than an app, it's possible they would still be relevant, so thanks for drawing my attention to that.
Epistemic status: I have no idea what I'm talking about and this is probably wrong and please totally correct me.
So,
I imagine that if people want to commit suicide, maybe we should let them?
[No idea what I'm talking about - please correct me]
[upvoted btw since I think this is anyway an interesting relevant discussion]
I once thought similarly.
Check out this 80k podcast for a good argument that suicidal people will go on to live net-positive lives if their attempts are thwarted. It persuaded me.
https://podcasts.google.com/feed/aHR0cHM6Ly9wY3IuYXBwbGUuY29tL2lkMTI0NTAwMjk4OA/episode/dGFnOnNvdW5kY2xvdWQsMjAxMDp0cmFja3MvNDA5ODk2ODMx?ep=14
Thanks!
Thanks for bringing this up! I found this blog post by Scott Alexander pretty convincing as an argument that the vast majority of suicides are the result of temporary or treatable circumstances / psychological issues, but I haven't gone through the argument very critically.