Ben Esche

107 karma · Joined Apr 2021
Edit: thinking about the numbers a bit: there are ~6,500 suicide deaths in the UK per year. Samaritans has something like 150 people answering phones 24/7 (extremely rough). So even if every one of those 6,500 people called (absurd), and even if the service improved so much that they all survived (also absurd), that's still only ~0.04 lives per person-day (taking a day as 8 hours). So I think you have to start there and then go down a few OOMs to account for those absurdly optimistic assumptions.
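For anyone who wants to check the arithmetic, here it is as a quick sketch (the inputs are just the rough estimates above, not official figures):

```python
# Back-of-envelope version of the numbers above (inputs are my own rough
# estimates, not official statistics).
uk_suicides_per_year = 6500          # ~UK suicide deaths per year
volunteers_on_duty = 150             # very rough: phones covered 24/7
shift_hours = 8                      # "a day" of volunteering, per the text
person_days_per_year = volunteers_on_duty * (24 / shift_hours) * 365

# Absurdly optimistic upper bound: every at-risk person calls, and every
# call prevents a death.
lives_per_person_day = uk_suicides_per_year / person_days_per_year
print(round(lives_per_person_day, 3))  # ~0.04
```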

Answer by Ben Esche · Feb 05, 2024

I answered calls for Samaritans for about a year, and answered texts on Shout for about the same amount of time before that. From my own experience, I'd say 1 to 5 lives per day is extremely optimistic, for the following reasons:

  • The vast majority of callers are not planning to take their lives right at that moment / imminently. People call for all kinds of reasons - e.g. loneliness, bereavement, being in prison and that sucking, trying to stop self-harming, etc.
  • The majority of calls are from repeat callers (and a significant minority are misuse of the service); only a few are calling for the first time. (Samaritans doesn't track people, but people often just say they call regularly. Shout did track people; I forget the exact % of conversations that were not from first-time users, but it was definitely most.) And this is obvious, right - if someone calls 20 times, they generate 20 times as many calls as someone who calls once.
  • For people who really are at severe risk, the reduction in the probability of suicide from one call is pretty unclear, but it is certainly much less than 100%. Even for someone who eventually works through what they're dealing with, it will probably have taken many calls, use of other mental health services, reliance on friends, etc., probably over months or years.

I'm not aware of any trials of this kind of intervention, but they could be done - e.g. introducing a new hotline service in a country that doesn't currently have one, but only in a randomly selected half of districts/counties/states, and then comparing suicide rates over time.

My own unscientific feeling from doing this was that I probably helped a lot of people feel better that day / deal with some kind of crisis, but probably directly prevented very few suicides, if any.

Thank you very much - I'm part way through Christian Tarsney's paper and am definitely finding it interesting. I'll also have a go at Hilary Greaves' piece. Listening to her on the 80,000 Hours podcast was one of the things that contributed to my asking this question. She seems (at least there) to accept EV as the obviously right decision criterion, but a podcast probably necessitates simplifying her views!

Thanks very much. I am going to spend some time thinking about the von Neumann–Morgenstern theorem. Despite my huge in-built bias towards believing things labelled "von Neumann", on an initial scan only one of the axioms (transitivity) felt obviously "true" to me about things like "how good is the whole world?". They all seem true if you're actually playing games of chance for money, of course, which seems to often be the model. But I intend to think about that harder.
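For reference (this is my own paraphrase of the standard textbook statement, not something from the thread), the four axioms for a preference relation over lotteries are roughly:

```latex
% Standard statement of the von Neumann–Morgenstern axioms for a preference
% relation \succeq over lotteries L, M, N (my paraphrase).
\begin{enumerate}
  \item Completeness: for all $L, M$, either $L \succeq M$ or $M \succeq L$.
  \item Transitivity: if $L \succeq M$ and $M \succeq N$, then $L \succeq N$.
  \item Continuity: if $L \succeq M \succeq N$, there is some $p \in [0,1]$
        with $pL + (1-p)N \sim M$.
  \item Independence: $L \succeq M$ iff $pL + (1-p)N \succeq pM + (1-p)N$
        for every lottery $N$ and every $p \in (0,1]$.
\end{enumerate}
```

Transitivity is the one I singled out above; independence is, I believe, usually the most contested when the "prizes" are whole world-states rather than money.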

On GiveWell, I think they're doing an excellent job of trying to answer these questions. I guess I tend to get a bit stuck at the value-judgement level (e.g. how to decide what fraction of a human life a chicken life is worth). But it doesn't matter much in practice because I can then fall back on a gut-level view and yet still choose a charity from their menu and be confident it'll be pretty damn good.

Hi Harrison. I think I agree strongly with (2) and (3) here. I'd argue that infinite expected values which depend on (very) large numbers of trials, bankrolls, etc. can and should be ignored. With the St. Petersburg Paradox as stated in the link you included, making any vaguely reasonable assumption about the wealth of the casino, or the lifetime of the player, the expected value falls to something much less appealing! This is kind of related to my "saving lives" example in my question - if you only get to play once, the expected value becomes basically irrelevant because the good outcome just actually doesn't happen. It only starts to be worthwhile when you get to play many times. And hey, maybe you do. If there are 10,000 EAs all doing totally (probabilistically) independent things that each have a 1 in a million chance of some huge payoff, we start to get into realms worth thinking about.
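To make the finite-casino point concrete, here's a rough sketch using the standard version of the game (flip a fair coin until the first heads; a win on flip k pays $2^k) and an assumed bankroll of roughly $1 trillion - both of those specifics are my own illustrative assumptions, not from the linked article:

```python
# St. Petersburg game with a capped bankroll (illustrative assumptions: the
# standard $2-doubling version of the game, and a ~$1.1 trillion casino).
# With an unbounded bank the expected value is infinite; with a cap it isn't.
bankroll = 2 ** 40                   # assumed casino wealth, ~$1.1 trillion

ev = 0.0
k = 1
while 2 ** k <= bankroll:
    ev += (0.5 ** k) * (2 ** k)      # each affordable round adds exactly $1
    k += 1
ev += (0.5 ** (k - 1)) * bankroll    # all longer games just pay out the whole bank
print(ev)                            # 41.0
```

So even a trillion-dollar casino only makes the game worth about $41 a ticket. (And for the 10,000-EAs case: the chance that at least one 1-in-a-million bet pays off is 1 - (1 - 1/1,000,000)^10,000, i.e. about 1%.)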

Hi Larks. Thanks for raising this way of re-framing the point. I think I still disagree, but it's helpful to see this way of looking at it, which I really hadn't thought of. I still disagree because I am assuming I only get one chance at doing the action, and personally I don't value a 1 in a million chance of being saved higher than zero. I think if I know I'm not going to be faced with the same choice many times, it is better to save 10 people than to let everyone die and then go around telling people I chose the higher expected value!

Thank you - this is all very interesting. I won't try to reply to all of it, but I just thought I would respond to agree with your last point. I think x-risk is worth caring about precisely because the probability seems to be in the "actually might happen" range. (I don't believe at all that anyone knows whether it's 1/6 vs. 1/10 or 1/2, but Toby Ord doesn't claim to either, does he?) It's when you get to the "1 in a million but with a billion payoff" range that I start to get skeptical, because then the thing in question actually just won't happen, barring many plays of the game.

Dear All - just a note to say thank you for all the fantastic answers which I will dedicate some time to exploring soon. I posted this and then was offline for a day and am delighted at finding five really thoughtful answers on my return. Thank you all for taking the time to explain these points to me. Seems like this is a pretty awesome forum.