ishaan

Comments

i.e. SoGive would think depression is worse than death. Maybe this isn't quite a "sanity check" but I doubt many people have that moral view.

I replied in the moral weights post w.r.t. the "worse than death" thing. (I think that's a fundamentally fair, but fundamentally different point from what I meant re: sanity checks, namely not crossing hard lower bounds on the empirical effect of cash on well-being vs. the empirical effect of mental health interventions on well-being.)

My response to this post overall is that part of what is going on here is that different people and different organizations mean very different things by "depression". Since depression is not really a binary, the value of averting "1 case of severe depression" can change a lot depending on how you define severity, such that reasonable definitions of "sufficiently bad depression" can plausibly differ by 1-3x once you break it down into "how many SD counts as curing depression" terms.

However, the in-progress nature of SoGive's mental health work makes pinning down what we do mean somewhat tricky. What exactly did the participants in the SoGive Delphi process mean when they said "severe depression"? How should I, as an analyst who isn't aiming to set the moral weights but is attempting to advise the people using them, interpret that? These things are currently in flux, in the sense that I'm in the process of making various judgement calls about them right now, which I'll describe below.

You commented:


I'm not sure 2-5 SD-years is plausible for severe depression. 3 SDs would saturate the entire scale 0-24.

It's true that the PHQ-9's maximum score of 27 points corresponds to roughly 2-4 SD. Exactly how many SD depends on the spread of your population (for example, if 1 SD = 6.1 points, then the 27-point scale spans about 4.43 SD), and for some population spreads it would be 3 SD.
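For concreteness, a minimal sketch of that dependence on population spread (the 6.1-point SD is the figure above; the 9.0-point SD is an assumed value chosen to make the scale span exactly 3 SD):

```python
# Hypothetical illustration: how many SD the full PHQ-9 scale spans
# depends entirely on the population spread you assume.

def scale_span_in_sd(max_score: float, population_sd: float) -> float:
    """Number of standard deviations spanned by a 0-to-max_score scale."""
    return max_score / population_sd

for sd_points in (6.1, 9.0):  # assumed population SDs, in PHQ-9 points
    print(f"1 SD = {sd_points} points -> 27-point scale spans "
          f"{scale_span_in_sd(27, sd_points):.2f} SD")
# 1 SD = 6.1 points -> 27-point scale spans 4.43 SD
# 1 SD = 9.0 points -> 27-point scale spans 3.00 SD
```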
 

i.e. SoGive would think depression is worse than death. Maybe this isn't quite a "sanity check" but I doubt many people have that moral view.

These two things are related, actually! I think the trouble is that the phrase "severe depression" is ambiguous as to how bad it is, so different people can mean different things by it.

One might argue that the following was an awkward workaround which should have been done differently, but to make my internal thought process transparent, here is roughly what I thought after joining SoGive, starting this analysis, and encountering these weights:

-> "hm, this implies we're willing to trade averting 25 years of depression against one (mostly neonatal) death. Is this unusual?" 

-> "Maybe we are thinking about the type of severe, suicidal depression that is an extremely net negative experience, a state which is worse than death." 

-> "Every questionnaire creator seems to have recommended cut-offs for gradients of depression such as "mild" and "moderate"  (e.g. the creators of the PHQ-9 scale are recommending 20 points as the cut-off for "severe" depression) but these aren't consistent between scales and are ultimately arbitrary choices."

-> "extrapolating linearly from the time-trade-off literature people seemed to think that a year of depression breaks even with dying a year earlier around 5.5sd. Maybe less if it's not linear."

-> "But maybe it should be more because what's really happening here is that we're seeing multiple patients improve by 0.5-0.8 sd. The people surveyed in that paper think that the difference between 2sd->3sd is bigger than 1sd->2sd.  People might disagree on the correct way to sum these up." 

-> concluding with me thinking that various reasonable people might set the standard for "averting severe depression" anywhere between 2-6 SD, depending on whether they wanted ordinary severity or worse-than-death severity.
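Here is a minimal sketch of the linear extrapolation referenced in the list above. The 5.5 SD break-even point comes from the time-trade-off discussion; the linear functional form and the specific severities printed are my illustrative assumptions, not SoGive's or HLI's model:

```python
# Illustrative only: linear extrapolation of time-trade-off judgements.

BREAK_EVEN_SD = 5.5  # severity at which a year alive trades evenly against a year of life

def year_value(severity_sd: float) -> float:
    """Value of a year lived at a given depression severity:
    1.0 at 0 SD, falling linearly to 0.0 at BREAK_EVEN_SD."""
    return 1.0 - severity_sd / BREAK_EVEN_SD

for sd in (0.0, 2.0, 3.0, 5.5, 6.0):
    print(f"{sd} SD -> year worth {year_value(sd):+.2f} life-years")
# 6 SD comes out slightly negative, i.e. "worse than death" under this
# linear reading; a convex weighting (where 2 SD -> 3 SD matters more than
# 1 SD -> 2 SD, as the survey respondents suggest) would change these numbers.
```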

So, hopefully that answers your question as to why I wrote to you that 2-5 SD is reasonable for severe depression. I'm going to try to justify this further in subsequent posts. Some additional thoughts I had were:

-> I notice that this still weights depression more heavily than the people surveyed in the time-trade-off literature did, but if we set it at the higher range of 3-6 SD it still feels like a morally plausible view (especially considering that some people might have assigned lower moral weight to neonates).

-> My role is to tell people what the effect is, not to tell them what moral weights to use. However, I'm noticing that all the wiggle room in interpreting what "severe" means falls on me, and I notice that I keep wanting to nudge the SD-years I accept upwards in order to make the view match what I think is morally plausible.

-> I'll just provisionally use something between 3-5 SD-years for the purpose of completing the analysis, because my main aim is to figure out what therapy does in terms of SD.

-> But I should probably publish a tool that allows people to think about moral weights in terms of standard deviations, and maybe we can survey people for moral weights again in the future in a manner that lets them talk about standard deviations rather than whatever connotations they attach to "severe depression". Then we can figure out what people really think about various grades of depression and how much income and life they're willing to trade against them.

In fact, the next thing I'm scheduled to publish is a write-up that talks in detail about how to translate SD into something more morally intuitive. So hopefully that will help us make some progress on the moral weights issue.
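As a preview of what such a translation might look like, here is a minimal sketch; the severity thresholds and the 0.65 SD-years effect size are hypothetical placeholders, not SoGive's published weights:

```python
# A minimal sketch of an SD <-> moral-intuition translation of the kind
# described above. All numbers are hypothetical placeholders.

def cases_averted_equivalent(effect_sd_years: float, severity_threshold_sd: float) -> float:
    """Express an effect (in SD-years) as equivalents of "one case of
    severe depression averted for a year", where one case is defined
    as severity_threshold_sd SD-years."""
    return effect_sd_years / severity_threshold_sd

effect = 0.65  # e.g. one person improving by 0.65 SD for one year (assumed)
for threshold in (2.0, 3.0, 5.0):
    print(f"'severe' = {threshold} SD: "
          f"{cases_averted_equivalent(effect, threshold):.2f} cases averted")
# The same empirical effect looks ~2.5x bigger or smaller depending on where
# the "severe depression" bar is set -- the definitional spread discussed above.
```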

So to summarize: I think (assuming your calculations w.r.t. everyone else's weights are correct) what's going on here is that SoGive appears to be weighting depression 4x more heavily than everyone else, but those moral weights were set in the absence of a concrete recommendation. Arguably this is an artifact of me choosing, after the fact, to set a really high SD threshold for "severity" as a reaction to the weights, and what really needs to happen is to go through the process I described of polling people again in a way that breaks down "severity" differently. In the final analysis, once a concrete recommendation comes out, it probably won't be that different. (Though you've added two items, SD <-> DALY/WELLBY and cash <-> SD, to my list of things to check for robustness, and if anything notable turns up I'm definitely going to flag it, so thank you for that.) I do think this story will ultimately end with some revisiting of moral weights: how they should be set, what they mean, and how to communicate them.

(There's another point that came up in the other thread, regarding "does it pass the sanity check w.r.t. cash transfer effects on well-being", which this doesn't address. Although it falls outside the scope of my current work, I have been wanting to get a firmer sense of the empirical cash <-> WELLBY <-> SD-depression correlations, and apropos of your comments, perhaps this should be made more explicit in moral weights agendas.)

To expand a little on "this seems implausible": I feel like there is probably a mistake somewhere in the notion that anyone involved thinks that doubling income has a 1.3 WELLBY effect and that severe depression also has a 1.3 WELLBY effect.

The mistake might be in your interpretation of HLI's document (it does look like the 1.3 figure is a small part of a more complicated calculation regarding the economic impacts of AMF and their effect on well-being, rather than a headline finding about the cash-to-well-being conversion rate). Or it could be that HLI has an error or inconsistencies between reports. Or it could be that it's not valid to apply that 1.3 number to SoGive's "income doubling" weights for some reason, because it doesn't actually refer to the WELLBY value of doubling.

I'm not sure exactly where the mistake is, so it's quite possible that you're right, or that we're both missing something about how the math behind this works which makes it all fit, but I'm suspicious because it doesn't square with various other pieces of information that I know. For instance, it doesn't square with HLI reporting that psychotherapy is 9x GiveDirectly when the cost of treating one person with therapy is around $80, or with their estimate that it takes $1000 worth of cash transfers to produce 0.92 SD-years of subjective-well-being improvement ("totally curing just one case of severe depression for a year" should correspond to something more like 2-5 SD-years).
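As a rough back-of-envelope cross-check on why those figures are hard to reconcile with a shared 1.3 WELLBY value: the $80 therapy cost, the ~9x GiveDirectly multiple, and the $1000 -> 0.92 SD-years cash figure are from the text above, while the 0.65 SD-years effect per person treated is my assumption (the midpoint of the 0.5-0.8 SD range mentioned earlier):

```python
# Back-of-envelope consistency check on the figures quoted above.

cash_cost_per_sd_year = 1000 / 0.92   # ~$1,087 per SD-year via cash transfers
therapy_cost_per_sd_year = 80 / 0.65  # ~$123 per SD-year via therapy (assumed effect)

print(f"cash:    ${cash_cost_per_sd_year:,.0f} per SD-year")
print(f"therapy: ${therapy_cost_per_sd_year:,.0f} per SD-year")
print(f"ratio:   {cash_cost_per_sd_year / therapy_cost_per_sd_year:.1f}x")
# ~8.8x, roughly consistent with the reported 9x GiveDirectly figure -- and
# hard to reconcile with reading 1.3 WELLBYs as simultaneously the value of
# doubling income and the value of curing a case of severe depression.
```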

I wish I could give you a clearer "ah, here is where I think the mistake is" or perhaps an "oh, you're right after all", but I too am finding the linked analysis a little hard to follow and am a bit short on time (ironically, because I'm trying to publish a different piece of StrongMinds analysis before a deadline). Maybe one of the things we can talk about once we schedule a call is how you calculated this and whether it works? Or maybe HLI will comment and clear things up regarding the 1.3 figure you pulled out and what it really means.

Good stuff. I haven't spent that much time looking at HLI's moral weights work, but I think the answer is "something is wrong with how you've constructed the weights; HLI is in fact weighting mental health harder than SoGive". A complete answer to this question requires me to check your calculations carefully, which I haven't done yet, so it's possible that you're right.

If it were true that HLI found that roughly doubling someone's consumption improved well-being as much as averting one case of depression, that would be very important, as it would mean that SoGive's moral weights fail some basic sanity checks. It would imply that we should raise our moral weight on cash-doubling to at least match the cost of therapy, even under a purely subjective-well-being-oriented framework for weighting (why not pay $200 to double income, if it's as good as averting depression and you would pay $200 to avert depression?). This seems implausible.

I haven't actually been directly researching the comparative moral weights aspect personally; I've been focusing primarily on "what's the impact of therapy on depression in terms of effect size" rather than on the "what should the moral weights be" question (though I have put some attention into the "how to translate effect sizes into subjective intuitions" question, which isn't quite the same thing). That said, when I have more time I will look more deeply into this and check whether our moral weights are failing a sanity check of this order, but I don't think they are.

Regarding the more general question of "where would we stand if we altered our moral weights to be something else": ask me again in a month or so when all the spreadsheets are finalized; moral weights should be relatively easy to adjust once the analysis is done.

(As Sanjay alludes to in the other thread, I do think all this is a somewhat separate discussion from the GWWC list. My main point about the GWWC list was that StrongMinds is not actually super out of place among the others in terms of how evidence-backed it is, especially when you consider the broader academic literature on the intervention rather than StrongMinds' internal data. But I wanted to address the moral weights issue directly, as it does seem like an important and separate point.)

I'm a researcher at SoGive conducting an independent evaluation of StrongMinds, which will be published soon. I think the factual contents of your post are correct. However, I suspect that after completing the research I would be willing to defend the inclusion of StrongMinds on the GWWC list, and that the SoGive write-up will probably have a more optimistic tone than your post. Most of our credence comes from the wider academic literature on psychotherapy rather than from direct evidence from StrongMinds (which, we agree, suffers from the problems you have outlined).

Regarding HLI's analysis, I think it's a bit confusing to talk about this without going into the details, because the research has both an "estimating the impact" aspect and a "reframing how we think about moral weights" aspect. Ascertaining the cost and magnitude of therapy's effects must be considered separately from the issue that therapy will score well when subjective well-being is the standard by which therapy, cash transfers, and malaria nets are all graded. As of now, I do roughly think that HLI's numbers for the costs and effect sizes of therapy on patients are in the right ballpark, and we are borrowing the same basic methodology for our own analysis. You mentioned being confused by the methodology; there are a few points that still confuse me as well, but we'll soon be publishing a spreadsheet model with a step-by-step explainer on the aspects of the model that we are borrowing, which may help.

If you (@Simon_M, or anyone else wishing to work at a similar level of analysis) are planning on diving into these topics in depth, I'd love to get in touch on the Forum and exchange notes.

Regarding the level of evidence: SoGive's analysis framework awards a "gold standard" rating for high impact, with "silver" and "bronze" ratings assigned to charities with lower but still impressive cost-effectiveness. However, we also distinguish between "tentative" and "firm" ratings, to acknowledge that some high-impact opportunities are based on more speculative estimates which may be revised as more evidence comes in. I don't want to pre-empt our final conclusions on StrongMinds, but I wouldn't be surprised if "Silver (rather than Gold)" and/or "Tentative (rather than Firm)" ended up featuring in our final rating. Such a conclusion would still be a positive one, on the basis of which donation and grant recommendations could be made.

There is precedent for effective altruists recommending donations to charities for which the evidence is still tentative. Consider that GiveWell recommends "top charities" but also recommends less proven, potentially cost-effective and scalable programs (formerly incubation grants). Identifying these opportunities allows the community to explore new interventions, and can unlock donations that counterfactually would not have been made, as different donors may make different subjective judgement calls about some interventions, or may be constrained as to what they can donate to.

Having established that there are different criteria one might use to decide when an organization should be included in a list, and that more than one set of standards may be applied, the question arises: what sort of standards does the GWWC top charities list follow, and is StrongMinds really out of place alongside the others? Speaking now personally and informally, not on behalf of any current or former employer: I would actually say that StrongMinds has much more evidence backing it than many of the other charities on the list (such as THL, Faunalytics, GFI, and WAI, which by their nature don't easily lend themselves to RCT data). Even if we restrict our scope to direct global health interventions (excluding, e.g., pandemic research orgs), I wouldn't be surprised if bright and promising potential stars such as Suvita and LEEP are at a somewhat similar stage to StrongMinds: generally evidence-based enough to deserve their endorsement on this list, but not yet as thoroughly vetted by external evaluators as more established organizations such as Malaria Consortium. Because of all this, I don't think StrongMinds seems particularly out of place next to the other GWWC recommendations. (Bearing in mind again that I'm speaking casually as an individual in this last paragraph, and I'm not claiming special knowledge of all the orgs mentioned.)

Finally, it's great to see posts like this on the EA forum, thanks for writing it!

Cool project! I suggest that the shrimp heart should be a different color, as most shrimp are not pink and only turn pink after cooking (although there are some exceptions, so maybe this is too nitpicky and it's fine?). I'm also not sure whether a living shrimp would typically have a curled-up pose. Alternatively, if you'd rather not do a full image redesign, or if there's a concern that people won't realize it's a shrimp if it looks too different from what they're used to seeing, it might help to instead add "go vegan!" text or something to clarify that it isn't that the sticker-bearer likes eating shrimp.

Answer by ishaan, Jun 16, 2021

I thought "EA Hotel" was pretty great as a straightforward description, so good substitutes might have a word for "EA" and a word for "hotel". For example:

Bentham's Base
Helpers' House

Swap in Lodge, Hollow, or Den if the alliteration is too cute; e.g. "Bentham's House" and "Bentham's Lodge" both sound pretty serious.

Or just forget precedent and brand something new, e.g. Runway (or Runway Athena).

Some "just kidding" alliterative options that I couldn't resist:
Crypto crib, Prioritization Place, Utilitarian's Union, Consequentialist Club, Greg's iGloo

What would it take to get the information that people like you, MichaelA, and many others have, compile it into a continually maintained resource, and get it into the hands of the people who need it?

I guess the "easy" answer is "do a poll with select interviews", but otherwise I'm not sure. I guess it depends on which specific types of information you mean? To some degree, organizations will state what they want and need in their outreach. If you're referring to advice like what I said re: "indicate that you know what EA is in your application", a compilation of advice posts like this one about getting a job in EA might help. Or you could research/interview to find the more concrete aspects of what the "criteria + bar to clear on those criteria" are for different funders, if you see a scenario where the answer isn't clearly legible. (If it's a bar at all; for some stuff it's probably a matter of networking and knowing the right person.)

Another general point on collecting advice: I think it's easy to accidentally conflate "in EA" (or even "in the world") with "in the speaker's particular organization, in that particular year, within that specific cause area" when listening to advice. The same goes for what both you and I have said above. For example, my perspective on early-career hiring is informed by my particular colleagues, while your impression that "funders have more money than they can spend" or that the work is all within "a small movement" isn't so applicable to someone who wants to work in global health. Getting into specifics is super important.

Heh, I was wondering if I'd get called out on that. You're totally right, everything that happens in the world constitutes evidence of something! 

What I should have said is that humans are prone to the fundamental attribution error, and it is bad to privilege the hypothesis that a rejection is evidence about real skill/experience/resume signalling/degrees etc., because then you risk working on the wrong things. Rejections are evidence, but they're mostly evidence of a low baseline acceptance rate, and only slightly evidence of other things.
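To illustrate with entirely made-up numbers (a minimal Bayes sketch; all the rates below are assumptions chosen for illustration, not real hiring data):

```python
# Made-up numbers, purely to illustrate why one rejection is weak
# evidence about your skills when baseline acceptance rates are low.

p_strong = 0.50               # prior that your application is strong (assumed)
p_reject_given_strong = 0.90  # even strong applicants rejected 90% of the time (assumed)
p_reject_given_weak = 0.98    # weak applicants rejected 98% of the time (assumed)

p_reject = p_strong * p_reject_given_strong + (1 - p_strong) * p_reject_given_weak
posterior_strong = p_strong * p_reject_given_strong / p_reject

print(f"P(strong application | one rejection) = {posterior_strong:.3f}")
# ~0.479: a single rejection moves you from 50% to ~48%, i.e. it mostly
# reflects the low baseline acceptance rate, not your skill or resume.
```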

I can imagine someone concluding something like "I'd better get a PhD in the subject so I can signal as qualified and then try again" in a scenario where the thing that would actually have shifted their chances was rewording a cover letter, spending a single day researching examples of well-designed CEAs before the work task, or applying in a different year.

Another factor which may play a role in the seeming arbitrariness of it all is that orgs are often looking for a very specific thing, or have specific values or ideas they emphasize, or are sensitive to specific keywords, which aren't always obvious and legible from the outside, leading to communication gaps. To give the most extreme example I've encountered: sometimes people don't indicate that they know what EA is in their initial application, perhaps not realizing that they're being considered alongside non-EA applicants or that it might matter. For specific orgs, the communication gaps get more specific. If you're super interested in joining an org, getting a bit of intel on this can really help (and is a lot easier than trying to get experience somewhere else before re-applying!).
