bruce

1481 · Joined Oct 2021

Bio

Doctor from NZ, now doing Global Health & Development Research @ Rethink Priorities, but interested and curious about most EA topics.

Outside of RP work, I spend some time doing independent "grand futures"/ GPR research (Anders Sandberg/BERI) and very sporadic grantmaking (EAIF). Also looking to re-engage with UN processes for OCA/Summit of the Future.

Feel free to reach out if you think there's anything I can do to help you or your work, or if you have any Qs about Rethink Priorities! If you're a medical student / junior doctor reconsidering your clinical future, or if you're quite new to EA / feel uncertain about how you fit in the EA space, have an especially low bar for reaching out.

Outside of EA, I do a bit of end of life care research and climate change advocacy, and outside of work I enjoy some casual basketball, board games and good indie films. (Very) washed up classical violinist and oly-lifter.

All comments in personal capacity unless otherwise stated.

Comments (90)

bruce · 1mo

Hey Ollie! Hope you're well. 

I think there’s a tricky trade-off between clarity and scope here... if we state guidelines that are very specific (e.g. a list of things you mustn’t do in specific contexts), we might fail to prevent harmful behaviour that isn’t on the list.

I want to gently push back on this a bit - I don't think this is necessarily a tradeoff. It's not clear to me that the guidelines have to be "all-inclusive or nothing". For example, if the guidelines say you can't use the swapcard app for dating purposes, it would be pretty unreasonable for people to interpret that as "oh, the guidelines don't say I can't use the swapcard app to scam people, so that must mean scamming is endorsed by CEA".

And even if it's the case that the current guidelines don't explicitly comment against using swapcard to scam other attendees, and this contributes to some degree of "failing to prevent harmful behaviour that isn't on the list", that seems like a bad reason to choose to not state "don't use swapcard for sexual purposes".

RE: guidelines that include helpful examples, here's one that I found from 10 seconds of googling.

  • First it defines harassment and sexual harassment fairly broadly. Of course, what exactly counts as "reasonably be expected or be perceived to cause offence or humiliation" can differ between people, but this is a marginal improvement compared to the current EAG guidelines that simply state "unwanted sexual attention or sexual harassment".
  • It then gives a non-exhaustive list of fairly uncontroversial actions for its context - CEA can adopt its own standard! But just because the list doesn't cover every possibility doesn't mean it isn't worth including.
  • Notably, it also outlines a complaint process and details possible actions that may reasonably occur in response to a complaint. 

As I responded to Julia's comment that you linked, I think these lists can be helpful because most reported cases likely come not from people intentionally wishing to cause harm, but from differences in norms, communication, or expectations around what might be considered harmful. Having an explicit list of actions helps get around these differences by being more precise about actions that are likely to be considered net negative in expectation. If there are a lot of examples that fall in a grey area, that may be an argument to exclude those examples, but it isn't really an argument against having a list that contains the less ambiguous ones.

Ditto RE: different settings - this is an argument for giving the guidelines a narrower scope, and for not writing a single guideline intended to cover both the career fair and the afterparty, but it isn't an argument against expressing what's unacceptable in one specific setting (especially when that setting is something as crucial as "EAG conference time").

Lastly, RE: "Responses should be shaped by the wishes of the person who experienced the problem" - of course they should be! But a list of possible actions can be included without committing the team to a set response, and it's still reassuring and helpful for people to know what responses are possible.

Again, this was just the first link I clicked, I don't think it's perfect, but I think there are multiple aspects of this that CEA could use to help with further iterations of its guidelines.

 

Another challenge is that CEA is the host of some events but not the host of some others associated with the conferences. We can’t force an afterparty host or a bar manager to agree to follow our guidelines though we sometimes collaborate on setting norms or encourage certain practices. 

I think it's fine to start from CEA's circle of influence and have good guidelines + norms for CEA events - if things go well this may incentivise other organisers to adopt these practices (or perhaps they won't, because their context is sufficiently different, which is fine too!). But even if other organisers don't adopt better guidelines, this doesn't seem like a particularly strong argument against adopting clearer guidelines for CEA events. The UNFCCC presumably isn't using "oh, we can't control what happens in UN Youth events globally, and we can't force them to agree to follow our guidelines" as an excuse not to have guidelines. And because it does have its own guidelines, many UN Youth events that try to emulate what the UN event proper looks like will (at least try to) adopt a similar level of formality.

 

One last reason to err on the side of more precise guidelines echoes point 3 in what lilly shared above - if guidelines are vague and more open to interpretation by the Community Health team, this requires a higher level of trust in the CH team's track record and decision-making and management of CoIs, etc. To whatever extent recent events may reflect actual gaps in this process or even just a change in the perception here, erring on the side of clearer guidelines can help with accountability and trust building.

bruce · 3mo

Thanks for writing this post!

I feel a little bad linking to a comment I wrote, but the thread is relevant to this post, so I'm sharing in case it's useful for other readers, though there's definitely a decent amount of overlap here.

TL;DR

I personally default to being highly skeptical of any mental health intervention that claims to have a ~95% success rate, as well as a PHQ-9 reduction of 12 points over 12 weeks, as this is a clear outlier among treatments for depression. The effectiveness figures from StrongMinds are also based on studies that are non-randomised and poorly controlled. There are other questionable methodological choices, e.g. around adjusting for social desirability bias. The topline figure of $170 per head for cost-effectiveness is also possibly an underestimate: while ~48% of clients were treated through SM partners in 2021, and Q2 results (pg 2) suggest StrongMinds is on track for ~79% of clients treated through partners in 2022, the expenses and operating costs of the partners responsible for these clients were not included in the methodology.
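To illustrate why excluding partner costs matters, here's a minimal sketch with purely hypothetical numbers (these are not StrongMinds' actual expenses or client counts, just an illustration of the direction of the bias):

```python
# Purely hypothetical numbers, chosen only to illustrate the direction of the bias;
# they are NOT StrongMinds' actual figures.
sm_expenses = 1_700_000        # hypothetical: StrongMinds' own operating costs ($)
partner_expenses = 800_000     # hypothetical: costs borne by partner organisations ($)
total_clients = 10_000         # hypothetical: all clients, incl. those treated via partners

reported_cost_per_client = sm_expenses / total_clients
all_in_cost_per_client = (sm_expenses + partner_expenses) / total_clients

print(reported_cost_per_client)  # 170.0 - the kind of topline figure quoted
print(all_in_cost_per_client)    # 250.0 - higher once partner costs are counted
```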

(This mainly came from a cursory review of StrongMinds documents, and not from examining HLI analyses, though I do think "we’re now in a position to confidently recommend StrongMinds as the most effective way we know of to help other people with your money" seems a little overconfident. This is also not a comment on the appropriateness of recommendations by GWWC / FP)

 

(commenting in personal capacity etc)

 

Edit - this is more related to discussions around HLI's work than to the strength of evidence in support of StrongMinds, but I'm including it as it is ultimately relevant for the topline conclusion about StrongMinds:

bruce · 4mo

If I write a message like that because I find someone attractive (in some form), does that seem wrong to you? :) Genuinely curious about your reaction and am open to changing my mind, but this seems currently fine to me. I worry that if such a thing is entirely prohibited, so much value in new beautiful relationships is lost.

Yes, you're still contributing to harm (at least probabilistically), because the current norm and expectation is that EAG / swapcard shouldn't be used as a speed-dating tool. So if you reach out only because you find them attractive despite that, you are explicitly going against what other parties expect when engaging with swapcard, and they don't have a way to opt out of receiving your norm-breaking message.

I'll also mention that you're arguing for the scenario of asking people for 1-1s at EAGs "only because you find them attractive". This means it would also allow for messages like, "Hey, I find you attractive and I'd love to meet." Would you also defend this? If not, what separates the two messages, and why did you choose the example you gave?

Sure, a new beautiful relationship is valuable, but how many non-work swapcard messages lead to one? Put yourself in the shoes of an undergrad attending EAG for the first time, hoping to learn more about a potential career in biosecurity or animal welfare or AI safety. Now imagine they receive a message from you, and from 50 other people who also find them attractive. This doesn't seem like a good conference experience, nor a good introduction to the EA community. It also complicates things with the people they do want to reach out to, because it increases uncertainty about whether the people they want to meet are responding in a purely professional sense or are just being opportunistic. Then there's an additional layer of complexity when you add in things like power dynamics. Having shared professional standards and norms goes some way to reducing this uncertainty, but people need to actually follow them.

If you are worried that you'll lose the opportunity for beautiful relationships at EAGs, then there's nothing stopping you from attending something after the conference wraps up for the day, or even organising some kind of speed-dating thing yourself. But note how your organised speed-dating event would be something people choose to opt in to, unlike sending solicitation DMs via an app intended to be used for professional / networking purposes (or some other purpose explicit on their profile - i.e. if you're sending that DM to someone whose profile says "DM me if you're interested in dating me", then this doesn't apply. The appropriateness of that is a separate convo though).

Some questions for you:

  1. You say you're "open to changing your mind" - what would this look like? What kind of harm would need to be possible for you to believe that the expected benefit of a new beautiful relationship isn't worth it?
  2. What's the case that it's the role of CEA and EAG to facilitate new beautiful relationships? Do you apply this standard to other communities and conferences you attend?

 

I'll also note Kirsten's comment above, which already talks about why it could plausibly be bad "in general":
"The EAG team have repeatedly asked people not to use EAG or the Swapcard app for flirting. 1-1s at EAG are for networking, and if you're just asking to meet someone because you think they're attractive, there's a good chance you're wasting their time. It's also sexualizing someone who presumably doesn't want to be because they're at a work event."

And Lorenzo's comment above:
"Because EAG(x) conferences exist to enable people to do the most good, conference time is very scarce, misusing a 1-1 slot means someone is missing out on a potentially useful 1-1. Also, these kinds of interactions make it much harder for me to ask extremely talented and motivated people I know to participate in these events, and for me to participate personally. For people that really just want to do the most good, and are not looking for dates, this kind of interaction is very aversive."

bruce · 5mo

While I think both perspectives have value, I agree with the anon here - I don't think these tradeoffs are particularly relevant to a community health team investigating interpersonal harm cases with the goal of "reduc[ing] risk of harm to members of the community while being fair to people who are accused of wrongdoing".

One downside of having the bad-ness of, say, sexual violence[1] be mitigated by the accused person's perceived impact (how is the community health team actually measuring this? how good someone's forum posts are? whether they work at an EA org? whether they are "EA leadership"?) when considering what the appropriate action should be (if this is happening) is that it plausibly leads to different standards for bad behaviour. By the community health team's own standards, taking someone's potential impact into account as a mitigating factor seems like it could increase the risk of harm to members of the community (by not taking sufficient action, with the justification of perceived impact), while being more unfair to people who are accused of wrongdoing. To be clear, I'm basing this off the forum post, not any non-public information.

Additionally, a common theme across basically every sexual violence scandal I've read about is that there were (often multiple) warnings beforehand that were not taken seriously.

If there is a major sexual violence scandal in EA in the future, it will be pretty damning if the warnings and concerns were clearly raised, but the community health team chose not to act because they decided it wasn't worth the tradeoff against the person/people's impact.

Another point is that people who are considered impactful are likely to be somewhat correlated with people who have gained respect and power in the EA space, have seniority or leadership roles etc. Given the role that abuse of power plays in sexual violence, we should be especially cautious of considerations that might indirectly favour those who have power.

More weakly, even if you hold the view that it is in fact the community health team's role to "take the talent bottleneck seriously; don’t hamper hiring / projects too much" when responding to, say, a sexual violence allegation, it seems like it would be easy to overvalue the cost of taking immediate action against the person (in terms of their lost impact), and to undervalue the cost of many more people opting not to get involved, or distancing themselves from the EA movement, because they perceive it to be an unsafe place for women with unreliable ways of holding perpetrators accountable.

That being said, I think the community health team has an incredibly difficult job, and while they play an important role in mediating community norms and dynamics (and thus have corresponding amount of responsibility), it's always easier to make comments of a critical nature than to make the difficult decisions they have to make. I'm grateful they exist, and don't want my comment to come across like an attack of the community health team or its individuals!

(commenting in personal capacity etc)

  1. ^

    used as an umbrella term to include things like verbal harassment. See definition here.

bruce · 5mo

If this comment is more about "how could this have been foreseen", then this comment thread may be relevant. I should note that hindsight bias makes it much easier to look back and assess problems as obvious and predictable ex post, when powerful investment firms and individuals with skin in the game also missed this.

TL;DR: 
1) There were entries that were relevant (this one also touches on it briefly)
2) They were specifically mentioned
3) There were comments relevant to this. (notably one of these was apparently deleted because it received a lot of downvotes when initially posted)
4) There have been at least two other posts on the forum prior to the contest that engaged with this specifically

My tentative take is that these issues were in fact identified by various members of the community, but there isn't a good way of turning identified issues into constructive actions - the status quo is that we just have to trust that organisations have good systems in place for this, and that EA leaders are sufficiently careful and willing to consider changes seriously, such that all the community needs to do is "raise the issue". I think looking at the systems within the relevant EA orgs or leadership is what investigations or accountability questions going forward should focus on - all individuals are fallible, and we should be looking at how to build systems such that the community doesn't have to just trust that the people who have power and are steering the EA movement will get it right, and such that there are ways for the community to hold them accountable to their ideals or stated goals if this appears not to be playing out in practice, or risks not doing so.

i.e. if there were good processes and systems in place, with documentation of those processes and decisions, missing this is more acceptable (because other organisations that probably have very good due diligence processes also missed it). But if there weren't good processes, or if these decisions weren't careful and intentional, then that's much more concerning, especially in the context of specific criticisms that have been raised,[1] or previous precedent. For example, I'd be especially curious about the events surrounding Ben Delo,[2] and the processes that were implemented in response. I'd be curious whether there are people in EA orgs involved in steering who keep track of potential risks and early warning signs to the EA movement, in the same way the EA community advocates for in the case of pandemics, AI, or even general ways of finding opportunities for impact. For example, SBF, who is listed as an EtG success story on 80k hours, has publicly stated he's willing to go 5x over the Kelly bet, and described yield farming in a way that Matt Levine interpreted as a Ponzi. Again, I'm personally less interested in the object-level decision (e.g. whether we consider SBF's Kelly bet comments serious, or Levine's interpretation appropriate), and more in what the process was, and how this was considered at the time with the information they had. I'd also be curious about the documentation of any SBF-related concerns that were raised by the community, if any, and how these concerns were managed and considered (as opposed to critiquing the final outcome).

Outside of due diligence and ways to facilitate whistleblowers, decision-making processes around the steering of the EA movement are crucial as well. When decisions are made whose benefits clearly accrue to one part of the EA community while bringing risks that are pertinent to all,[3] we need to look at how these decisions were made and what was considered at the time, and, going forward, how to either diversify those risks or make decision-making more inclusive of a wider range of stakeholders, keeping in mind the best interests of the EA movement as a whole.

(this is something I'm considering working on in a personal capacity along with the OP of this post, as well as some others - details to come, but feel free to DM me if you have any thoughts on this. It appears that CEA is also already considering this)

If this comment is about "are these red-teaming contests in fact valuable for the money and time put into it, if it misses problems like this"

I think my view here (speaking only for the red-teaming contest) is that even if this specific contest was framed in a way that missed these classes of issues, the value of the very top submissions[4] may still have made the effort worthwhile. The potential value of a different framing was mentioned by another panelist. If it's the case that red-teaming contests are systematically missing this class of issues regardless of framing, then I agree that would be pretty useful to know, but I don't have a good sense of how we would try to investigate this.

  

  1. ^

    This tweet seems to have aged particularly well. Despite supportive comments from high-profile EAs on the original forum post, the author seemed disappointed that nothing came of it in that direction. Again, without getting into the object-level discussion of the claims of the original paper, it's still worth asking questions about the processes. If there were actions planned, what did they look like? If not, was that because of a disagreement over the suggested changes, or over the extent to which this was an issue at all? How were these decisions made, and what was considered?

  2. ^

    Apparently a previous EA-aligned billionaire ?donor who got rich by starting a crypto trading firm, and who pleaded guilty to violating the Bank Secrecy Act.

  3. ^

    Even before this, I had heard from a primary source in a major mainstream global health organisation that there were staff who wanted to distance themselves from EA because of misunderstandings around longtermism.

  4. ^
bruce · 6mo

As requested, here are some submissions that I think are worth highlighting, or that I considered awarding but that ultimately did not make the final cut. (This list is non-exhaustive, and should be taken more lightly than the Honorable mentions, because by definition these posts are less strongly endorsed by those who judged them. Also commenting in personal capacity, not on behalf of other panelists, etc.):

Bad Omens in Current Community Building
I think this was a good-faith description of some potential / existing issues that are important for community builders and the EA community, written by someone who "did not become an EA" but chose to go to the effort of providing feedback with the intention of benefitting the EA community. While these problems are difficult to quantify, they seem important if true, and pretty plausible based on my personal priors/limited experience. At the very least, this starts important conversations about how to approach community building that I hope will lead to positive changes, and a community that continues to strongly value truth-seeking and epistemic humility, which is personally one of the benefits I've valued most from engaging in the EA community.

Seven Questions for Existential Risk Studies
It's possible that the length and academic tone of this piece detracts from the reach it could have, and it (perhaps aptly) leaves me with more questions than answers, but I think the questions are important to reckon with, and this piece covers a lot of (important) ground. To quote a fellow (more eloquent) panelist, whose views I endorse: "Clearly written in good faith, and consistently even-handed and fair - almost to a fault. Very good analysis of epistemic dynamics in EA." On the other hand, this is likely less useful to those who are already very familiar with the ERS space.

Most problems fall within a 100x tractability range (under certain assumptions)
I was skeptical when I read this headline, and while I'm not yet convinced that 100x tractability range should be used as a general heuristic when thinking about tractability, I certainly updated in this direction, and I think this is a valuable post that may help guide cause prioritisation efforts.

The Effective Altruism movement is not above conflicts of interest
I was unsure about including this post, but I think this post highlights an important risk of the EA community receiving a significant share of its funding from a few sources, both for internal community epistemics/culture considerations as well as for external-facing and movement-building considerations. I don't agree with all of the object-level claims, but I think these issues are important to highlight and plausibly relevant outside of the specific case of SBF / crypto. That it wasn't already on the forum (afaict) also contributed to its inclusion here.


I'll also highlight one post that was awarded a prize, but I thought was particularly valuable:

Red Teaming CEA’s Community Building Work
I think this is particularly valuable because of the unique and difficult-to-replace position that CEA holds in the EA community, and as Max acknowledges, it benefits the EA community for important public organisations to be held accountable (and to a standard that is appropriate for their role and potential influence). Thus, even if listed problems aren't all fully on the mark, or are less relevant today than when the mistakes happened, a thorough analysis of these mistakes and an attempt at providing reasonable suggestions at least provides a baseline to which CEA can be held accountable for similar future mistakes, or help with assessing trends and patterns over time. I would personally be happy to see something like this on at least a semi-regular basis (though am unsure about exactly what time-frame would be most appropriate). On the other hand, it's important to acknowledge that this analysis is possible in large part because of CEA's commitment to transparency.

bruce · 1d

Some very quick thoughts on EY's TIME piece, from the perspective of someone ~outside of AI safety work. I have no technical background and don't follow the field closely, so I'm likely missing some context and nuance; happy to hear pushback!

Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.

Frame nothing as a conflict between national interests, have it clear that anyone talking of arms races is a fool. That we all live or die as one, in this, is not a policy but a fact of nature. Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.

 

  • My immediate reaction when reading this was something like "wow, is this representative of AI safety folks? Are they willing to go to any lengths to stop AI development?". I've heard anecdotes of people outside of all this saying the piece reads like it came from a terrorist organisation, for example - a stronger term than I'd use, but I think suggestions like this do unfortunately play into potential comparisons to ecofascists.
  • Coming from someone publicly seen as a thought leader and widely regarded as a founder of the field, this kind of messaging carries some risks. It's hard to evaluate how this trades off, but I definitely know communities and groups that would be pretty put off by it, and it's unclear how much value the sentences about willingness to escalate to nuclear war are actually adding.
    • It's an empirical question how to trade off risks from nuclear war against risks from AI, but the claim that "preventing AI extinction is a priority above a nuclear exchange" is trivially true; the reverse is also true: "preventing extinction from nuclear war is a priority above preventing AI training runs". Given the difficulty of illustrating and defending to the general public the position that the risks of AI training runs are substantially higher than those of a nuclear exchange, I would have erred on the side of caution before saying things as politically charged as advocating for nuclear escalation (or at least things that can be interpreted as such).
    • I wonder which superpower EY trusts to properly identify a hypothetical "rogue datacentre" that's worthy of a military strike for the good of humanity, or whether this will just end up with parallels to other failed excursions abroad 'for the greater good' or to advance individual national interests.
  • If nuclear weapons are a reasonable comparison, we might expect limitations to end up with a few competing global powers having access to AI developments and other countries not. It seems plausible that criticisms of such treaties being used to maintain the status quo, as in the nuclear nonproliferation / disarmament debate, may be applicable here too.
  • Unlike nuclear weapons (though nuclear power may weaken this somewhat), developments in AI have the potential to help immensely with development and economic growth.
  • Thus the conversation may eventually bump into something that looks like:
    • Richer countries / first movers that have obtained significant benefits from AI take steps to prevent other countries from catching up.[1]
    • Rich countries use the excuse of preventing AI extinction as a guise to further national interests.
    • Development opportunities from AI for LMICs are similarly hindered, or only allowed in ways approved by the first movers in AI.
  • Given the above, and that conversations around and tangential to AI risk already receive some pushback from the Global South community for distracting and taking resources away from existing commitments to UN Development Goals, my sense is that folks working in AI governance / policy would likely strongly benefit from scoping out how these developments are affecting Global South stakeholders, and how to get their buy-in for such measures.

    (disclaimer: one thing this gestures at is something like "global health / development efforts can be instrumentally useful towards achieving longtermist goals"[2], which is something I'm clearly interested in as someone working in global health. While it seems rather unlikely that this is the best way of achieving longtermist goals on the margin,[3] that doesn't exclude some aspect of it being part of a necessary condition for important wins like an international treaty, if that's what is currently being advocated for. It is also worth mentioning because I think this is likely to be a gap / weakness in existing EA approaches).
  1. ^

    this is applicable to a weaker extent even if an international agreement on an indefinite moratorium on new large training runs passes, if you see AI as a potential equalising force or if you think first movers might be worried about this

  2. ^
  3. ^
bruce · 2d
  • In our new report, The Elephant in the Bednet, we show that the relative value of life-extending and life-improving interventions depends very heavily on the philosophical assumptions you make. This issue is usually glossed over and there is no simple answer. 
  • We conclude that the Against Malaria Foundation is less cost-effective than StrongMinds under almost all assumptions. We expect this conclusion will similarly apply to the other life-extending charities recommended by GiveWell.

In suggesting James quote these together, it sounds like you're saying something like "this is a clear caveat to the strength of recommendation behind StrongMinds, HLI doesn't recommend StrongMinds as strongly as the individual bullet implies, it's misleading for you to not include this".

But in other places HLI's communication around this takes on a framing of something closer to "The cost effectiveness of AMF, (but not StrongMinds) varies greatly under these assumptions. But the vast majority of this large range falls below the cost effectiveness of StrongMinds". (extracted quotes in footnote)[1]

As a result of this framing, despite the caveat that HLI "[does] not advocate for any particular view", I think it's reasonable to interpret this as being strongly supportive of StrongMinds, which can be true even if HLI does not have a formed view on the exact philosophical view to take.[2]

If you did mean the former (that the bullet about philosophical assumptions is primarily included as a caveat to the strength of recommendation behind StrongMinds), then there is probably some tension here between (emphasis added):

-"the relative value of life-extending and life-improving interventions depends very heavily on the philosophical assumptions you make...there is no simple answer", and

-"We conclude StrongMinds > AMF under almost all assumptions"

 

Additionally, I think some weak evidence that HLI is not as well-caveated as it could be is that many people (mistakenly) viewed HLI as an advocacy organisation for mental health interventions. I do think this is a reasonable outside interpretation of HLI's communications, even though it is not HLI's stated intent. For example, I don't think it would be unreasonable for an outsider to read your current pinned thread and come away with conclusions like:

  • "StrongMinds is the best place to donate",
  • "StrongMinds is better than AMF",
  • "Mental health is a very good place to donate if you want to do the most good",
  • "Happiness is what ultimately matters for wellbeing and what should be measured".

If these are not what you want people to take away, then I think pointing to this bullet-point caveat doesn't really meaningfully address the concern - the response feels something like "you should have read the fine print". While I don't think it's necessary for HLI to take a stance on specific philosophical views, I do think it becomes an issue if people are (mis)interpreting HLI's stance based on its published statements.

 

(commenting in personal capacity etc)

  1. ^

    -We show how much cost-effectiveness changes by shifting from one extreme of (reasonable) opinion to the other. At one end, AMF is 1.3x better than StrongMinds. At the other, StrongMinds is 12x better than AMF. 

    -StrongMinds and GiveDirectly are represented with flat, dashed lines because their cost-effectiveness does not change under the different assumptions. 

    -As you can see, AMF’s cost-effectiveness changes a lot. It is only more cost-effective than StrongMinds if you adopt deprivationism and place the neutral point below 1.

  2. ^

    As you've acknowledged, comments like "We’re now in a position to confidently recommend StrongMinds as the most effective way we know of to help other people with your money." perhaps add to the confusion.

bruce · 7d

That makes sense, thanks for clarifying!

If I understand correctly, the updated figures should then be:

For 1 person being treated by StrongMinds (excluding all household spillover effects) to be worth the WELLBYs gained for a year of life[1] with HLI's methodology, the neutral point needs to be at least 4.95-3.77 = 1.18.

If we include spillover effects of StrongMinds (and use the updated / lower figures), then the benefit of 1 person going through StrongMinds is 10.7 WELLBYs.[2] Under HLI's estimates, this is equivalent to more than two years of wellbeing benefits from the average life, even if we set the neutral point at zero. Using your personal neutral point of 2 would suggest the intervention for 1 person including spillovers is equivalent to >3.5 years of wellbeing benefits. Is this correct or am I missing something here?
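For transparency, here's a quick sketch of the arithmetic I'm doing (the 4.95, 3.77, 0.38 and 4.85 figures are the ones discussed above and in the footnotes, as I understand them; please correct me if I've pulled the wrong numbers):

```python
# Quick check of the WELLBY arithmetic above (figures as I understand HLI's estimates).
avg_life_satisfaction = 4.95   # average life-satisfaction (0-10) for adults in the 6 African countries
direct_benefit = 3.77          # WELLBYs per person treated by StrongMinds, excluding spillovers
spillover_rate = 0.38          # spillover per household member
household_size = 4.85          # other household members

# Neutral point at which one year of average life equals the direct benefit
print(round(avg_life_satisfaction - direct_benefit, 2))        # 1.18

# Total benefit including household spillovers
total_benefit = direct_benefit * (1 + spillover_rate * household_size)
print(round(total_benefit, 1))                                 # ~10.7 WELLBYs

# Equivalent years of average life under two neutral points
for neutral_point in (0, 2):
    wellbys_per_life_year = avg_life_satisfaction - neutral_point
    print(neutral_point, round(total_benefit / wellbys_per_life_year, 2))
    # neutral point 0 -> ~2.2 years; neutral point 2 -> ~3.6 years
```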

1.18 as the neutral point seems pretty reasonable, though the idea that 12 hours of therapy for an individual is worth the wellbeing benefits of 1 year of an average life when only considering impacts to them, and anywhere between 2~3.5 years of life when including spillovers, does seem rather unintuitive to me, despite my view that we should probably do more work on subjective wellbeing measures on the margin. I'm not sure if this means:

  1. WELLBYs as a measure can't capture what I care about in a year of healthy life, so we should not use solely WELLBYs when measuring wellbeing;
  2. HLI isn't applying WELLBYs in a way that captures the benefits of a healthy life;
  3. The existing way of estimating 1 year of life via WELLBYs is wrong in some other way (e.g. the 4.95 assumption is wrong, the 0-10 scale is wrong, the ~1.18 neutral point is wrong);
  4. HLI have overestimated the benefits of StrongMinds;
  5. I have a very poorly calibrated view of how much 12 hours of therapy or a year of life is worth, though this seems less likely.

 

Would be interested in your thoughts on this / let me know if I've misinterpreted anything!

  1. ^

    More precisely, the average wellbeing benefit from 1 year of life for an adult in 6 African countries

  2. ^

    3.77*(1+0.38*4.85)

bruce · 7d

Thanks Joel.

this comparison, as it stands, doesn't immediately strike me as absurd. Grief has an odd counterfactual. We can only extend lives. People who're saved will still die and the people who love them will still grieve. The question is how much worse the total grief is for a very young child (the typical beneficiary of e.g., AMF) than the grief for the adolescent, or a young adult, or an adult, or elder they'd become

My intuition, which is shared by many, is that the badness of a child's death is not merely due to the grief of those around them. So presumably the question should compare not just the counterfactual grief of losing a very young child VS an [older adult], but also the "lost wellbeing" from a life that would have been net-positive in expectation?

I also just saw that Alex claims HLI "estimates that StrongMinds causes a gain of 13 WELLBYs". Is this for 1 person going through StrongMinds (i.e. ~12 hours of group therapy), or something else? Where does the 13 WELLBYs come from?

I ask because if we are using HLI's estimates of WELLBYs per death averted, and use your preferred estimate for the neutral point, then 13 / (4.95-2) is >4 years of life. Even if we put the neutral point at zero, this suggests 13 WELLBYs is worth >2.5 years of life.[1]
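To spell out the arithmetic I'm doing there (a rough sketch, using the same 4.95 average life-satisfaction figure as in my comment above):

```python
# Converting 13 WELLBYs into equivalent years of average life,
# under two candidate neutral points.
wellbys_gained = 13.0          # HLI's per-person estimate as quoted by Alex
avg_life_satisfaction = 4.95

for neutral_point in (2, 0):
    years_equivalent = wellbys_gained / (avg_life_satisfaction - neutral_point)
    print(neutral_point, round(years_equivalent, 2))
    # neutral point 2 -> ~4.4 years; neutral point 0 -> ~2.6 years
```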

I think I'm misunderstanding something here, because GiveWell claims "HLI’s estimates imply that receiving IPT-G is roughly 40% as valuable as an additional year of life per year of benefit or 80% of the value of an additional year of life total."

Can you help me disambiguate this? Apologies for the confusion.

  1. ^

    13 / 4.95
