
Berke

Director @ Health Progress Hub
230 karma · Joined

Comments (16)

Thanks! I hold the "malaria vaccines could have been produced faster and could have reached more people by now" point fairly weakly. The stronger point I hold is "we can develop and produce a wider range of critical medicines and vaccines, and make them cheaper, without significant second-order effects (e.g. lower quality, vaccine hesitancy)". I think there are many cases where we can confidently say more, bigger, faster, and cheaper even when we factor in second-order effects.

In most cases it isn't silly people slowing things down; sometimes it really is just institutional inertia, or the absence of sensible rules, that slows things.

In terms of products, lead chelation and antivenom are clear cases where a "please just produce more, and more cheaply" mindset seems pretty warranted. Lead chelation may be less cost-effective than lead exposure prevention, but I don't see a reason for the broader global health ecosystem not to prioritize it even if it isn't cost-effective enough by EA standards, as the current state of affairs seems extremely far from optimal.

Hello Mo, "I do think there's something to the critique, but I'd like to understand it better." is actually the sentiment that led me to write the post. 

I believe proponents of the systemic change critique of EA usually undervalue tractability and overestimate the chance of success for radical interventions, and I do disagree with most pro-systemic-change critiques of EA. When people talk about systemic change, they usually don't clearly define what it is (and isn't), which systemic change interventions are above EAs' bar, or why that's the case. When people make the case for systemic change, it's quite frequently (i) anti-capitalist, radical policy advocacy, (ii) anti-authoritarianism interventions, or (iii) a case for fundamental changes to national/global systems and against gradual, marginal improvements.
 
I don't agree with the directions of these critiques, but I do directionally agree with what they criticize.

The somewhat classical EA response of "We support lead paint bans and anti-cruelty laws, as well as things like YIMBY advocacy" is somewhat insufficient. Within global health and development, delivery interventions clearly make up the majority of the portfolio. At the same time, the policy interventions that do exist tend to focus on fairly "apolitical" issues such as lead paint. There are counterexamples, but policy interventions are obviously underrepresented in the GH portfolio, and I don't think there is a good articulation of why that level of representation is appropriate.

The most concrete gap in EAs' portfolio where the "systemic change" critique is powerful is, in my opinion, health systems strengthening, broadly construed.

For a movement that aims to tackle global inequalities and urgent global issues, there is a curious lack of interest, funding, and talent working on improving health systems. Questions and advocacy agendas such as "How can LMIC governments fund healthcare systems after the aid cuts?", "How can we improve the procurement of, and access to, affordable generic medicines?", and "How can we expand PHC in countries with at least some financial/technical infrastructure?" are all areas where the EA ecosystem unfortunately doesn't have much to offer (unless you are an EA funder looking to commission research). And contrary to anti-capitalist advocacy or radical political reform, these questions and agendas are somewhat tractable to work on, and at least worth exploring in my opinion.
 
 

Thanks for writing this, strong agree! 

Quick point (apologies if you did flag these and I missed them): there was/is actually some EA work on vaccine acceleration. Coefficient/Open Phil helped fund the R21 Phase 3 trials and is now backing next-generation malaria candidates at Oxford. 1Day Sooner/1Day Africa also does great work on vaccine acceleration and regulatory reform, and Rethink Priorities does some relevant work too.

Still, the broader point holds! Abundance-oriented thinking on public and global health is quite undersupplied in general (and also within EA), and I wish more people were working on how to make medicines and vaccines cheaper/faster through market shaping, procurement reform, regulatory reform, or for-profit entrepreneurship. 

Though I think this is a downstream effect of EA community-builders deprioritizing global health in general, and of the broader lack of financial/talent/connective infrastructure for exploratory global health and development work within EA (e.g. no career resources on global health R&D, no global health fellowship, very limited funding for seed-stage exploratory GH orgs).

I strongly agree! Improving the cost-effectiveness (and cost-efficiency) of non-EA resources seems underexplored in EA discussions. I'd argue this applies to talent, not just funding.

In mainstream fields like global development and climate change, there are many talented, impact-driven professionals who don't know EA or wouldn't join the EA community (perhaps disagreeing with cause-neutrality or the utilitarian foundations). Yet many of these professionals would be quite willing to put a lot of effort and energy into high-impact projects if exposed to important agendas and projects they're well-positioned to tackle. There could be significant value in shaping agendas and channeling these professionals toward more impactful (not necessarily "most impactful" by EA standards) work within their existing domains.

I should note this point is less relevant/salient for AI Safety field-building, where there already seem to be more pathways for non-EA people and broader engagement beyond the EA-aligned community.

On an additional note: Rethink Priorities' A Model Estimating the Value of Research Influencing Funders report had a relevant point:

"Moving some funders from an overall lower cost effectiveness to a still relatively low or middling level of cost effectiveness can be highly competitive with, and, in some cases, more effective than working with highly cost-effective funders."

Strong upvote! I want to say some stuff particularly within the context of global development:

The intersection of AI and global development seems surprisingly unsaturated within EA. To be more specific, I think surprisingly few EAs think about the following questions:

i) How to leverage AI for development (e.g. AI tools for education, healthcare)  
ii) What interventions and strategies should be prioritized within global health and development in the light of AI developments? (basically the question you ask)

There seem to be a lot of people thinking about the first question outside of EA, so maybe that explains this dynamic. But I have a hunch that the primary reason people don't focus much on the first question is too much deference and selection effects, rather than a lack of high-impact interventions. If you care about TAI, you are very likely to work on AI alignment & governance; if you don't want to work on TAI-related things (due to risk aversion or any other argument/value), you just don't update that much based on AI developments and forecasts. This may also have to do with EA's ambiguity-averse/risk-averse attitude towards GHD, characterized by exploiting evidence-based interventions rather than exploring new, highly promising ones. If a student or professional came to an EA community-builder and asked "How can I pursue a high-impact career in, or upskill in, global health R&D or AI-for-development?", the number of community-builders who could give a sufficiently helpful answer is likely very few to none. I also likely wouldn't be able to give a good answer or point to communities/resources outside of the EA community.

(Maybe EAs in London or SF discuss these, but I don't see any discussion of it online, nor do I see any spaces where people who could be discussing these can network and discuss together. If there is anyone who'd like to help create or run an online or in-person AI-for-development or global health R&D fellowship, feel free to shoot me a message.)


 

For the issues you raised in the last section, you may find this paper by Mogensen & MacAskill valuable. From the abstract: "Given plausible assumptions about the long-run impact of our everyday actions, we show that standard non-consequentialist constraints on doing harm entail that we should try to do as little as possible in our lives."

Agree. Besides being further away, this would most probably reduce the number of EAs from LMICs who go to EA conferences. I'm from Turkey, and the limited number of people from Turkey who have gone to an EAGx did so because there was travel funding (including my first two conferences); I'm quite confident none of them would have been able to go without it (because I personally know them). I was expecting six people from our college group to come to the next EAGx in Europe, but if there is no travel funding, probably no one besides me will go (and I would only be able to go because I'm on an EA fellowship!).

Still, I'm not saying all EAs from LMICs should be reimbursed, or that it always makes sense to fund people who wouldn't otherwise be able to come to conferences. But i) on the margin, providing travel grants to people from countries with low EA presence may have a higher bang for the buck, and ii) a very selective travel grants policy would have this consequence (effectively preventing a considerable number of EAs based in LMICs from participating in EAGs).

EA career advice tailored for people based in LMICs was urgently needed, very glad to see this!

People in countries with low EA presence can be very well-positioned to have a lot of impact even in the very short run, as the number of low-hanging fruits (really neglected high-impact opportunities where even a single person can plausibly make a substantial difference) in most LMICs is considerably higher than in Western European and American countries. This post will probably empower a lot of people to have more impact. Thank you for writing this great post!

De-emphasizing cause neutrality could (my guess is probably would) reduce the long-term impact of the movement substantially. Trying to answer the question "How to do the most good?" without attempting to be neutral between causes we are passionate about and causes we don't (intuitively) care that much about would bias us towards causes and paths that are interesting to us rather than particularly impactful ones. Personal fit and being passionate about what you do are absolutely important. But when we're comparing causes and comparing actions/careers in terms of impact (or ITN), our answer shouldn't depend on our personal interests and passions; when we're taking action based on those answers, then we should think about personal fit and passion, as these prevent us from being miserable while pursuing impact. Also, cause neutrality should nudge people against associating EA with a singular cause like AI safety or global development, or even with 80k careers. I think extreme cause neutrality is a solution to the problem you describe, rather than the root of the problem.
De-emphasizing cause neutrality would increase the likelihood of EA becoming mainstream and popular, but it would also undermine our focus on impartiality and good epistemics, which were/are vital factors in why EA was able to identify so many high-impact problems and take action to tackle them effectively, imho.
