Former AI safety research engineer, now AI governance researcher at OpenAI. Blog: thinkingcomplete.blogspot.com
@Linch, see the article I linked above, which identifies a bunch of specific bottlenecks where lobbying and/or targeted funding could have been really useful. I didn't know about these when I wrote my comment above, but I claim prediction points for having a high-level heuristic that led to the right conclusion anyway.
The article I linked above has changed my mind back again. Apparently the RTS,S vaccine has been in clinical trials since 1997. So the failure here wasn't just an abstract lack of belief in technology: the technology literally already existed the whole time that the EA movement (or anyone who's been in this space for less than two decades) has been thinking about it.
An article on why we didn't get a vaccine sooner: https://worksinprogress.co/issue/why-we-didnt-get-a-malaria-vaccine-sooner
This seems like significant evidence for the tractability of speeding things up. E.g. a single (unjustified) decision by the WHO in 2015 delayed the vaccine by almost a decade, four years of which were spent in fundraising. It seems very plausible that even the EA movement of 2015 could have sped things up by multiple years in expectation, either by lobbying against the original decision or by funding the follow-up trial.
This is a good point. The two other examples which seem salient to me:
Ah, I see. I think the two arguments I'd give here:
Hmm, your comment doesn't really resonate with me. I don't think it's really about being monomaniacal. I think the (in hindsight) correct thought process here would be something like:
"Over the next 20 or 50 years, it's very likely that the biggest lever in the space of malaria will be some kind of technological breakthrough. Therefore we should prioritize investigating the hypothesis that there's some way of speeding up this biggest lever."
I don't think you need this "move heaven and earth" philosophy to do that reasoning; I don't think you need to focus on EA growth much more than we did. The mental step could be as simple as "Huh, bednets seem kinda incremental. Is there anything that's much more ambitious?"
(To be clear I think this is a really hard mental step, but one that I would expect from an explicitly highly-scope-sensitive movement like EA.)
Makes sense, though I think that global development was enough of a focus of early EA that this type of reasoning should have been done anyway.
I’m more sympathetic about it not being done after, say, 2017.
A different BOTEC: with 500k malaria deaths per year, at $5,000 per death averted by bednets, we'd have to get a year of vaccine speedup for $2.5 billion or less to match bednets.
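Spelling that calculation out (a minimal sketch; the 500k annual deaths and the $5,000-per-death bednet figure are the rough assumptions from the BOTEC above, and it assumes a one-year speedup averts roughly one year's worth of deaths):

```python
# Back-of-the-envelope: how much could we pay for a one-year vaccine
# speedup and still match bednets on cost per death averted?

annual_malaria_deaths = 500_000   # rough figure assumed above
cost_per_death_averted = 5_000    # dollars, rough bednet figure assumed above

# A one-year speedup averts roughly one year's worth of deaths.
years_of_speedup = 1
deaths_averted = annual_malaria_deaths * years_of_speedup

# Spending more than this on the speedup would be worse than bednets.
breakeven_spend = deaths_averted * cost_per_death_averted

print(f"Break-even spend for a {years_of_speedup}-year speedup: ${breakeven_spend:,}")
# -> Break-even spend for a 1-year speedup: $2,500,000,000
```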
I agree that spending $2.5 billion to speed up the development of vaccines by a year is tricky. But speeding up the deployment of vaccines by a year for $2.5 billion, or $250 million, or perhaps even $25 million, seems pretty plausible to me. I don't know the details, but apparently a vaccine was approved in 2021 that will only be rolled out widely in a few months, and another vaccine will be delayed until mid-2024: https://marginalrevolution.com/marginalrevolution/2023/10/what-is-an-emergency-the-case-of-rapid-malaria-vaccination.html
So I think it's less a question of whether EA could have piled more money on, and more a question of whether EA could have used that money plus our talent advantage to target key bottlenecks.
(Plus the possibility of getting gene drives done much earlier, but I don’t know how to estimate that.)
You say this as if there were ways to respond which would have prevented this. I'm not sure these exist, and in general I think "ignore it" is a really really solid heuristic in an era where conflict drives clicks.