Currently: Applied Researcher at Founders Pledge, Assistant Fund Manager at EAIF
Previously: Visiting Fellow at Rethink Priorities, PPE student at Warwick, EA Warwick Fellowship co-ordinator.
Rob Bensinger of MIRI tweets:
...I'm happy to say that MIRI leadership thinks "humanity never builds AGI" would be the worst catastrophe in history, would cost nearly all of the future's value, and is basically just unacceptably bad as an option.
Just to add that the Research Institute for Future Design (RIFD) is a Founders Pledge recommendation for longtermist institutional reform.
(disclaimer: I am a researcher at Founders Pledge)
OpenPhil might be in a position to expand EA’s expected impact if it added a cause area that allowed for more speculative investments in Global Health & Development.
My impression is that Open Philanthropy's Global Health and Development team already does this? For example, OP has focus areas on global aid policy, scientific research, and South Asian air quality, all of which are inherently risky/uncertain areas.
They have also taken a hits-based approach philosophically, and this is what distinguishes them from GiveWell - see e.g.
Hits. We are explicitly pursuing a hits-based approach to philanthropy with much of this work, and accordingly might expect just one or two “hits” from our portfolio to carry the whole. In particular, if one or two of our large science grants ended up 10x more cost-effective than GiveWell’s top charities, our portfolio to date would cumulatively come out ahead. In fact, the dollar-weighted average of the 33 BOTECs we collected above is (modestly) above the 1,000x bar, reflecting our ex ante assessment of that possibility. But the concerns about the informational value of those BOTECs remain, and most of our grants seem noticeably less likely to deliver such “hits”.
[Reposting my comment here from previous version]
GiveWell have looked into Global Health regulation - see more here: https://www.givewell.org/research/public-health-regulation-update-August-2021
Thanks for writing this, Lizka! I agree with many of the points in this [I was also a visiting fellow on the longtermist team this summer]. I'll throw my two cents in with my own reflections (I broadly share Lizka's experience, so here I just highlight the upsides/downsides that especially resonated with me, or things unique to my own situation):
Vague background:
Upsides:
Downsides:
If it would be helpful for others, I might write up a shortform on some of these points in more depth, especially the things I learnt about being a better researcher.
Overall, I also really enjoyed my time at RP, and would highly recommend :)
(I did not speak to anyone at RP before writing this).
This is super helpful, thank you!
Which departments/roles do you think are most important to work in from an EA perspective? The Cabinet Office, HM Treasury and FCDO seem particularly impactful, but are also the most crowded and competitive. Are there lesser known departments doing neglected but important work? (e.g. my impression is DEFRA would be this for animal welfare policy - are there similar opportunities in other cause areas?). Thanks!
I think this could be an interesting avenue to explore. One very basic way to (very roughly) do this is to model p(doom) effectively as a discount rate. This could be an additional user input on GiveWell's spreadsheets.
So for example, if your p(doom) is 20% in 20 years, then you could increase the discount rate by roughly 1% per year (more precisely, 1 - 0.8^(1/20) ≈ 1.1% per year).
[Technically this will be somewhat off, since (I'm guessing) most people's p(doom) doesn't increase at a constant rate, in the way a fixed discount rate does.]
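To make the conversion concrete, here's a minimal sketch of the arithmetic, assuming a constant annual hazard rate (exactly the simplification flagged in the bracketed caveat above); the function name and the 20%-over-20-years numbers are just illustrative:

```python
# Minimal sketch: translate a horizon-level p(doom) into an extra annual
# discount rate, assuming a constant annual hazard rate.
# Function name and example numbers are illustrative, not GiveWell's.

def extra_annual_discount(p_doom: float, horizon_years: float) -> float:
    """Return d such that (1 - d) ** horizon_years == 1 - p_doom."""
    return 1 - (1 - p_doom) ** (1 / horizon_years)

print(f"{extra_annual_discount(0.20, 20):.2%}")  # ~1.11% per year
```

With those inputs the extra annual discount comes out to about 1.1%, which is where the "roughly 1% per year" figure above comes from.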