Holly Morgan

1501 karma · Joined Aug 2016 · Working (6-15 years)


Founded Giving What We Can Oxford (now EA Oxford) with Max D. (2010)
Founding team of CEA (now Effective Ventures) (2011)
Ran The Life You Can Save with Peter Singer (2012)
Graduated (2013)
Ventured out of the EA bubble (2014-2017)
Ran EA London with David Nash (2018)
Had some fun with Daoism (2019-2020)
Supporting EAs in various ways (2021+)


With apologies for not managing to be quite as eloquent/professional as the others: I have nothing but love, respect and gratitude for you, Nick; you've always been so warm, insightful and supportive. I may always think of you primarily as one of the three founding pillars of CEA/EV, but I'm excited to see what you do next :-)

I like this post so much that I'm buying you a sandwich (check your email).

Thank you for sharing! I love hearing "origin stories" from like-minded people and I found this post both clear and inspiring :-)

There's also an EA for Christians group - if you haven't already come across them, might be worth checking out!

Thanks for adding a bio, Wes, and welcome!

Feel free to reach out to me for any "help with the 8-week course on 80,000 hours" :-)

Thanks Toby - so, so exciting to see this work progressing!

One quibble:

The value of advancements and speed-ups depends crucially on whether they also bring forward the end of humanity. When they do, they have negative value

...when the area under the graph is mostly above the horizontal axis?

Even if you assign a vanishingly small probability to future trajectories in which the cumulative value of humanity/sentientkind is below zero, I imagine many of the intended users of this framework will at least sometimes want to model the impact of interventions in worlds where the default trajectory is negative (e.g. when probing the improbable)?

Maybe this is another 'further development' consciously left to others, and I don't know how much the adjustment would meaningfully change things anyway - I admit I've only skimmed the chapter! But I find it interesting that, for example, when you include the possibility of negative default trajectories, the more something looks like a 'speed-up with an endogenous end time' the less robustly bad it is (as you note), whereas the more it looks like a 'gain' the more robustly good it is.

I also hope that some of the (what I perceive to be) silent majority will chime in and demonstrate that we’re here and don’t want to see EA splintered, rebranded, or otherwise demoted in favor of some other label.


This is one of my favourite posts on this forum and I imagine the large majority of EAs I know IRL would largely agree with it (although there's definitely a selection bias there). Thank you! I feel like there have been several moments in the past year or so where I've been like, "Man, EA NYC seems really cool."

Re "best EA win," I couldn't pick a favourite but here's one I learnt a few hours ago: Eitan Fischer - who I remember from early CEA days when he founded Animal Charity Evaluators - now runs the cultivated meat company Mission Barns. The Guardian says, "[A] handful of outlets have agreed to stock its products once they are approved for sale." 🥳

All done :-) (already had a solar/crank charger+radio). Thank you!

Huh, maybe not.

Might be worth buying a physical copy of The Knowledge too (I just have).

And if anyone's looking for a big project...

If we take catastrophic risks seriously and want humanity to recover from a devastating shock as far and fast as possible, producing such a guide before it’s too late might be one of the higher-impact projects someone could take on.

That was my first thought, but I expect many other individuals/institutions have already made large efforts to preserve such info, whereas this is probably the only effort to preserve core EA ideas (at least in one place)? And it looks like the third folder - "Non-EA stuff for the post-apocalypse" - contains at least some of the elementary resources you have in mind here.

But yeah, I'm much more keen to preserve arguments for radical empathy, scout mindset, moral uncertainty etc. than, say, a write-up of the research behind HLI's charity recommendations. Maybe it would also be good to have an even smaller folder within "Main content (3GB)" with just the core ideas; the "EA Handbook" (39MB) sub-folder could perhaps serve such a purpose in the meantime.

Anyway, cool project! I've downloaded :)

Asking for a friend - will email now :)
