

Broadly interested in AI governance (research, policy, and strategy), longtermist research, and EA strategy/prioritisation.

My blog: https://www.hazell.substack.com

MSc student in Social Science of the Internet @ University of Oxford

Previous: Content & Research Associate @ Giving What We Can

Current: Summer Research Fellow @ GovAI



That phrasing is better, IMO. Thanks Michael.

I think the debate between HLI and GW is great. I've certainly learned a lot, and have slightly updated my views about where I should give. I agree that competition between charities (and charity evaluators) is something to strive for, and I hope HLI keeps challenging GiveWell in this regard.

Thanks for the post, Michael — these sorts of posts have been very helpful in making me a more informed donor. I just want to point out one minor thing, though.

I appreciate the work you and your team do, and I plan to donate part of my giving season donations to either your organisation or StrongMinds, or a combination of both. But I did find the title of this post unnecessarily adversarial towards GiveWell (although it's clever, I must admit).

I've admired the fruitful, polite, and productive interactions between GW and HLI in the past, so I somewhat dislike the tone struck here.


I also think another, similar bonus is that prizes can sometimes get people to do EA things who counterfactually wouldn't have.

E.g., a prize for alignment work could plausibly attract computer scientists who would otherwise be doing other things.

This could signal-boost EA and the cause area more generally, which is good.

I feel like this question is so much more fun if we can include dead people, so I’m gonna do just that.

Off the top of my head:

  • Isaac Newton
  • John Forbes Nash
  • John von Neumann
  • Alan Turing
  • Amos Tversky
  • Ada Lovelace
  • Leonhard Euler
  • Terence Tao
  • John Stuart Mill
  • Eliezer Yudkowsky
  • Herbert Simon

This is a very cool model and I would absolutely be thrilled to see someone write up a post about it!

It seems like there is a quality–quantity trade-off: you could grow EA faster by expecting less engagement or commitment. I think there's a lot of value in thinking about how to make EA scale massively. For example, if we wanted to grow EA to millions of people, maybe we could lower the barrier to entry by paring the message down to a small number of core ideas, or by advertising low-commitment actions such as earning to give. I think massively scaling up the number of people would benefit the most scalable charities, such as GiveDirectly.

I suppose this mostly has to do with growing the size of the "EA community", whereas I'm mostly thinking about growing the number of "people doing effectively altruistic things". There's a big difference in the composition of those groups. I also think there is a trade-off in how community-building resources are spent, but the point of trying to encourage influence is that it doesn't need to trade off against highly engaged EAs. One analogy: encouraging people to donate 10% doesn't mean that someone like SBF can't pledge 99%.

The counterargument is that impact per person tends to be long-tailed. For example, Sam Bankman-Fried's net worth is ~100,000x that of a typical person. Therefore, who is in EA might matter as much as, or more than, how many EAs there are.

Yup, agreed — this is my model as well. That being said, I wouldn't be surprised if the impact of influence also follows a long-tailed distribution: imagine we manage to convince 1,000 people of the importance of AI-related x-risk, and one of them ends up being the one who pushes for some highly impactful policy change.

It's not clear to me whether quality or quantity is more important, because some of the benefits are hard to quantify. One easily measurable metric is donations: adding a sufficiently large number of average donors should have the same financial value as adding a single billionaire.

Agreed. I'm similarly fuzzy on this and would really appreciate it if someone did more analysis here rather than deferring to the meme that EA is growing too fast/slow.

I think that the value is going to vary hugely by the cause area and the exact ask.

For global poverty, anyone can donate money to buy malaria nets, though it's worth remembering that Dustin Moskovitz is worth a crazy number of low-value donors.

For AI safety, it's actually surprisingly tricky to find robustly net-positive actions to pursue. Unfortunately, it would be very easy to lobby a politician to pass legislation that then makes the situation worse, or to persuade voters that this is an important issue, only to have them vote for things that sound good rather than things that actually solve the problem.


For global health & development, I think it is still quite useful to have influence over things like research and policy prioritisation (what topics academics should research, and what areas of policy think tanks should focus on), government foreign aid budgets, vaccine R&D, etc. This is tangential, but even if Dustin is worth a large number of low-value donors (he is), the marginal donation to effective global poverty charities is still very impactful.

For AI, I agree that it is tricky to find robustly net-positive actions, at least as of right now. I expect this to change over the next few years, and I hope people in relevant positions will be ready to implement these actions once we have more clarity about which ones are good. Whether or not they're highly engaged EAs doesn't seem to matter much, so long as they actually do the things, IMO.

Thank you for the work you and your team do, Julia. Many of these situations are incredibly tricky to handle, and I’m very grateful the EA community has people working on them.

Here is my first stab at organising some pieces of content that would be good for testing your fit for this kind of work. I tried to balance it as much as I could with respect to length, difficulty, format, and cause area.
