I work as an Advisor for 80,000 Hours, before which I worked at the Global Priorities Institute and ran Giving What We Can.

Undergraduate Making Life-Altering Choices While Sober, Please Advise

This is not quite an answer to your question, but I thought you might get a lot out of this podcast - it is at least vivid evidence that you can have a lot of impact despite finding it hard to get out of ugh fields and suffering from depression.

EA Infrastructure Fund: May 2021 grant recommendations

I agree the finance example is useful. I would expect that in both our case and the finance case the best implementation isn't actually mutually exclusive funds, but funds with clear and explicit 'central cases' and assumptions, plus some sensible (and preferably explicit) heuristics to be used across funds, like 'try to avoid multiple funds investing too much in the same thing'.

That seems to be both because there will (as Max suggests) often be no fact of the matter as to which fund some particular company fits in, and also because the thing you care about when investing in a financial fund is in large part profit. In the case of the healthcare and tech funds, there will be clear overlaps - firms using tech to improve healthcare. If I were investing in one or the other of these funds, I would be less interested in whether some particular company is more exactly described as a 'healthcare' or 'tech' company, and care more about whether it seems to be a good example of the thing I invested in. E.g. if I invested in a tech fund, presumably I think some things along the lines of 'technological advancements are likely to drive profit' and 'there are low-hanging fruit in terms of tech innovations to be applied to market problems'. If some company is doing good tech innovation and making a profit in the healthcare space, I'd be keen for the tech fund to invest in it. I wouldn't be that fussed about whether the healthcare fund also invested in it. Though if the healthcare fund had invested substantially in the company, presumably the price would go up and it would look like a less good option for the tech fund and, by extension, for me.

I'd expect it to be best for EA Funds to work similarly: set clear expectations around the kinds of thing each fund aims for and what assumptions it makes, and then worry about overlap predominantly insofar as there are large potential donations which aren't being made because some specific fund is missing (which might be a subset of a current fund, like 'non-longtermist EA infrastructure').

I would guess that EAF isn't a good option for people with very granular views about how best to do good. Analogously, if I had a lot of views about the best ways for technology companies to make a profit (for example, that technology in healthcare was a dead end), I'd often do better to fund individual companies than broad funds.

In case it doesn't go without saying, I think it's extremely important to use money in accordance with the (communicated) intentions with which it was solicited. It seems very important to me that EAs act with integrity and are considerate of others.

EA Infrastructure Fund: May 2021 grant recommendations

Thanks for finding and pasting Jonas' reply to this concern, MichaelA. I don't feel I have further information to add to it. One way to frame my plans: I intend to fund projects which promote EA principles, where both 'promote' and 'EA principles' may be understood in a number of different ways. I can imagine funded projects aiming both at the long-run future and at helping current beings. It's hard to comment in detail since I don't yet know what projects will apply.

EA Infrastructure Fund: Ask us anything!

Here are a few things: 

  • What proportion of the general population might fully buy into EA principles if they came across them in the right way, and what proportion might buy into some limited version (e.g. become happy to donate to evidence-backed global poverty interventions)? I've been pretty surprised by how much traction 'EA' as an overall concept has gotten, whereas I've maybe been negatively surprised by some limited version of EA not getting more traction than it has. These questions would influence how excited I am about wide outreach, and about how much I think it should be optimising for transmitting a large number of ideas vs. simply giving people an easy way to donate to great global development charities.
  • How much and in which cases research is translated into action. I have a hypothesis that it's often pretty hard to translate research into action: even in cases where someone is deliberating between actions and someone else in another corner of the community is researching a relevant consideration, I think it's difficult to bring these together. That maybe inclines me towards funding more 'getting things done' and less research than I might naturally be tempted to. (Though I'm probably pretty far on the 'do more research' side to start with.) It also inclines me to fund things that seem like good candidates for translating research into action.
  • How useful influencing academia is. On the one hand, there are a huge number of smart people in academia, who would like to spend their careers finding out the truth. Influencing them towards prioritising research based on impact seems like it could be really fruitful. On the other hand, it’s really hard to make it in academia, and there are strong incentives in place there, which don’t point towards impact. So maybe it would be more impactful for us to encourage people who want to do impactful work to leave academia and be able to focus their research purely on impact. Currently the fund managers have somewhat different intuitions on this question.

EA Infrastructure Fund: Ask us anything!

Speaking for myself, I'm interested in increasing the detail in my write-ups a little over the medium term (perhaps making them typically more like the length of the write-up for Stefan Schubert). I doubt I'll go all the way to making them as comprehensive as Max's.

Reasons in favour of more detailed write-ups:

  • It's particularly useful for donors to the fund and potential applicants to get to know the reasoning processes of grant makers when we've just joined and haven't yet made many grants.
  • Getting feedback from others on which parts of my reasoning process in making grants seem better and worse seems more likely to be useful than simply feedback of the form 'this grant was one I would / wouldn't have made'.


Reasons against:

  • Time spent writing reports trades off against time evaluating grants. The latter seems more important to me at the current margin. That's partly because I'd have liked decidedly more time than I had for evaluating grants, and perhaps for seeking out people I think would make good grantees.
  • I find it hard to write up grants in great detail in a way that's fully accurate and balanced without giving grantees public negative feedback. I'm hesitant to do much of that, and when I do, I want to do it very sensitively.

In my write-ups, I expect to try to include the kinds of considerations that might appear in write-ups on types of opportunity. I don't expect to produce the kind of lengthy write-ups that come to mind when you mention reports.

I would guess that the length of my write-ups going forward will depend on various things, including how much impact they seem to be having (e.g. how much useful feedback I get from them that informs my thinking, and how useful people seem to find them in deciding what projects to do, whether to apply to the fund, etc.).

EA Infrastructure Fund: Ask us anything!

Answering these thoroughly would be really tricky, but here are a few off-the-cuff thoughts: 

1. Tough to tell. My intuition is 'the same amount as I did', because I was happy with the amount I could grant to each of the recipients I granted to, and I didn't have time to look at more applications than I did. On the other hand, I could imagine that if the fund had significantly more funding, that would seem to provide a stronger mandate for trying things out and taking risks; maybe that would have inclined me to spend less time evaluating each grant and use some money to do active grantmaking, or to fund one or two of the grants that I turned down. I also expect to be less time-constrained in future, because we won't be doing an entire quarter's grants in one round and because there will be less 'getting up to speed'.

2. Probably most of these are somewhat of a bottleneck, and they also interact:
- I had pretty limited capacity this round, and hope to have more in future. Some of that was to do with not knowing much about some particular space and the plausible interventions in that space, so was partly a knowledge constraint. Some was to do with finding the most efficient way to come to an answer.
- It felt to me like there was some bottleneck of great applicants with great proposals. Some proposals stood out fairly quickly as being worth funding, so I expect I would have been able to fund more grants had there been more of these. It's possible some grants we didn't fund would have seemed worth funding had the proposal been clearer or more specific.
- There were macrostrategic questions the grant makers disagreed over - for example, the extent to which people working in academia should focus on doing good research of their own versus encouraging others to do relevant research. There are also questions of this kind which I think didn't affect any of our grants this time but which I expect will in future, such as how to prioritise spreading ideas like 'you can donate extremely cost-effectively to these global health charities' versus more generalised EA principles.

3. The proportion of good applications was fairly high compared to my expectation (though of course the fewer applications we reject, the faster we can give out grants, so until we're granting to everyone who applies there's always a sense in which the proportion of good applications is bottlenecking us). The proportion of applications that seemed pretty clearly great - well thought through, ready to go as initially proposed, and ones the committee agreed on - seemed maybe lower than I might have expected.

4. I think I noticed some of each of these, and it's a little tough to say because the better the applicant, the more likely they are to come up with good ideas and also to be well calibrated on their fit with the idea. If I could dial up just one of these, probably it would be quality of idea.

5. One worry I have is that many people who do well early in life are encouraged to do fairly traditional things - for example, they get offered good jobs and scholarships to go down set career tracks. By comparison, people who come into their own later on (e.g. late in university) are more in a position to think independently about what to work on. My sense is therefore that community building in general is systematically missing out on some of the people who would be best at it, because it's a kind of weird, non-standard thing to work on. So I guess I lean towards the side of too few people being interested in EA infrastructure work.

EA Infrastructure Fund: May 2021 grant recommendations

Thanks for the feedback! 

I basically agree with the conclusion MichaelA and Ben Pace have below. I think the EAIF's scope could do with being a bit more clearly defined, and we'll be working on that. On the other hand, I see the Lohmar and CLTR grants as fitting fairly clearly into the 'Fund scope' as pasted by MichaelA below. Currently, grants do get passed from one fund to another, but that happens mostly when the fund they initially applied to deems them not to fall easily into its scope, rather than when they fall centrally into the scope of both the fund they applied to and another fund. My view is that CLTR, for example, is a good example of increasing the extent to which policy makers are likely to use EA principles when making decisions, which makes it seem like a good example of the kind of thing I think the EAIF should be funding.

I think there are a number of ways in which someone might disagree. One is that they might think 'EA infrastructure' should be about building the EA _community_ specifically, rather than being primarily concerned with people outside the community. Another is that they might want the EAIF to fund only organisations whose portfolio of cause activities is representative of the EA movement as a whole. I think it would be worse to narrow the fund's scope in either of these ways, though I think your comment highlights that we could do with being clearer about it not being limited in that way.

Over the long run, I do think the fund should aim to support projects which represent different ways of understanding and framing EA principles, and which promote different EA principles to different extents. One reason this payout looks less representative than it felt to me is that there was a grant application from an organisation mostly fundraising for global development and animal welfare which didn't get funded, because it received funding from elsewhere while we were deliberating.

The scope of the EAIF is likely to continue overlapping in some uneasy ways with the other funds. My instinct would be not to be too worried about that, as long as we’re clear about what kinds of things we’re aiming at funding and do fund. But it would be interesting to hear other people’s hunches about the importance of the funds being mutually exclusive in terms of remit.

EA Infrastructure Fund: Ask us anything!

Speaking just for myself: I don’t think I could currently define a meaningful ‘minimum absolute bar’. Having said that, the standard most salient to me is often ‘this money could have gone to anti-malaria bednets to save lives’. I think (at least right now) it’s not going to be that useful to think of the EAIF as a cohesive whole with a specific bar, let alone explicit criteria for funding. A better model is a cluster of people, each with a different and continuously updating understanding of the ways we could be improving the world, trying to figure out where we think money will do the most good and whether we’ll find better or worse opportunities in the future.

Here are a couple of things pushing me to have a low-ish bar for funding: 

  • I think EA currently has substantially more money than it has had in the past, but hasn’t progressed as fast in figuring out how to turn that into improving the world. That makes me inclined to fund things and see how they go.
  • As a new committee, it seems pretty good to fund some things, make predictions, and see how they pan out. 
  • I’d prefer EA to be growing faster than it currently is, so funding projects now rather than saving the money to try to find better projects in future looks good to me.  

Here are a couple of things driving up my bar:

  • The EAIF gets donations from a broad range of people. It seems important for all the donations to be at least somewhat explicable to the majority of its donors. This makes me hesitant to fund things more speculative than I would fund with my own money, and inclines me to stick more closely to ‘central cases’ of infrastructure building than I otherwise would. This seems particularly challenging for this fund, since its remit is a bit esoteric and not yet particularly clearly defined. (As evidenced by comments on the most recent grant report, I didn’t fully succeed in this aim this time round.)
  • Something particularly promising which I don’t fund is fairly likely to get funded by others, whereas the harm from something harmful I do fund can’t be undone by others, so I want to be fairly cautious while I’m starting out in grantmaking.