I work as an Advisor for 80,000 Hours, before which I worked at the Global Priorities Institute and ran Giving What We Can.

Evidence from two studies of EA careers advice interventions

Thanks for taking the time to do such a rigorous study, and also for writing it up and thinking through the implications for other EAs!

80,000 Hours one-on-one team plans, plus projects we’d like to see

Thanks for this feedback. I had a go at rewriting our 'why wasn't I accepted' FAQ. It now reads:

Why wasn’t I accepted?

We sincerely regret that we can’t advise everyone who applies. We read every application individually and are thankful that you took the time to apply. It’s really touching reading about people who have come across 80,000 Hours and are excited about using their careers to help others.

We aim to talk to the people we think we can help most. Our not speaking with you does not mean we think you won’t have a highly impactful career. Whether we can be helpful to you sometimes depends on contingent factors like whether one of our advisers happens to know of a role or introduction right now that might be a good fit for you. We also have far less information about you than you do, so we aren’t even necessarily making the right calls about who we can help most.

You’re very welcome to reapply, particularly if your situation changes. If you’re thinking of doing so, it might be worth reading our key ideas series and trying out our career planning process, which we developed to help people think through their career decisions. You can also get involved in our community to get help from other people trying to do good with their careers.

Undergraduate Making Life-Altering Choices While Sober, Please Advise

This is not quite an answer to your question, but I thought you might get a lot out of this podcast - it's at least vivid evidence that you can have a lot of impact despite finding it hard to get out of ugh fields and suffering from depression.

EA Infrastructure Fund: May 2021 grant recommendations

I agree the finance example is useful. I would expect that in both our case and the finance case the best implementation isn't actually mutually exclusive funds, but funds with clear and explicit 'central cases' and assumptions, plus some sensible (and preferably explicit) heuristics to be used across funds like 'try to avoid multiple funds investing too much in the same thing'. 

That seems to be both because there will (as Max suggests) often be no fact of the matter as to which fund some particular company fits in, and also because the thing you care about when investing in a financial fund is in large part profit. In the case of the healthcare and tech funds, there will be clear overlaps - firms using tech to improve healthcare.

If I were investing in one or other of these funds, I would be less interested in whether some particular company is more exactly described as a 'healthcare' or 'tech' company, and care more about whether it seems to be a good example of the thing I invested in. Eg if I invested in a tech fund, presumably I think some things along the lines of 'technological advancements are likely to drive profit' and 'there are low-hanging fruit in terms of tech innovations to be applied to market problems'. If some company is doing good tech innovation and making profit in the healthcare space, I'd be keen for the tech fund to invest in it. I wouldn't be that fussed about whether the healthcare fund also invested in it. Though if the healthcare fund had invested substantially in the company, presumably the price would go up and it would look like a less good option for the tech fund and, by extension, for me.

I'd expect it to be best for EA Funds to work similarly: set clear expectations around the kinds of thing each fund aims for and what assumptions it makes, and then worry about overlap predominantly insofar as there are large potential donations which aren't being made because some specific fund is missing (which might be a subset of a current fund, like 'non-longtermist EA infrastructure').

I would guess that EAF isn't a good option for people with very granular views about how best to do good. Analogously, if I had a lot of views about the best ways for technology companies to make a profit (for example, that technology in healthcare was a dead end) I'd often do better to fund individual companies than broad funds. 

In case it doesn't go without saying, I think it's extremely important to use money in accordance with the (communicated) intentions with which it was solicited. It seems very important to me that EAs act with integrity and are considerate of others.

EA Infrastructure Fund: May 2021 grant recommendations

Thanks for finding and pasting Jonas' reply to this concern, MichaelA. I don't feel I have further information to add to it. One way to frame my plans: I intend to fund projects which promote EA principles, where both 'promote' and 'EA principles' may be understood in a number of different ways. I can imagine the projects aiming both at the long-run future and at helping current beings. It's hard to comment in detail since I don't yet know what projects will apply.

EA Infrastructure Fund: Ask us anything!

Here are a few things: 

  • What proportion of the general population might fully buy in to EA principles if they came across them in the right way, and what proportion of people might buy in to some limited version (eg become happy to donate to evidence backed global poverty interventions)? I’ve been pretty surprised how much traction ‘EA’ as an overall concept has gotten. Whereas I’ve maybe been negatively surprised by some limited version of EA not getting more traction than it has. These questions would influence how excited I am about wide outreach, and about how much I think it should be optimising for transmitting a large number of ideas vs simply giving people an easy way to donate to great global development charities.
  • How much and in which cases research is translated into action. I have some hypothesis that it’s often pretty hard to translate research into action. Even in cases where someone is deliberating between actions and someone else in another corner of the community is researching a relevant consideration, I think it’s difficult to bring these together. I think maybe that inclines me towards funding more ‘getting things done’ and less research than I might naturally be tempted to. (Though I’m probably pretty far on the ‘do more research’ side to start with.) It also inclines me to fund things that might seem like good candidates for translating research into action.
  • How useful influencing academia is. On the one hand, there are a huge number of smart people in academia, who would like to spend their careers finding out the truth. Influencing them towards prioritising research based on impact seems like it could be really fruitful. On the other hand, it’s really hard to make it in academia, and there are strong incentives in place there, which don’t point towards impact. So maybe it would be more impactful for us to encourage people who want to do impactful work to leave academia and be able to focus their research purely on impact. Currently the fund managers have somewhat different intuitions on this question.
EA Infrastructure Fund: Ask us anything!

Speaking for myself, I'm interested in increasing the detail in my write-ups a little over the medium term (perhaps making them typically closer to the length of the write-up for Stefan Schubert). I doubt I'll go all the way to making them as comprehensive as Max's.

  • It seems particularly useful for donors to the fund and potential applicants to get to know the reasoning processes of grant makers when we've just joined and haven't yet made many grants.
  • Getting feedback from others on what parts of my reasoning process in making grants seem better and worse seems more likely to be useful than simply feedback of the form 'this grant was one I would / wouldn't have made'.

On the other hand:

  • Time writing reports trades against time evaluating grants. The latter seems more important to me at the current margin. That's partly because I'd have liked decidedly more time than I had for evaluating grants, and perhaps for seeking out people I think would make good grantees.
  • I find it hard to write up grants in great detail in a way that's fully accurate and balanced without giving grantees public negative feedback. I'm hesitant to do much of that, and when I do it, want to do it very sensitively.

I expect to try to include in my write-ups the kinds of considerations that might be found in write-ups of types of opportunity. I don't expect to produce the kind of lengthy write-ups that come to mind when you mention reports.

I would guess that the length of my write-ups going forward will depend on various things, including how much impact they seem to be having (eg how much useful feedback I get from them that informs my thinking, and how useful people seem to find them in deciding what projects to do / whether to apply to the fund, etc).

EA Infrastructure Fund: Ask us anything!

Answering these thoroughly would be really tricky, but here are a few off-the-cuff thoughts: 

1. Tough to tell. My intuition is 'the same amount as I did', because I was happy with the amount I could grant to each of the recipients I granted to, and I didn't have time to look at more applications than I did. On the other hand, I could imagine that if the fund had significantly more funding, that would seem to provide a stronger mandate for trying things out and taking risks; so maybe that would have inclined me to spend less time evaluating each grant and use some money to do active grantmaking, or maybe to fund one or two of the grants that I turned down. I also expect to be less time constrained in future, because we won't be doing an entire quarter's grants in one round and because there will be less 'getting up to speed'.

2. Probably each of these is something of a bottleneck, and they also interact:
- I had pretty limited capacity this round, and hope to have more in future. Some of that was also to do with not knowing much about some particular space and the plausible interventions in that space, so was a knowledge constraint. Some was to do with finding the most efficient way to come to an answer.
- It felt to me like there was some bottleneck of great applicants with great proposals. Some proposals stood out fairly quickly as being worth funding, so I expect I'd have been able to fund more grants had there been more of these. It's possible some grants we didn't fund would have seemed worth funding had the proposal been clearer / more specific.
- There were macrostrategic questions the grant makers disagreed over - for example, the extent to which people working in academia should focus on doing good research of their own versus encouraging others to do relevant research. There are also such questions that I think didn't affect any of our grants this time but which I expect will in future, such as how to prioritise spreading ideas like 'you can donate extremely cost-effectively to these global health charities' versus more generalised EA principles.

3. The proportion of good applications was fairly high compared to my expectation (though ofc the fewer applications we reject the faster we can give out grants, so until we're granting to everyone who applies, there's always a sense in which the proportion of good applications is bottlenecking us). The proportion of applications that seemed pretty clearly great, well thought through and ready to go as initially proposed, and which the committee agreed on, seemed maybe lower than I might have expected. 

4. I think I noticed some of each of these, and it's a little tough to say because the better the applicant, the more likely they are to come up with good ideas and also to be well calibrated on their fit with the idea. If I could dial up just one of these, probably it would be quality of idea.

5. One worry I have is that many people who do well early in life are encouraged to do fairly traditional things - for example, they get offered good jobs and scholarships to go down set career tracks. By comparison, people who come into their own later (eg late in university) are more in a position to think independently about what to work on. So my sense is that community building in general is systematically missing out on some of the people who would be best at it, because it's a kind of weird, non-standard thing to work on. I guess that means I lean towards there being too few people interested in EA infrastructure work.
