I work as an Advisor for 80,000 Hours, before which I worked at the Global Priorities Institute and ran Giving What We Can.
This is not quite an answer to your question, but I thought you might get a lot out of this podcast - at the least, it's vivid evidence that you can have a lot of impact despite finding it hard to get out of ugh fields and suffering from depression.
I agree the finance example is useful. I would expect that in both our case and the finance case the best implementation isn't actually mutually exclusive funds, but funds with clear and explicit 'central cases' and assumptions, plus some sensible (and preferably explicit) heuristics to be used across funds like 'try to avoid multiple funds investing too much in the same thing'.
That seems to be both because there will (as Max suggests) often be no fact of the matter as to which fund some particular company fits in, and also because the thing you care about when investing in a financial fund is in large part profit. In the case of the healthcare and tech funds, there will be clear overlaps - firms using tech to improve healthcare. If I were investing in one or the other of these funds, I would be less interested in whether some particular company is more exactly described as a 'healthcare' or 'tech' company, and more interested in whether it seems to be a good example of the thing I invested in. Eg if I invested in a tech fund, presumably I think something along the lines of 'technological advancements are likely to drive profit' and 'there is low-hanging fruit in terms of tech innovations to be applied to market problems'. If some company is doing good tech innovation and making a profit in the healthcare space, I'd be keen for the tech fund to invest in it. I wouldn't be that fussed about whether the healthcare fund also invested in it. Though if the healthcare fund had invested substantially in the company, presumably the price would go up and it would look like a less good option for the tech fund and, by extension, for me.

I'd expect it to be best for EA Funds to work similarly: set clear expectations around the kinds of thing each fund aims for and what assumptions it makes, and then worry about overlap predominantly insofar as there are large potential donations which aren't being made because some specific fund is missing (which might be a subset of a current fund, like 'non-longtermist EA infrastructure').
I would guess that EAF isn't a good option for people with very granular views about how best to do good. Analogously, if I had a lot of views about the best ways for technology companies to make a profit (for example, that technology in healthcare was a dead end) I'd often do better to fund individual companies than broad funds.
In case it doesn't go without saying, I think it's extremely important to use money in accordance with the (communicated) intentions with which it was solicited. It seems very important to me that EAs act with integrity and are considerate of others.
Thanks for finding and pasting Jonas' reply to this concern, MichaelA. I don't feel I have further information to add to it. One way to frame my plans: I intend to fund projects which promote EA principles, where both 'promote' and 'EA principles' may be understood in a number of different ways. I can imagine the projects aiming both at the long-run future and at helping current beings. It's hard to comment in detail since I don't yet know what projects will apply.
Here are a few things:
Speaking for myself, I'm interested in increasing the detail in my write-ups a little over the medium term (perhaps making them typically more like the length of the write-up for Stefan Schubert). I doubt I'll go all the way to making them as comprehensive as Max's. Pros:
I expect to try to include considerations in my write-ups which might be found in write-ups of types of opportunity. I don't expect to produce the kind of lengthy write-ups that come to mind when you mention reports.
I would guess that the length of my write-ups going forward will depend on various things, including how much impact they seem to be having (eg how much useful feedback I get from them that informs my thinking, and how useful people seem to be finding them in deciding what projects to do / whether to apply to the fund etc).
Answering these thoroughly would be really tricky, but here are a few off-the-cuff thoughts:

1. Tough to tell. My intuition is 'the same amount as I did', because I was happy with the amount I could grant to each of the recipients I granted to, and I didn't have time to look at more applications than I did. Otoh, if the fund had significantly more funding, that would seem to provide a stronger mandate for trying things out and taking risks, so maybe that would have inclined me to spend less time evaluating each grant and use some money to do active grantmaking, or maybe it would have inclined me to fund one or two of the grants that I turned down. I also expect to be less time-constrained in future, because we won't be doing an entire quarter's grants in one round, and because there will be less 'getting up to speed'.

2. Probably most of these are somewhat of a bottleneck, and they also interact:
- I had pretty limited capacity this round, and hope to have more in future. Some of that was also to do with not knowing much about some particular space and the plausible interventions in that space, so was a knowledge constraint. Some was to do with finding the most efficient way to come to an answer.
- It felt to me like there was some bottleneck of great applicants with great proposals. Some proposals stood out fairly quickly as being worth funding to me, so I expect I would have been able to fund more grants had there been more of these. It's possible some grants we didn't fund would have seemed worth funding had the proposal been clearer / more specific.
- There were macrostrategic questions the grant makers disagreed over - for example, the extent to which people working in academia should focus on doing good research of their own versus encouraging others to do relevant research.
There are also such questions that I think didn't affect any of our grants this time but I expect to in future, such as how to prioritise spreading ideas like 'you can donate extremely cost-effectively to these global health charities' versus more generalised EA principles.
3. The proportion of good applications was fairly high compared to my expectation (though ofc the fewer applications we reject, the faster we can give out grants, so until we're granting to everyone who applies, there's always a sense in which the proportion of good applications is bottlenecking us). The proportion of applications that seemed pretty clearly great - well thought through, ready to go as initially proposed, and agreed on by the committee - seemed maybe lower than I might have expected.
4. I think I noticed some of each of these, and it's a little tough to say because the better the applicant, the more likely they are to come up with good ideas and also to be well calibrated on their fit with the idea. If I could dial up just one of these, probably it would be quality of idea.
5. One worry I have is that many people who do well early in life are encouraged to do fairly traditional things - for example, they get offered good jobs and scholarships to go down set career tracks. By comparison, people who come into their own later (eg late in university) are more in a position to think independently about what to work on. So my sense is that community building in general is systematically missing out on some of the people who would be best at it, because it's a kind of weird, non-standard thing to work on. I therefore lean towards thinking there are too few people interested in EA infrastructure stuff.
No set plans yet.
Thanks for the feedback!
I basically agree with the conclusion MichaelA and Ben Pace have below. I think the EAIF’s scope could do with being a bit more clearly defined, and we’ll be working on that. Otoh, I see the Lohmar and CLTR grants as fitting fairly clearly into the ‘Fund scope’ as pasted by MichaelA below. Currently, grants do get passed from one fund to another, but that happens mostly when the fund they initially applied to deems them not to fall easily into its scope, rather than when they fall centrally both into the scope of the fund they applied to and into that of another fund. My view is that CLTR, for example, is a clear case of increasing the extent to which policymakers are likely to use EA principles when making decisions, which makes it a good example of the kind of thing I think the EAIF should be funding.
I think that there are a number of ways in which someone might disagree. One is that they might think that ‘EA infrastructure’ should be to do with building the EA _community_ specifically, rather than being primarily concerned with people outside the community. Another is that they might want the EAIF to only fund organisations whose portfolio of cause activities is representative of the whole EA movement. I think it would be worse to narrow the fund’s scope in either of these ways, though your comment highlights that we could do with being clearer that it isn’t limited in those ways.
Over the long run, I do think the fund should aim to support projects which represent different ways of understanding and framing EA principles, and which promote different EA principles to different extents. One way in which this fund payout looks less representative than it felt to me is that there was a grant application from an organisation mostly fundraising for global development and animal welfare, which didn’t get funded because it received funding from elsewhere while we were deliberating.
The scope of the EAIF is likely to continue overlapping in some uneasy ways with the other funds. My instinct would be not to be too worried about that, as long as we’re clear about what kinds of things we’re aiming at funding and do fund. But it would be interesting to hear other people’s hunches about the importance of the funds being mutually exclusive in terms of remit.
Speaking just for myself: I don’t think I could currently define a meaningful ‘minimum absolute bar’. Having said that, the standard most salient to me is often ‘this money could have gone to anti-malaria bednets to save lives’. I think (at least right now) it’s not going to be that useful to think of the EAIF as a cohesive whole with a specific bar, let alone explicit criteria for funding. A better model is a cluster of people with different, continuously updating understandings of how we could be improving the world, trying to figure out where we think money will do the most good and whether we’ll find better or worse opportunities in the future.
Here are a couple of things pushing me to have a low-ish bar for funding:
Here are a couple of things driving up my bar: