Hi everyone!
Managers of the EA Infrastructure Fund will be available for an Ask Me Anything session. We'll start answering questions on Friday, June 4th, though some of us will only be able to answer questions the week after. Nevertheless, if you would like to make sure that all fund managers can consider your question, you may want to post it before early Friday morning, UK time.
What is the EA Infrastructure Fund?
The EAIF is one of the four EA Funds. While the other three Funds support direct work on various causes, this Fund supports work that could multiply the impact of direct work, including projects that provide intellectual infrastructure for the effective altruism community, run events, disseminate information, or fundraise for effective charities.
Who are the fund managers, and why might you want to ask them questions?
The fund managers are Max Daniel, Michelle Hutchinson, and Buck Shlegeris. In addition, EA Funds Executive Director Jonas Vollmer is temporarily taking on chairperson duties, advising, and voting consultatively on grants. Ben Kuhn was a guest manager in our last grant round. They will all be available for questions, though some may have spotty availability and might post their answers as they have time throughout next week.
One particular reason why you might want to ask us questions is that we are all new in these roles: All fund managers of the EAIF have recently changed, and this was our first grant round.
What happened in our most recent grant round?
We have made 26 grants totalling about $1.2 million. They include:
- Two grants totalling $139,200 to Emma Abele, James Aung, Bella Forristal, and Henry Sleight. They will work together to identify and implement new ways to support EA university groups – e.g., through high-quality introductory talks about EA and creating other content for workshops and events. University groups have historically been one of the most important sources of highly engaged EA community members, and we believe there is significant untapped potential for further growth. We are also excited about the team, based significantly on their track record – e.g., James and Bella previously led two of the globally most successful university groups.
- $41,868 to Zak Ulhaq to develop and implement workshops aimed at helping highly talented teenagers apply EA concepts and quantitative reasoning to their lives. We are excited about this grant because we generally think that educating pre-university audiences about EA-related ideas and concepts could be highly valuable; e.g., we’re aware of (unpublished) survey data indicating that in a large sample of highly engaged community members who learned about EA in the last few years, about ¼ had first heard of EA when they were 18 or younger. At the same time, this space seems underexplored. Projects that are mindful of the risks involved in engaging younger audiences therefore have a high value of information – if successful, they could pave the way for many more projects of this type. We think that Zak is a good fit for efforts in this space because he has a strong technical background and experience with both teaching and EA community building.
- $5,000 to the Czech Association for Effective Altruism to give away EA-related books to people with strong results in Czech STEM competitions, AI classes, and similar. We believe that this is a highly cost-effective way to engage a high-value audience; long-form content allows for deep understanding of important ideas, and surveys typically find books have helped many people become involved with EA (e.g., in the 2020 EA Survey, more than ⅕ of respondents said a book was important for getting them more involved).
- $248,300 to Rethink Priorities to allow Rethink to take on nine research interns (7 FTE) across various EA causes, plus support for further EA movement strategy research. We have been impressed with Rethink’s demonstrated ability to successfully grow their team while maintaining a constant stream of high-quality outputs, and think this puts them in a good position to provide growth opportunities for junior researchers. They also have a long history of doing empirical research relevant to movement strategy (e.g., the EA survey), and we are excited about their plans to build upon this track record by running additional surveys illuminating how various audiences think of EA and how responsive they are to EA messaging.
For more detail, see our payout report. It covers all grants from this round and provides more detail on our reasoning behind some of them.
The application deadline for our next grant round will be the 13th of June. After this round is wrapped up, we plan to accept rolling applications.
Ask any questions you like; we'll respond to as many as we can.
A question for the fund managers: When the EAIF funds a project, roughly how should credit be allocated between the different involved parties, where the involved parties are:
Presumably this differs a lot between grants; I'd be interested in some typical figures.
This question is important because you need a sense of these numbers in order to decide which of these parties you should try to be. E.g., if the donors get 90% of the credit, then earning to give (EtG) looks 9x better than if they get 10%.
(I'll provide my own answer later.)
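The 9x claim above is simple proportional arithmetic. A minimal sketch, using the commenter's hypothetical 90%/10% credit shares (these are illustrative numbers, not figures from the fund managers):

```python
# Illustrative only: the credit shares are the commenter's hypotheticals.
def donor_attributable_value(credit_percent, total_project_value=1.0):
    """Value credited to the donor under a simple proportional-credit model."""
    return (credit_percent / 100) * total_project_value

# If donors get 90% of the credit rather than 10%, each donated dollar is
# credited with 9x as much impact, so earning to give looks 9x better.
ratio = donor_attributable_value(90) / donor_attributable_value(10)
print(round(ratio, 6))  # → 9.0
```

Of course, real credit allocation isn't this cleanly multiplicative; the answers below treat the shares as rough intuitions rather than outputs of a model.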
Making up some random numbers:
This is for a typical grant where someone applies to the fund with a reasonably promising project on their own and the EAIF gives them some quick advice and feedback. For a case of strong active grantmaking, I might say something more like 8% / 30% / 12% / 50%.
This is based on the reasoning that we're quite constrained by promising applications and have a lot of funding available.
(I'd be very interested in your answer if you have one btw.)
Would the EAIF be interested in a) post hoc funding of previous salary/other expenses or b) impact certificates that account for risk taken?
Some context: When I was thinking of running SF/Bay Area EA full-time*, one thing that was fairly annoying for me is that funders (correctly) were uninterested in funding me until there was demonstrated success/impact, or at least decent proxies for such. This intuition was correct; however, from my perspective the risk allocation seemed asymmetric. If I did a poor job, I would eat all the costs. If I did a phenomenally good job, the best I could hope for (from a funding perspective) would be a promise of continued funding and maybe back payments for past work.
In the for-profit world, if you disagree with the judgement of funders, press on, and later turn out to be right, you get a greater share of the equity etc. Nothing equivalent seemed to be true within EA's credit allocation.
It seems like if you disagree with the judgment of funders, the best you can hope to do is break even. Of course, a) I read not being funded as some signal that people didn't think me/my project was sufficiently promising and b) maybe some fun... (read more)
Thanks for doing this AMA!
In the recent payout report, Max Daniel wrote:
This also seems to me like quite an important issue. It seems reminiscent of Open Phil's idea of making grants "when they seem better than our “last dollar” (more discussion of the “last dollar” concept here), and [saving] the money instead when they don’t".
Could you (any fund managers, including but not limited to Max) say more about how you currently think about this? Subquestions include:
I feel very unsure about this. I don't think my position on this question is very well thought through.
Most of the time, the reason I don't want to make a grant doesn't feel like "this isn't worth the money", it feels like "making this grant would be costly for some other reason". For example, when someone applies for a salary to spend some time researching some question which I don't think they'd be very good at researching, I usually don't want to fund them, but this is mostly because I think it's unhealthy in various ways for EA to fund people to flail around unsuccessfully rather than because I think that if you multiply the probability of the research panning out by the value of the research, you get an expected amount of good that is worse than longtermism's last dollar.
I think this question feels less important to me because the grants it affects are marginal anyway. I think that more than half of the impact I have via my EAIF grantmaking is through the top 25% of the grants I make. And I am able to spend more time on making those best grants go better, by working on active grantmaking or by advising grantees in various ways. And coming up with a more ... (read more)
Speaking just for myself: I don’t think I could currently define a meaningful ‘minimum absolute bar’. Having said that, the standard most salient to me is often ‘this money could have gone to anti-malaria bednets to save lives’. I think (at least right now) it’s not going to be that useful to think of the EAIF as a cohesive whole with a specific bar, let alone explicit criteria for funding. A better model is a cluster of people, each with continuously updating views on how we could be improving the world, trying to figure out where we think money will do the most good and whether we’ll find better or worse opportunities in the future.
Here are a couple of things pushing me to have a low-ish bar for funding:
- I think EA currently has substantially more money than it has had in the past, but hasn’t progressed as fast in figuring out how to turn that into improving the world. That makes me inclined to fund things and see how they go.
- As a new committee, it seems pretty good to fund some things, make predictions, and see how they pan out.
- I’d prefer EA to be growing faster than it currently is, so funding projects now rather than saving the money to try to fi
... (read more)

Some further things pushing me towards lowering my bar:
Some further things increasing my bar:
Basically everything Jonas and Michelle have said on this sounds right to me as well.
Maybe a minor difference:
This contains two misconceptions:
(1) We are hiring seven interns, but each will only be there for three months. I believe it is 1.8 FTE collectively.
(2) The grant is not being entirely allocated to intern compensation.
Interns at Rethink Priorities currently earn $23-25/hr. Researchers hired on a permanent basis earn more than that, currently $63K-85K/yr (prorated for part-time work).
What % of grants you almost funded do you expect to be net negative for the world, had they counterfactually been implemented?
See paired question about grants you actually funded.
I emailed CEA with some questions about the LTFF and EAIF, and Michael Aird (MichaelA on the forum) responded about the EAIF. He said that I could post his email here. Some of the questions overlap with the contents of this AMA (among other things), but I included everything. My questions are formatted as quotes, and the unquoted passages below were written by Michael.
Basically correct. Though some decisions take longer, mainly for unusually complicated, risky, and/or large grants, or grants where the applicant decides in response to our questions that they need to revisit their plans and get back to us later. And many decisions are faster.
Basically correct, though bear in mind that that doesn't necessarily include the time spent actually doing the planning. We basically just don't want people to spend >2 hours on actually writing the application, but it'll often make sense to spend... (read more)
[I'm going to adapt some questions from myself or other people from the recent Long-Term Future Fund and Animal Welfare Fund AMAs.]
- How much do you think you would've granted in this recent round if the total funding available to the IF had been ~$5M? ~$10M? ~$20M?
- What do you think is your main bottleneck to giving more? Some possibilities that come to mind:
- Available funding
- Good applicants with good proposals for implementing good project ideas
- And to the extent that this is your bottleneck, do yo
- Grantmaker capacity to evaluate applications
- Maybe this should capture both whether they have time and whether they have techniques or abilities to evaluate project ideas whose expected value seems particularly hard to assess
- Grantmaker capacity to solicit or generate new project ideas
- Fundamental, macrostrategic, basic, or crucial-considerations-like work that could aid in generating project ideas and evaluating applications
- E.g., it sounds like this would've been relevant to Max Daniel's views on the IIDM working group in the recent round
- To the extent that you're bottlenecked by the number of good applications or would be bottlenecked by that if funded more, is that because (or do you ex
... (read more)

Answering these thoroughly would be really tricky, but here are a few off-the-cuff thoughts:
1. Tough to tell. My intuition is 'the same amount as I did', because I was happy with the amount I could grant to each of the recipients I granted to, and I didn't have time to look at more applications than I did. On the other hand, if the fund had significantly more funding, that would seem to provide a stronger mandate for trying things out and taking risks, so maybe that would have inclined me to spend less time evaluating each grant and use some money for active grantmaking, or to fund one or two of the grants that I turned down. I also expect to be less time-constrained in future because we won't be doing an entire quarter's grants in one round, and because there will be less 'getting up to speed'.
2. Probably most of these are some bottleneck, and also they interact:
- I had pretty limited capacity this round, and hope to have more in future. Some of that was also to do with not knowing much about some particular space and the plausible interventions in that space, so was a knowledge constraint. Some was to do with finding the most ... (read more)
Re 1: I don't think I would have granted more
Re 2: Mostly "good applicants with good proposals for implementing good project ideas" and "grantmaker capacity to solicit or generate new project ideas", where the main bottleneck on the second of those isn't really generating the basic idea but coming up with a more detailed proposal and figuring out who to pitch on it etc.
Re 3: I think I would be happy to evaluate more grant applications and have a correspondingly higher bar. I don't think that low quality applications make my life as a grantmaker much worse; if you're reading this, please submit your EAIF application rather than worry that it is not worth our time to evaluate.
Re 4: It varies. Mostly it isn't that the applicant lacks a specific skill.
Re 5: There are a bunch of things that have to align in order for someone to make a good proposal. There has to be a good project idea, and there has to be someone who would be able to make that work, and they have to know about the idea and apply for funding for it, and they need access to whatever other resources they need. Many of these steps can fail. Eg probably there are people who I'd love to fund to do a particular project, ... (read more)
What % of your grants (either grantee- or $-weighted, but preferably specify which denominator you're using) do you expect to be net negative to the world?
A heuristic I have for being less risk-averse is
Obviously this isn't true for everything (eg a world without any existential catastrophes seems like a world that has its priorities right), but I think it's overall a good heuristic, as illustrated by Scott Aaronson's Umeshisms and Mitchell and Webb's "No One Drowned" episode.
My knee-jerk reaction is: If "net negative" means "ex-post counterfactual impact anywhere below zero, but including close-to-zero cases" then it's close to 50% of grantees. Important here is that "impact" means "total impact on the universe as evaluated by some omniscient observer". I think it's much less likely that funded projects are net negative by the light of their own proxy goals or by any criterion we could evaluate in 20 years (assuming no AGI-powered omniscience or similar by then).
(I still think that the total value of the grantee portfolio would be significantly positive b/c I'd expect the absolute values to be systematically higher for positive than for negative grants.)
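To illustrate how close to 50% of grants can be ex-post net negative while the portfolio remains clearly positive in expectation, here is a toy simulation; all parameters (the exponential tails, the 10x scale difference) are my own made-up choices, not the fund's numbers:

```python
import random

random.seed(0)

# Toy model, not the fund's actual numbers: roughly half of grant outcomes
# fall below zero, but the positive tail has ~10x the scale of the negative one.
def simulate_grant():
    if random.random() < 0.5:
        return -random.expovariate(1.0)       # net-negative outcome, modest size
    return random.expovariate(1.0 / 10.0)     # net-positive outcome, ~10x larger

outcomes = [simulate_grant() for _ in range(100_000)]
share_negative = sum(o < 0 for o in outcomes) / len(outcomes)
mean_impact = sum(outcomes) / len(outcomes)

print(f"share ex-post net negative: {share_negative:.2f}")  # ~0.50
print(f"mean impact per grant: {mean_impact:.2f}")          # ~4.5 (0.5*10 - 0.5*1)
```

The point is just the asymmetry in magnitudes: half the grants landing below zero is compatible with a strongly positive expected value, as long as the positive outcomes are systematically larger.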
This is just a general view I have. It's not specific to EA Funds, or the grants this round. It applies to basically any action. That view is somewhat considered but I think also at least somewhat controversial. I have discussed it a bit but not a lot with others, so I wouldn't be very surprised if someone replied to this comment saying "but this can't be right because of X", and then I'd be like "oh ok, I think you're right, the close-to-50% figure now seems massively off to me".
--
If "net negative" mea... (read more)
If I know an organisation is applying to the EAIF, and have an inside view that the org is important, how valuable is donating $1000 to the org compared to donating $1000 to the EAIF? More generally, how should medium-sized but risk-neutral donors coordinate with the fund?
My very off-the-cuff thoughts are:
- If it seems like you are in an especially good position to assess that org, you should give to them directly. This could, e.g., be the case if you happened to know the org's founders especially well, or if you had rare subject-matter expertise relevant to assessing that org.
- If not, you should give to a donor lottery.
- If you win the donor lottery, you would probably benefit from coordinating with EA Funds. Literally giving the donor lottery winnings to EA Funds would be a solid baseline, but I would hope that many people can 'beat' that baseline, especially if they get the most valuable inputs from 1-10 person-hours of fund manager time.
- Generally, I doubt that it's good use of the donor's and fund managers' time if donors and fund managers coordinated on $1,000 donations (except in rare and obvious cases). For a donation of $10,000 some very quick coordination may sometimes be useful - especially if it goes to an early-stage organization. For a $100,000 donation, it starts looking "some coordination is helpful more likely than not" (though in many cases the EA Funds answer may still be "we don't really have anything to say, it seems best if you make
... (read more)

Recently I've been thinking about improving the EA-aligned research pipeline, and I'd be interested in the fund managers' thoughts on that. Some specific questions (feel free to just answer one or two, or to say things about the general topic but not these questions):
(No ne... (read more)
Re your 19 interventions, here are my quick takes on all of them
Yes I am in favor of this, and my day job is helping to run a new org that aspires to be a scalable EA-aligned research org.
I am in favor of this. I think one of the biggest bottlenecks here is finding people who are willing to mentor people in research. My current guess is that EAs who work as researchers should be more willing to mentor people in research, e.g. by mentoring people for an hour or two a week on projects that the mentor finds inside-view interesting (and therefore will be actually bought into helping with). I think that in situations like this, it's very helpful for the mentor to be judged, as Andrew Grove suggests, by the output of their organization plus the output of neighboring organizations under their influence. That is, one of their key goals with their research interns should be having the interns do things that the mentor actually thinks are useful. I think that not having this goal makes it much more tempting for the mentors to kind of snooze on... (read more)
Re 1: I think that the funds can maybe disburse more money (though I'm a little more bearish on this than Jonas and Max, I think). But I don't feel very excited about increasing the amount of stuff we fund by lowering our bar; as I've said elsewhere on the AMA the limiting factor on a grant to me usually feels more like "is this grant so bad that it would damage things (including perhaps EA culture) in some way for me to make it" than "is this grant good enough to be worth the money".
I think that the funds' RFMF is only slightly real--I think that giving to the EAIF has some counterfactual impact but not very much, and the impact comes from slightly weird places. For example, I personally have access to EA funders who are basically always happy to fund things that I want them to fund. So being an EAIF fund manager doesn't really increase my ability to direct money at promising projects that I run across. (It's helpful to have the grant logistics people from CEA, though, which makes the EAIF grantmaking experience a bit nicer.) The advantages I get from being an EAIF fund manager are that EAIF seeks applications and so I get to make grants I wouldn't have otherwise known about, and ... (read more)
At first glance the 20% figure sounded about right to me. However, when thinking a bit more about it, I'm worried that (at least in my case) this is too anchored on imagining "business as usual, but with more total capital". I'm wondering if most of the expected value of an additional $100B - especially when controlled by a single donor who can flexibly deploy them - comes from 'crazy' and somewhat unlikely-to-pan-out options. I.e., things like:
(Tbc, I think most of these things would be kind of dumb or impossible as stated, and maybe a "realistic" additional donor wouldn't be open to such things. I'm just gesturing at the rough shape of things which I suspect might contain a lot of the expected value.)
I actually think this is surprisingly non-straightforward. Any estimate of the net present value of total longtermist $$ will have considerable uncertainty because it's a combination of several things, many of which are highly uncertain:
- How much longtermist $$ is there now?
- This is the least uncertain one. It's not super straightforward and requires nonpublic knowledge about the wealth and goals of some large individual donors, but I'd be surprised if my estimate on this was off by 10x.
- What will the financial returns on current longtermist $$ be before they're being spent?
- Over long timescales, for some of that capital, this might be 'only' as volatile as the stock market or some other 'broad' index.
- But for some share of that capital (as well as on shorter time scale) this will be absurdly volatile. Cf. the recent fortunes some EAs have made in crypto.
- How much new longtermist $$ will come in at which times in the future?
- This seems highly uncertain because it's probably very heavy-tailed. E.g., there may well be a single source that increases total capital by 2x or 10x. Naturally, predicting the timing of such a single event will be quite uncertain on a time scale of years or even dec
... (read more)

FYI, someone I know is interested in applying to the EAIF, and I told them about this post, and after reading it they replied "btw the Q&A responses at the EAIF were SUPER useful!"
I mention this as one small data point to help the EAIF decide whether it's worth doing such Ask Us Anythings (AUAs?) in future and how much time to spend on them. By extension, it also seems like (even weaker) evidence regarding how useful detailed grant writeups are.
Some related questions with slightly different framings:
Some key uncertainties for me are:
- What products and clusters of ideas work as 'stepping stones' or 'gateways' toward (full-blown) EA [or similarly 'impactful' mindsets]?
- By this I roughly mean: for various products X (e.g., a website providing charity evaluations, or a book, or ...), how does the unconditional probability P(A takes highly valuable EA-ish actions within their next few years) compare to the conditional probability P(A takes highly valuable EA-ish actions within their next few years | A now encounters X)?
- I weakly suspect that me having different views on this than other fund managers was perhaps the largest source of significant disagreements with others.
- It tentatively seems to me that I'm unusually optimistic about the range of products that work as stepping stones in this sense. That is, I worry less if products X are extremely high-quality or accurate in all respects, or agree with typical EA views or motivations in all respects. Instead, I'm more excited about increasing the reach of a wider range of products X that meet a high but different bar of roughly 'taking the goal of effectively improving the world seriously by making a sincere effort to improve on m
... (read more)

The Long-Term Future Fund put together a doc on "How does the Long-Term Future Fund choose what grants to make?" How, if at all, does the EAIF's process for choosing what grants to make differ from that? Do you have, or plan to make, a similar outline of your decision process?
We recently transferred a lot of the 'best practices' that each fund (especially the LTFF) discovered to all the other funds, and as a result, I think it's very similar and there are at most minor differences at this point.
As a different phrasing of Michael's question on forecasting, do EAIF grantmakers have implicit distributions of possible outcomes in their minds when making a grant, either a) in general, or b) for specific grants?
If so, what shape do those distributions (usually) take? (An example of what I mean is "~log-normal minus a constant" or "90% of the time, ~0; 10% of the time, ~power law".)
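As a sketch of the two shapes named above (the parameters are my own arbitrary picks, purely to show what such outcome distributions look like, not anything the fund managers endorse):

```python
import random

random.seed(1)

def lognormal_minus_constant():
    # Mostly modest positive outcomes with a long right tail, and some mass
    # below zero once a fixed offset (e.g. the grant's cost) is subtracted.
    return random.lognormvariate(mu=0.0, sigma=1.0) - 1.0

def mostly_zero_power_law():
    # 90% of the time the grant achieves ~nothing; 10% of the time the outcome
    # is drawn from a heavy-tailed Pareto distribution.
    if random.random() < 0.9:
        return 0.0
    return random.paretovariate(alpha=1.5)

samples_a = [lognormal_minus_constant() for _ in range(10_000)]
samples_b = [mostly_zero_power_law() for _ in range(10_000)]

share_a_below_zero = sum(s < 0 for s in samples_a) / len(samples_a)
share_b_at_zero = sum(s == 0.0 for s in samples_b) / len(samples_b)

print(f"log-normal minus constant, share below zero: {share_a_below_zero:.2f}")  # ~0.50
print(f"mostly-zero power law, share at zero: {share_b_at_zero:.2f}")            # ~0.90
```

The two shapes imply quite different evaluation strategies: under the second, almost all of a portfolio's expected value sits in the rare tail outcomes.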
If not, are your approaches usually more quantitative (eg explicit cost-effectiveness models) or more qualitative/intuitive (eg more heuristic-based and verbal-ar... (read more)
Update: Max Daniel is now the EA Infrastructure Fund's chairperson. See here.
Have you considered providing small pools of money to people who express potential interest in trying out grantmaking and who you have some reason to believe might be good at it? This could be people the fund managers already know well, people who narrowly missed out on being appointed as full fund managers, or people who go through a short application process for these small pools specifically.
Potential benefits:
- That could directly increase the diversity of perspectives represented in total in "EA infrastructure" funding decisions
- That could help wi
... (read more)

At one point an EA fund manager told me something like, "the infrastructure fund refuses to support anything involving rationality/rationalists as a policy." Did a policy like this exist? Does it still?
In the Animal Welfare Fund AMA, I asked:
... (read more)

My take on this (others at the EAIF may disagree and may convince me otherwise):
I think EA Funds should be spending less time on detailed reports, as they're not read by that many people. A main benefit is that people improve their thinking by reading them (being able to see very concrete practical decisions and how they were reached seems helpful for improving one's judgment), but there are many such reports already at this point, such that writing further ones doesn't help that much – readers can simply go back to past reports and read those instead. I think EA Funds should produce such detailed reports every 1-2 years (especially when new fund managers come on board, so interested donors can get a sense of their thinking), and otherwise focus more on active grantmaking.
In addition, I think it would make sense for us to publish reports on whichever topic seems most important to us to communicate about – perhaps an intervention report, perhaps an important but underappreciated consideration, or a cause area. I think this should probably happen on an ad-hoc basis.
re 1: I expect to write similarly detailed writeups in future.
re 2: I think that would take a bunch more of my time and not clearly be worth it, so it seems unlikely that I'll do it by default. (Someone could try to pay me at a high rate to write longer grant reports, if they thought that this was much more valuable than I did.)
re 3: I agree with everyone that there are many pros of writing more detailed grant reports (and these pros are a lot of why I am fine with writing grant reports as long as the ones I wrote). By far the biggest con is that it takes more time. The secondary con is that if I wrote more detailed grant reports, I'd have to be a bit clearer about the advantages and disadvantages of the grants we made, and this would involve me having to be clearer about kind of awkward things (like my detailed thoughts on how promising person X is vs person Y); this would be a pain, because I'd have to try hard to write these sentences in inoffensive ways, which is a lot more time consuming and less fun.
re 4: Yes I think this is a good idea, and I tried to do that a little bit in my writeup about Youtubers; I think I might do it more in future.
Speaking for myself, I'm interested in increasing the detail in my write-ups a little over the medium term (perhaps making them typically more the length of the write-up for Stefan Schubert). I doubt I'll go all the way to making them as comprehensive as Max's.
Pros:
Cons:
I expect to try to ... (read more)
- What processes do you have for monitoring the outcome/impact of grants?
- Relatedly, do the EAIF fund managers make forecasts about potential outcomes of grants?
- I saw and appreciated that Ben Kuhn made a forecast related to the Giving Green grant.
- I'm interested in whether other fund managers are making such forecasts and just not sharing them in the writeup or are just not making them - both of which are potentially reasonable options.
- And/or do you write down in advance what sort of proxies you'd want to see from this grant after x amount of time?
- E.g.,
... (read more)