
Hi everyone!

Managers of the EA Infrastructure Fund will be available for an Ask Me Anything session. We'll start answering questions on Friday, June 4th, though some of us will only be able to answer questions the week after. Nevertheless, if you would like to make sure that all fund managers can consider your question, you might want to post it by early Friday morning, UK time.

What is the EA Infrastructure Fund?

The EAIF is one of the four EA Funds. While the other three Funds support direct work on various causes, this Fund supports work that could multiply the impact of direct work, including projects that provide intellectual infrastructure for the effective altruism community, run events, disseminate information, or fundraise for effective charities. 

Who are the fund managers, and why might you want to ask them questions?

The fund managers are Max Daniel, Michelle Hutchinson, and Buck Shlegeris. In addition, EA Funds Executive Director Jonas Vollmer is temporarily taking on chairperson duties, advising, and voting consultatively on grants. Ben Kuhn was a guest manager in our last grant round. They will all be available for questions, though some may have spotty availability and might post their answers as they have time throughout next week.

One particular reason why you might want to ask us questions is that we are all new in these roles: All fund managers of the EAIF have recently changed, and this was our first grant round. 

What happened in our most recent grant round?

We have made 26 grants totalling about $1.2 million. They include:

  • Two grants totalling $139,200 to Emma Abele, James Aung, Bella Forristal, and Henry Sleight. They will work together to identify and implement new ways to support EA university groups – e.g., through high-quality introductory talks about EA and creating other content for workshops and events. University groups have historically been one of the most important sources of highly engaged EA community members, and we believe there is significant untapped potential for further growth. We are also excited about the team, based significantly on their track record – e.g., James and Bella previously led two of the most successful university groups globally.
  • $41,868 to Zak Ulhaq to develop and implement workshops aimed at helping highly talented teenagers apply EA concepts and quantitative reasoning to their lives. We are excited about this grant because we generally think that educating pre-university audiences about EA-related ideas and concepts could be highly valuable; e.g., we’re aware of (unpublished) survey data indicating that in a large sample of highly engaged community members who learned about EA in the last few years, about ¼ had first heard of EA when they were 18 or younger. At the same time, this space seems underexplored. Projects that are mindful of the risks involved in engaging younger audiences therefore have a high value of information – if successful, they could pave the way for many more projects of this type. We think that Zak is a good fit for efforts in this space because he has a strong technical background and experience with both teaching and EA community building.
  • $5,000 to the Czech Association for Effective Altruism to give away EA-related books to people with strong results in Czech STEM competitions, AI classes, and similar. We believe that this is a highly cost-effective way to engage a high-value audience; long-form content allows for deep understanding of important ideas, and surveys typically find books have helped many people become involved with EA (e.g., in the 2020 EA Survey, more than ⅕ of respondents said a book was important for getting them more involved).
  • $248,300 to Rethink Priorities to allow Rethink to take on nine research interns (7 FTE) across various EA causes, plus support for further EA movement strategy research. We have been impressed with Rethink’s demonstrated ability to successfully grow their team while maintaining a constant stream of high-quality outputs, and think this puts them in a good position to provide growth opportunities for junior researchers. They also have a long history of doing empirical research relevant to movement strategy (e.g., the EA survey), and we are excited about their plans to build upon this track record by running additional surveys illuminating how various audiences think of EA and how responsive they are to EA messaging.

For more detail, see our payout report. It covers all grants from this round and provides more detail on our reasoning behind some of them.

The application deadline for our next grant round will be the 13th of June. After this round is wrapped up, we plan to accept rolling applications.

Ask any questions you like; we'll respond to as many as we can. 

Comments (117)

A question for the fund managers: When the EAIF funds a project, roughly how should credit be allocated between the different parties involved, where those parties are:

  • The donors to the fund
  • The grantmakers
  • The rest of the EAIF infrastructure (eg Jonas running everything, the CEA ops team handling ops logistics)
  • The grantee

Presumably this differs a lot between grants; I'd be interested in some typical figures.

This question is important because you need a sense of these numbers in order to make decisions about which of these parties you should try to be. Eg if the donors get 90% of the credit, then EtG looks 9x better than if they get 10%.

 

(I'll provide my own answer later.)

Making up some random numbers:

  • The donors to the fund – 8%
  • The grantmakers – 10%
  • The rest of the EAIF infrastructure (eg Jonas running everything, the CEA ops team handling ops logistics) – 7%
  • The grantee – 75%

This is for a typical grant where someone applies to the fund with a reasonably promising project on their own and the EAIF gives them some quick advice and feedback. For a case of strong active grantmaking, I might say something more like 8% / 30% / 12% / 50%.

This is based on the reasoning that we're quite constrained by promising applications and have a lot of funding available.
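To make the arithmetic behind these hypothetical splits concrete, here is a minimal sketch (purely illustrative; the shares are the made-up numbers above, not official EAIF figures) of how credit shares feed into the kind of comparison Linch raises, e.g. earning to give looking 0.9/0.1 = 9x better if donors get 90% rather than 10% of the credit:

```python
# Illustrative only: credit shares are the made-up numbers from this thread,
# not official EAIF figures.

def credit_adjusted_impact(total_impact, share):
    """Impact attributed to one party, given its credit share."""
    return total_impact * share

# Jonas's hypothetical split for a typical grant...
typical = {"donors": 0.08, "grantmakers": 0.10, "infrastructure": 0.07, "grantee": 0.75}
# ...and for a strong active-grantmaking case.
active = {"donors": 0.08, "grantmakers": 0.30, "infrastructure": 0.12, "grantee": 0.50}

total_impact = 100  # arbitrary units for one grant
for name, split in [("typical", typical), ("active grantmaking", active)]:
    attributed = {k: credit_adjusted_impact(total_impact, v) for k, v in split.items()}
    print(name, attributed)

# Linch's point: if donors got 90% rather than 10% of the credit,
# earning to give would look 0.9 / 0.1 = 9x better, all else equal.
print(0.9 / 0.1)  # -> 9.0
```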

4
Max_Daniel
I think my off-the-cuff numbers would be roughly similar to Jonas's, but mostly I just feel like I don't know how to think about this. I would probably need to spend 1 to 10 hours reviewing relevant theoretical concepts before being comfortable giving numbers that others might base decisions on.
4
Michelle_Hutchinson
+1
5
Jonas V
Here's another comment that goes into this a bit.

(I'd be very interested in your answer if you have one btw.)

3
Linch
(No need for the EAIF folks to respond; I think I would also find it helpful to get comments from other folks.) I'm curious about a set of related questions probing this at a more precise level of granularity. For example, suppose for the sake of argument that the RP internship resulted in better career outcomes than the interns counterfactually would have had.* For the difference in impact between the internship and the next-best option, what fraction of credit should be allocated to:

  • The donors to the fund
  • The grantmakers
  • The rest of the EAIF infrastructure
  • RP for selecting interns and thereby providing a signaling mechanism, either to the interns themselves or for future jobs
  • RP for managing/training/aiding interns to hopefully excel
  • The work of the interns themselves

I'm interested in whether the ratio between the first three bullet points has changed (for example, maybe with more $s per grant, donor $s are relatively less important and the grantmaker effort/$ ratio is lower). I'm also interested in the appropriate credit assignment (breaking down all of Jonas's 75%!) across the last three bullet points. For example, if most people see the value of RP's internship program to the interns as coming primarily via RP's selection methods, then it might make sense to invest more management/researcher time into designing better pre-internship work trials. I'm also interested in even more granular takes, but perhaps this is boring to other people. (I work for RP. I do not speak for the org.)

*(for reasons like: a) it sped up their networking, b) tangible outputs from the RP internship allowed them to counterfactually get jobs where they had more impact, c) it was a faster test for fit and made the interns correctly choose not to go into research, saving time, d) they learned actually valuable skills that made their career trajectory go more smoothly, etc.)

Would the EAIF be interested in a) post hoc funding of previous salary/other expenses or b) impact certificates that account for risk taken? 


Some context: When I was thinking of running SF/Bay Area EA full-time*, one thing that was fairly annoying for me was that funders (correctly) were uninterested in funding me until there was demonstrated success/impact, or at least decent proxies for such. This intuition was correct; however, from my perspective the risk allocation seemed asymmetric. If I did a poor job, then I would eat all the costs. If I did a phenomenally good job, the best I could hope for (from a funding perspective) was a promise of continued funding for the future and maybe back payments for past work.

In the for-profit world, if you disagree with the judgement of funders, press on, and later turn out to be right, you get a greater share of the equity etc.  Nothing equivalent seemed to be true within EA's credit allocation.

It seems like if you disagree with the judgment of funders, the best you can hope to do is break even. Of course, a) I read not being funded as some signal that people didn't think me/my project was sufficiently promising and b) maybe some fun... (read more)

8
Buck
I would personally be pretty down for funding reimbursements for past expenses.
4
Max_Daniel
I haven't thought a ton about the implications of this, but my initial reaction also is to generally be open to this. So if you're reading this and are wondering if it could be worth it to submit an application for funding for past expenses, then I think the answer is we'd at least consider it and so potentially yes. If you're reading this and it really matters to you what the EAIF's policy on this is going forward (e.g., if it's decision-relevant for some project you might start soon), you might want to check with me before going ahead. I'm not sure I'll be able to say anything more definitive, but it's at least possible. And to be clear, so far all that we have are the personal views of two EAIF managers not a considered opinion or policy of all fund managers or the fund as a whole or anything like that.
4
Habryka
I would also be in favor of the LTFF doing this.
2
Linch
That's great to hear! But to be clear, not for risk adjustment? Or are you just not sure on that point? 
2
Buck
I am not sure. I think it’s pretty likely I would want to fund after risk adjustment. I think that if you are considering trying to get funded this way, you should consider reaching out to me first.
6
Jonas V
I'm also in favor of EA Funds doing generous back payments for successful projects. In general, I feel interested in setting up prize programs at EA Funds (though it's not a top priority). One issue is that it's harder to demonstrate to regulators that back payments serve a charitable purpose. However, I'm confident that we can find workarounds for that.

Thanks for doing this AMA!

In the recent payout report, Max Daniel wrote:

My most important uncertainty for many decisions was where the ‘minimum absolute bar’ for any grant should be. I found this somewhat surprising.

Put differently, I can imagine a ‘reasonable’ fund strategy based on which we would have at least a few more grants; and I can imagine a ‘reasonable’ fund strategy based on which we would have made significantly fewer grants this round (perhaps below 5 grants between all fund managers).

This also seems to me like quite an important issue. It seems reminiscent of Open Phil's idea of making grants "when they seem better than our "last dollar" (more discussion of the "last dollar" concept here), and [saving] the money instead when they don't".

Could you (any fund managers, including but not limited to Max) say more about how you currently think about this? Subquestions include:

  • What do you currently see as the "minimum absolute bar"?
  • How important do you think that bar is to your grantmaking decisions?
  • What factors affect your thinking on these questions? How do you approach these questions?

I feel very unsure about this. I don't think my position on this question is very well thought through.

Most of the time, the reason I don't want to make a grant doesn't feel like "this isn't worth the money", it feels like "making this grant would be costly for some other reason". For example, when someone applies for a salary to spend some time researching some question which I don't think they'd be very good at researching, I usually don't want to fund them, but this is mostly because I think it's unhealthy in various ways for EA to fund people to flail around unsuccessfully rather than because I think that if you multiply the probability of the research panning out by the value of the research, you get an expected amount of good that is worse than longtermism's last dollar.

I think this question feels less important to me because of the fact that the grants it affects are marginal anyway. I think that more than half of the impact I have via my EAIF grantmaking is through the top 25% of the grants I make.  And I am able to spend more time on making those best grants go better, by working on active grantmaking or by advising grantees in various ways. And coming up with a more ... (read more)

2
Linch
Am I correct in understanding that this is true for your beliefs about ex ante rather than ex post impact? (in other words, that 1/4 of grants you pre-identified as top-25% will end up accounting for more than 50% of your positive impact)  If so, is this a claim about only the positive impact of the grants you make, or also about the absolute value of all grants you make?  See related question.
2
Buck
This is indeed my belief about ex ante impact. Thanks for the clarification.

Speaking just for myself: I don’t think I could currently define a meaningful ‘minimum absolute bar’. Having said that, the standard most salient to me is often ‘this money could have gone to anti-malaria bednets to save lives’. I think (at least right now) it’s not going to be that useful to think of EAIF as a cohesive whole with a specific bar, let alone explicit criteria for funding. A better model is a cluster of people with different understandings of ways we could be improving the world which are continuously updating, trying to figure out where we think money will do the most good and whether we’ll find better or worse opportunities in the future.

Here are a couple of things pushing me to have a low-ish bar for funding: 

  • I think EA currently has substantially more money than it has had in the past, but hasn’t progressed as fast in figuring out how to turn that into improving the world. That makes me inclined to fund things and see how they go.
  • As a new committee, it seems pretty good to fund some things, make predictions, and see how they pan out. 
  • I’d prefer EA to be growing faster than it currently is, so funding projects now rather than saving the money to try to fi
... (read more)

Some further things pushing me towards lowering my bar:

  • It seems to me that it has proven pretty hard to convert money into EA movement growth and infrastructure improvements. This means that when we do encounter such an opportunity, we should most likely take it, even if it seems expensive or unlikely to succeed.
  • EA has a really large amount of money available (literally billions). Some EAs doing direct work could earn >$1,000 per hour if they pursued earning to give, but it's generally agreed that direct work seems more impactful for them. Our common intuitions for spending money don't hold anymore – e.g., a discussion about how to spend $100,000 should probably receive roughly as much time and attention as a discussion about how to spend 2.5 weeks (100 hours) of senior staff time (see the sketch after this list). This means that I don't want to think very long about whether to make a grant. Instead, I want to spend more time thinking about how to help ensure that the project will actually be successful.
  • In cases where a grant might be too weird for a broad range of donors, we can always refer them to a private funder. So I try to think about whether something should be funded or not, and ignore the donor perception issue. At a later point, I can still ask myself 'should this be funded by the EAIF or a large aligned donor?'
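A rough illustration of the money-vs-time exchange rate implied above (a sketch under the assumption, taken from the comment, that senior EA staff time is valued at roughly $1,000/hour; not an official policy):

```python
# Rough illustration of the money-vs-time exchange rate mentioned above.
# Assumption (from the comment): senior EA staff time is worth ~$1,000/hour.

VALUE_PER_SENIOR_HOUR = 1_000  # USD, assumed
HOURS_PER_WEEK = 40

def equivalent_senior_weeks(grant_size_usd):
    """Weeks of senior staff time a grant is 'worth' under the assumption."""
    hours = grant_size_usd / VALUE_PER_SENIOR_HOUR
    return hours / HOURS_PER_WEEK

# A $100,000 grant corresponds to 100 hours, i.e. 2.5 weeks of senior time,
# so deliberating about it warrants a comparable amount of attention.
print(equivalent_senior_weeks(100_000))  # -> 2.5
```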

Some further things increasing my bar:

  • If we routinely fund mediocre work, there's little real incentive for grantseekers to strive to produce truly outstanding work.

Basically everything Jonas and Michelle have said on this sounds right to me as well.

Maybe a minor difference:

  • I certainly agree that, in general, donor preferences are very important for us to pay attention to.
  • However, I think the "bar" implied by Michelle's "important for all the donations to be at least somewhat explicable to the majority of its donors" is slightly too high.
  • I instead think that it's important that a clear majority of donors endorses our overall decision procedure. [Or, if they don't, then I think we should be aware that we're probably going to lose those donations.] I think this would ideally be compatible with only most donations being somewhat explicable (and a decent fraction, probably a majority, to be more strongly explicable). 
    • Though I would be interested to learn if EAIF donors disagreed with this.
  • (It's a bit unclear how to weigh both donors and grants here. I think the right weights to use in this context are somewhere in between uniform weights across grants/donors and weights proportional to grant/donation size, while being closer to the latter; the sketch below illustrates one way to interpolate.)
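A minimal sketch of what such in-between weights could look like (the interpolation parameter and example donation sizes are mine, purely illustrative):

```python
# Purely illustrative: interpolate between uniform weights and
# size-proportional weights across donations (or grants).

def blended_weights(sizes, alpha=0.75):
    """alpha=0 -> uniform weights; alpha=1 -> proportional to size.
    An alpha closer to 1 reflects 'closer to the latter' above."""
    n = len(sizes)
    total = sum(sizes)
    return [(1 - alpha) / n + alpha * s / total for s in sizes]

donation_sizes = [100, 1_000, 10_000, 100_000]  # hypothetical donors
print(blended_weights(donation_sizes))          # weights sum to 1.0
```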
4
Ben_West🔸
I notice that the listed grants seem substantially below $1,000/hour; e.g., Rethink getting $250,000 for seven FTEs implies ~$35,000/FTE or roughly $18/hour.

  • Is this because you aren't getting those senior people applying? Or are there other constraints?
  • (Maybe this is off by a factor of two if you meant that they are FTE but only for half the year, etc.)

I notice that the listed grants seem substantially below $1,000/hour; e.g., Rethink getting $250,000 for seven FTEs implies ~$35,000/FTE or roughly $18/hour.

 

This rests on two misconceptions:

(1) We are hiring seven interns, but each will only be there for three months; I believe it is 1.8 FTE collectively.

(2) The grant is not being entirely allocated to intern compensation.

Interns at Rethink Priorities currently earn $23-25/hr. Researchers hired on a permanent basis earn more than that, currently $63K-85K/yr (prorated for part-time work).
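To spell out the arithmetic behind this correction, here's a quick sketch (the 2,080 hours per FTE-year figure and the $24/hr midpoint are my assumptions, not from the thread; the split of the remainder is only as described above):

```python
# Back-of-the-envelope arithmetic for the two readings of the RP grant.
# The 2,080 hours/FTE-year figure is a standard assumption, not from the thread.

GRANT = 248_300          # USD, from the payout summary
HOURS_PER_FTE_YEAR = 2_080

# Ben's reading: ~$250k spread over 7 FTE-years.
naive_per_fte = 250_000 / 7
naive_hourly = naive_per_fte / 2_000          # implied ~2,000 hr/year
print(round(naive_per_fte), round(naive_hourly))   # ~35,714 and ~18 $/hr

# Corrected reading: 7 interns x ~3 months ≈ 1.8 FTE, paid $23-25/hr.
intern_fte = 1.8
intern_hourly = 24                             # assumed midpoint of $23-25/hr
intern_compensation = intern_fte * HOURS_PER_FTE_YEAR * intern_hourly
print(round(intern_compensation))              # ~$90k
print(round(GRANT - intern_compensation))      # remainder covers other grant
                                               # components (e.g. movement
                                               # strategy research)
```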

8
Jonas V
The main reason is that the people are willing to work for a substantially lower amount than what they could make when earning to give. E.g., someone who might be able to make $5 million per year in quant trading or tech entrepreneurship might decide to ask for a salary of $80k/y when working at an EA organization. It would seem really weird for that person to ask for a $5 million / year salary, especially given that they'd most likely want to donate most of that anyway.
6
Ben_West🔸
Cool, for what it's worth my experience recruiting for a couple EA organizations is that labor supply is elastic even above (say) $100k/year, and your comments seem to indicate that you would be happy to fund at least some people at that level. So I remain kind of confused why the grant amounts are so small.
5
Jonas V
If you have to pay fairly (i.e., if you pay one employee $200k/y, you have to pay everyone else with a similar skill level a similar amount), the marginal cost of an employee who earns $200k/y can be >$1m/y. That may still be worth it, but less clearly so. FWIW, I also don't really share the experience that labor supply is elastic above $100k/y, at least when taking into account whether staff have a good attitude, fit into the culture of the organization, etc. I'd be keen to hear more about that.
6
Jonas V
Because the EAIF is aiming to grow the overall resources and capacity for improving the world, one model is simply "is the growth rate greater than zero?" Some of the projects we don't fund look to me like they have a negative growth rate (i.e., in expectation, they won't achieve much, and the money and time spent on them will be wasted), and these should obviously not be funded. Beyond that, I don't think it's easy to specify a 'minimum absolute bar'. Furthermore, one straightforward way to increase the EA community's resources is through financial investments, and any EA project should beat that bar in addition to returning more than it costs. (I don't think this matters much in practice, as we're hoping for growth rates much greater than typical in financial markets.)

What % of grants you almost funded do you expect to be net negative for the world, had they counterfactually been implemented? 

See paired question about grants you actually funded. 

[I'm going to adapt some questions from myself or other people from the recent Long-Term Future Fund and Animal Welfare Fund AMAs.]

  1. How much do you think you would've granted in this recent round if the total funding available to the IF had been ~$5M? ~$10M? ~$20M?
  2. What do you think is your main bottleneck to giving more? Some possibilities that come to mind:
    • Available funding
    • Good applicants with good proposals for implementing good project ideas
      • And to the extent that this is your bottleneck, do yo
    • Grantmaker capacity to evaluate applications
      • Maybe this should capture both whether they have time and whether they have techniques or abilities to evaluate project ideas whose expected value seems particularly hard to assess
    • Grantmaker capacity to solicit or generate new project ideas
    • Fundamental, macrostrategic, basic, or crucial-considerations-like work that could aid in generating project ideas and evaluating applications 
      • E.g., it sounds like this would've been relevant to Max Daniel's views on the IIDM working group in the recent round
  3. To the extent that you're bottlenecked by the number of good applications or would be bottlenecked by that if funded more, is that because (or do you ex
... (read more)

Answering these thoroughly would be really tricky, but here are a few off-the-cuff thoughts: 

1. Tough to tell. My intuition is 'the same amount as I did', because I was happy with the amount I could grant to each of the recipients I granted to, and I didn't have time to look at more applications than I did. On the other hand, I could imagine that if the fund had significantly more funding, that would seem to provide a stronger mandate for trying things out and taking risks, so maybe that would have inclined me to spend less time evaluating each grant and use some money to do active grantmaking, or maybe it would have inclined me to fund one or two of the grants that I turned down. I also expect to be less time-constrained in future because we won't be doing an entire quarter's grants in one round, and because there will be less 'getting up to speed'.

2. Probably most of these are some bottleneck, and also they interact: 
- I had pretty limited capacity this round, and hope to have more in future. Some of that was also to do with not knowing much about some particular space and the plausible interventions in that space, so was a knowledge constraint. Some was to do with finding the most ... (read more)

2
Jonas V
(Just wanted to say that I agree with Michelle.)

Re 1: I don't think I would have granted more.

Re 2: Mostly "good applicants with good proposals for implementing good project ideas" and "grantmaker capacity to solicit or generate new project ideas", where the main bottleneck on the second of those isn't really generating the basic idea but coming up with a more detailed proposal and figuring out who to pitch on it etc.

Re 3: I think I would be happy to evaluate more grant applications and have a correspondingly higher bar. I don't think that low quality applications make my life as a grantmaker much worse; if you're reading this, please submit your EAIF application rather than worry that it is not worth our time to evaluate. 

Re 4: It varies. Mostly it isn't that the applicant lacks a specific skill.

Re 5: There are a bunch of things that have to align in order for someone to make a good proposal. There has to be a good project idea, and there has to be someone who would be able to make that work, and they have to know about the idea and apply for funding for it, and they need access to whatever other resources they need. Many of these steps can fail. Eg probably there are people who I'd love to fund to do a particular project, ... (read more)

8
Max_Daniel
I can't think of any specific grant decision this round for which I think this would have made a difference. Maybe I would have spent more time thinking about how successful grantees might be able to utilize more money than they applied for, and on discussing this with grantees.

Overall, I think there might be a "paradoxical" effect that I may have spent less time evaluating grant applications, and therefore would have made fewer grants this round, if we had had much more total funding. This is because under this assumption, I would more strongly feel that we should frontload building the capacity to make more, and larger, higher-value grants in the future as opposed to optimizing the decisions on the grant applications we happened to get now. E.g., I might have spent more time on:

  • Generating leads for, and otherwise helping with, recruiting additional fund managers
  • Active grantmaking
  • 'Structural' improvements to the fund – e.g., improving our discussions and voting methods
6
Max_Daniel
On 2, I agree with Buck that the two key bottlenecks - especially if we weight grants by their expected impact - were "Good applicants with good proposals for implementing good project ideas" and "Grantmaker capacity to solicit or generate new project ideas". I think I've had a stronger sense than at least some other fund managers that "Grantmaker capacity to evaluate applications" was also a significant bottleneck, though I would rank it somewhat below the above two, and I think it tends to be a larger bottleneck for grants that are more 'marginal' anyway, which diminishes its impact-weighted importance. I'm still somewhat worried that our lack of capacity (both time and lack of some abilities) could in some cases lead to a "false negative" on a highly impactful grant, especially due to our current way of aggregating opinions between fund managers.
5
Max_Daniel
I think both of these are significant effects. I suspect I might be more worried than others about "good people applying less often than would be ideal", but not sure.
5
Max_Daniel
All of these have happened. I agree with Buck that "applicant lacks a highly specific skill" seems uncommon; I think the cases of "mismatch between the idea and the applicant" are broader/fuzzier. I don't have an immediate sense that any of them is particularly common.
5
Max_Daniel
Re 3, I'm not sure I understand the question and feel a bit confused about how to answer it directly, but I agree with Buck:
5
Max_Daniel
Hmm, I'm not sure I agree with this. Yes, if I had access to a working crystal ball that would have helped - but for realistic versions of 'knowing more about macrostrategy', I can't immediately think of anything that would have helped with evaluating the IIDM grant in particular. (There are other things that would have helped, but I don't think they have to do with macrostrategy, crucial considerations, etc.)
5
MichaelA🔸
This surprises me. Re-reading your writeup, I think my impression was based on the section "What is my perspective on 'improving institutions'?" I'd be interested to hear your take on how I might be misinterpreting that section or misinterpreting this new comment of yours. I'll first quote the section in full, for the sake of other readers:

It seems to me like things like "Fundamental, macrostrategic, basic, or crucial-considerations-like work" would be relevant to things like this (not just this specific grant application) in multiple ways:

  • I think the basic idea of differential technological development or differential progress is relevant here, and the development and dissemination of that idea was essentially macrostrategy research (though of course other versions of the idea predated EA-aligned macrostrategy research)
  • Further work to develop or disseminate this idea could presumably help evaluate future grant applications like this
  • This also seems like evidence that other versions of this kind of work could be useful for grantmaking decisions
  • Same goes for the basic concepts of crucial considerations, disentanglement research, and cluelessness (I personally don't think the latter is useful, but you seem to), which you link to in that report
  • In these cases, I think there's less useful work to be done further elaborating the concepts (maybe there would be for cluelessness, but currently I think we should instead replace that concept), but the basic terms and concepts seem to have improved our thinking, and macrostrategy-like work may find more such things
  • It seems it would also be useful and tractable to at least somewhat improve our understanding of which "intermediate goals" would be net positive (and which would be especially positive) and which institutions are more likely to advance or hinder those goals given various changes to their stated goals, decision-making procedures, etc.
  • Mapping the space of relevant actors and working
4
Max_Daniel
I think the expected value of the IIDM group's future activities, and thus the expected impact of a grant to them, is sensitive to how much relevant fundamental, macrostrategic, etc., kind of work they will have access to in the future. Given the nature of the activities proposed by the IIDM group, I don't think it would have helped me for the grant decision if I had known more about macrostrategy. It would have been different if they had proposed a more specific or "object-level" strategy, e.g., lobbying for a certain policy. I mean, it would have helped me somewhat, but I think it pales in importance compared to things like "having more first-hand experience in/with the kind of institutions the group hopes to improve", "more relevant knowledge about institutions, including theoretical frameworks for how to think about them", and "having seen more work by the group's leaders, or otherwise being better able to assess their abilities and potential".

[ETA: Maybe it's also useful to add that, on my inside view, free-floating macrostrategy research isn't that useful, certainly not for concrete decisions the IIDM group might face. This also applies to most of the things you suggest, which strike me as 'too high-level' and 'too shallow' to be that helpful, though I think some 'grunt work' like 'mapping out actors' would help a bit, albeit it's not what I typically think of when saying macrostrategy. Neither is 'object-level' work that ignores macrostrategic uncertainty useful. I think often the only thing that helps is to have people do the object-level work who are both excellent at that object-level work and have the kind of opaque "good judgment" that allows them to be appropriately responsive to macrostrategic considerations and reach reflective equilibrium between incentives suggested by proxies and high-level considerations around "how valuable is that proxy anyway?". Unfortunately, such people seem extremely rare. I also think (and here my view probably d
4
MichaelA🔸
Thanks, these are interesting perspectives.

I think to some extent there's just a miscommunication here, rather than a difference in views. I intended to put a lot of things in the "Fundamental, macrostrategic, basic, or crucial-considerations-like work" bucket - I mainly wanted to draw a distinction between (a) all research "upstream" of grantmaking, and (b) things like Available funding, Good applicants with good proposals for implementing good project ideas, Grantmaker capacity to evaluate applications, and Grantmaker capacity to solicit or generate new project ideas. E.g., I'd include "more relevant knowledge about institutions, including theoretical frameworks for how to think about them" in the bucket I was trying to gesture to. So not just e.g. Bostrom-style macrostrategy work. On reflection, I probably should've also put "intervention research" in there, and added as a sub-question "And do you think one of these types of research would be more useful for your grantmaking than the others?"

But then your "ETA" part is less focused on macrostrategy specifically, and there I think my current view does differ from yours (making yours interesting + thought-provoking).

I emailed CEA with some questions about the LTFF and EAIF, and Michael Aird (MichaelA on the forum) responded about the EAIF. He said that I could post his email here. Some of the questions overlap with the contents of this AMA (among other things), but I included everything. My questions are formatted as quotes, and the unquoted passages below were written by Michael.

Here are some things I've heard about LTFF and EAIF (please correct any misapprehensions):

You can apply for a grant anytime, and a decision will be made within a few weeks.

Basically correct. Though some decisions take longer, mainly for unusually complicated, risky, and/or large grants, or grants where the applicant decides in response to our questions that they need to revisit their plans and get back to us later. And many decisions are faster. 

The application process is meant to be low-effort, with the application requiring no more than a few hours' work. 

Basically correct, though bear in mind that that doesn't necessarily include the time spent actually doing the planning. We basically just don't want people to spend >2 hours on actually writing the application, but it'll often make sense to spend... (read more)

What % of your grants (either grantee- or $-weighted, but preferably specify which denominator you're using) do you expect to be net negative to the world?

A heuristic I have for being less risk-averse is 

If X (horrible thing) never happens, you spend too much resources on preventing X.

Obviously this isn't true for everything (eg a world without any existential catastrophes seems like a world that has its priorities right), but I think it's overall a good heuristic, as illustrated by Scott Aaronson's Umeshisms and Mitchell and Webb's "No One Drowned" episode. 

My knee-jerk reaction is: If "net negative" means "ex-post counterfactual impact anywhere below zero, but including close-to-zero cases" then it's close to 50% of grantees. Important here is that "impact" means "total impact on the universe as evaluated by some omniscient observer". I think it's much less likely that funded projects are net negative by the light of their own proxy goals or by any criterion we could evaluate in 20 years (assuming no AGI-powered omniscience or similar by then).

(I still think that the total value of the grantee portfolio would be significantly positive b/c I'd expect the absolute values to be systematically higher for positive than for negative grants.)
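A minimal simulation of the kind of portfolio Max describes (all distributional choices below are mine and purely illustrative): roughly half of grants can end up ex post net negative while the portfolio total stays clearly positive, provided the positive outcomes have systematically larger absolute values.

```python
# Purely illustrative simulation: ~50% of grants end up (slightly) net negative
# ex post, but positive outcomes are systematically larger in absolute value,
# so the portfolio total is still clearly positive.
import random

random.seed(0)

def simulate_portfolio(n_grants=26):
    impacts = []
    for _ in range(n_grants):
        if random.random() < 0.5:
            # net-negative outcome, typically close to zero
            impacts.append(-abs(random.gauss(0, 1)))
        else:
            # net-positive outcome with a heavier tail (lognormal)
            impacts.append(random.lognormvariate(1, 1))
    return impacts

impacts = simulate_portfolio()
share_negative = sum(i < 0 for i in impacts) / len(impacts)
print(round(share_negative, 2), round(sum(impacts), 1))  # ~0.5 negative, total > 0
```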

This is just a general view I have. It's not specific to EA Funds, or the grants this round. It applies to basically any action. That view is somewhat considered but I think also at least somewhat controversial. I have discussed it a bit but not a lot with others, so I wouldn't be very surprised if someone replied to this comment saying "but this can't be right because of X", and then I'd be like "oh ok, I think you're right, the close-to-50% figure now seems massively off to me".

--

If "net negative" mea... (read more)

2
Linch
Thanks a lot for this answer! After asking this, I realize I'm also interested in asking the same question about what ratio of grants you almost funded would be ex post net-negative.
3
Jonas V
This isn't what you asked, but out of all the applications that we receive (excluding desk rejections), 5-20% seem ex ante net-negative to me, in the sense that I expect someone giving funding to them to make the world worse. In general, worries about accidental harm do not play a major role in my decisions not to fund projects, and I don't think we're very risk-averse. Instead, a lot of rejections happen because I don't believe the project will have a major positive impact.
2
Linch
Are you including opportunity cost in the consideration of net harm?
2
Jonas V
I include the opportunity cost of the broader community (e.g., the project hires people from the community who'd otherwise be doing more impactful work), but not the opportunity cost of providing the funding. (This is what I meant to express with "someone giving funding to them", though I think it wasn't quite clear.)
2
Max_Daniel
As an aside, I think that's an excellent heuristic, and I worry that many EAs (including myself) haven't internalized it enough. (Though I also worry that pushing too much for it could lead to people failing to notice the exceptions where it doesn't apply.)
2
MichaelA🔸
[thinking/rambling aloud] I feel like an "ideal reasoner" or something should indeed have that heuristic, but I feel unsure whether boundedly rational people internalising it more, or having it advocated for to them more, would be net positive or net negative. (I feel close to 50/50 on this and haven't thought about it much; "unsure" doesn't mean "I suspect it'd probably be bad".) I think this intersects with concerns about naive consequentialism and (less so) potential downsides of using explicit probabilities. If I had to choose whether to make most of the world closer to naive consequentialism than it is now, and I can't instead choose sophisticated consequentialism, I'd probably do that. But I'm not sure for EA grantmakers. And of course sophisticated consequentialism seems better. Maybe there's a way we could pair this heuristic with some other heuristics or counter-examples such that the full package is quite useful. Or maybe adding more of this heuristic would already help "balance things out", since grantmakers may already be focusing somewhat too much on downside risk. I really don't know.
9
Linch
Hmm, I think this heuristic actually doesn't make sense for ideal (Bayesian) reasoners, since ideal reasoners can just multiply the EVs out for all actions and don't need weird approximations/heuristics.  I broadly think this heuristic makes sense in a loose way in situations where the downside risks are not disproportionately high. I'm not sure what you mean by "sophisticated consequentialism" here, but I guess I'd sort of expect sophisticated consequentialism (at least in situations where explicit EV calculations are less practical) to include a variant of this heuristic somewhere.
3
MichaelA🔸
I now think sophisticated consequentialism may not be what I really had in mind. Here's the text from the entry on naive consequentialism I linked to: I think maybe what I have in mind is actually "consequentialism that accounts appropriately for biases, model uncertainty, optimizer's curse, unilateralist's curse, etc." (This seems like a natural fit for the words sophisticated consequentialism, but it sounds like that's not what the term is meant to mean.)  I'd be much more comfortable with someone having your heuristic if they were aware of those reasons why your EV estimates (whether implicit or explicit, qualitative or quantitative) should often be quite uncertain and may be systematically biased towards too much optimism for whatever choice you're most excited about. (That's not the same as saying EV estimates are useless, just that they should often be adjusted in light of such considerations.)

If I know an organisation is applying to EAIF, and have an inside view that the org is important, how valuable is donating $1000 to the org compared to donating $1000 to EAIF? More generally, how should medium sized but risk-neutral donors coordinate with the fund?

My very off-the-cuff thoughts are:

  • If it seems like you are in an especially good position to assess that org, you should give to them directly. This could, e.g., be the case if you happened to know the org's founders especially well, or if you had rare subject-matter expertise relevant to assessing that org.
  • If not, you should give to a donor lottery.
  • If you win the donor lottery, you would probably benefit from coordinating with EA Funds. Literally giving the donor lottery winnings to EA Funds would be a solid baseline, but I would hope that many people can 'beat' that baseline, especially if they get the most valuable inputs from 1-10 person-hours of fund manager time.
  • Generally, I doubt that it's a good use of the donor's and fund managers' time if donors and fund managers coordinated on $1,000 donations (except in rare and obvious cases). For a donation of $10,000, some very quick coordination may sometimes be useful - especially if it goes to an early-stage organization. For a $100,000 donation, it starts looking like "some coordination is helpful more likely than not" (though in many cases the EA Funds answer may still be "we don't really have anything to say, it seems best if you make
... (read more)

Recently I've been thinking about improving the EA-aligned research pipeline, and I'd be interested in the fund managers' thoughts on that. Some specific questions (feel free to just answer one or two, or to say things about the general topic but not these questions):

  1. In What's wrong with the EA-aligned research pipeline?, I "briefly highlight[ed] some things that I (and I think many others) have observed or believe, which I think collectively demonstrate that the current processes by which new EA-aligned research and researchers are “produced” are at least somewhat insufficient, inefficient, and prone to error." Do those observations or beliefs ring true to you? Would you diagnose the "problem(s)" differently?
  2. More recently, I "briefly discuss[ed] 19 interventions that might improve [this] situation. I discuss[ed] them in very roughly descending order of how important, tractable, and neglected I think each intervention is, solely from the perspective of improving the EA-aligned research pipeline." Do you think any of those ideas seem especially great or terrible? Would your rank ordering be different from mine?
  3. Do you think there are promising intervention options I omitted?

(No ne... (read more)

Re your 19 interventions, here are my quick takes on all of them

Creating, scaling, and/or improving EA-aligned research orgs

Yes I am in favor of this, and my day job is helping to run a new org that aspires to be a scalable EA-aligned research org.

Creating, scaling, and/or improving EA-aligned research training programs

I am in favor of this. I think one of the biggest bottlenecks here is finding people who are willing to mentor people in research. My current guess is that EAs who work as researchers should be more willing to mentor people in research, eg by mentoring people for an hour or two a week on projects that the mentor finds inside-view interesting (and therefore will be actually bought in to helping with). I think that in situations like this, it's very helpful for the mentor to be judged, as Andrew Grove suggests, by the output of their organization + the output of neighboring organizations under their influence. That is, they should think of one of their key goals with their research interns as having the interns do things that they actually think are useful. I think that not having this goal makes it much more tempting for the mentors to kind of snooze on... (read more)

9
Max_Daniel
I would be enthusiastic about this. If you don't do it, I might try doing this myself at some point. I would guess the main challenge is to get sufficient inter-rater reliability; i.e., if different interviewers used this interview to interview the same person (or if different raters watched the same recorded interview), how similar would their ratings be?  I.e., I'm worried that the bottleneck might be something like "there are only very few people who are good at assessing other people" as opposed to "people typically use the wrong method to try to assess people".
5
MichaelA🔸
(FWIW, at first glance, I'd also be enthusiastic about one of you trying this.)
4
Linch
Sorry, minor confusion about this. By "top 25%," do you mean 75th percentile? Or are you encompassing the full range here?
3
MichaelA🔸
I'm pretty surprised by the strength of that reaction. Some followups:

  1. How do you square that with the EA Funds (a) funding things that would increase the amount/quality/impact of EA-aligned research(ers), and (b) indicating in some places (e.g. here) the funds have room for more funding?
    • Is it that they have room for more funding only for things other than supporting EA-aligned research(ers)?
    • Do you disagree that the funds have room for more funding?
  2. Do you think increasing available funding wouldn't help with any EA stuff, or do you just mean for increasing the amount/quality/impact of EA-aligned research(ers)?
  3. Do you disagree with the EAIF grants that were focused on causing more effective giving (e.g., through direct fundraising or through research on the psychology and promotion of effective giving)?

Re 1: I think that the funds can maybe disburse more money (though I'm a little more bearish on this than Jonas and Max, I think). But I don't feel very excited about increasing the amount of stuff we fund by lowering our bar; as I've said elsewhere on the AMA the limiting factor on a grant to me usually feels more like "is this grant so bad that it would damage things (including perhaps EA culture) in some way for me to make it" than "is this grant good enough to be worth the money".

I think that the funds' RFMF is only slightly real--I think that giving to the EAIF has some counterfactual impact but not very much, and the impact comes from slightly weird places. For example, I personally have access to EA funders who are basically always happy to fund things that I want them to fund. So being an EAIF fund manager doesn't really increase my ability to direct money at promising projects that I run across. (It's helpful to have the grant logistics people from CEA, though, which makes the EAIF grantmaking experience a bit nicer.) The advantages I get from being an EAIF fund manager are that EAIF seeks applications and so I get to make grants I wouldn't have otherwise known about, and ... (read more)

I think that if a new donor appeared and increased the amount of funding available to longtermism by $100B, this would maybe increase the total value of longtermist EA by 20%.

At first glance the 20% figure sounded about right to me. However, when thinking a bit more about it, I'm worried that (at least in my case) this is too anchored on imagining "business as usual, but with more total capital". I'm wondering if most of the expected value of an additional $100B - especially when controlled by a single donor who can flexibly deploy it - comes from 'crazy' and somewhat unlikely-to-pan-out options. I.e., things like:

  • Building an "EA city" somewhere
  • Buying a majority of shares of some AI company (or of relevant hardware companies)
  • Being able to spend tens of billions of $ on compute, at a time when few other actors are willing to do so
  • Buying the New York Times
  • Being among the first actors settling Mars

(Tbc, I think most of these things would be kind of dumb or impossible as stated, and maybe a "realistic" additional donor wouldn't be open to such things. I'm just gesturing at the rough shape of things which I suspect might contain a lot of the expected value.)

4
Buck
I think that "business as usual but with more total capital" leads to way less increased impact than 20%; I am taking into account the fact that we'd need to do crazy new types of spending. Incidentally, you can't buy the New York Times on public markets; you'd have to do a private deal with the family who runs it .
2
Max_Daniel
Hmm. Then I'm not sure I agree. When I think of prototypical example scenarios of "business as usual but with more total capital" I kind of agree that they seem less valuable than +20%. But on the other hand, I feel like if I tried to come up with some first-principles-based 'utility function' I'd be surprised if it had returns that diminish much more strongly than logarithmically. (That's at least my initial intuition - not sure I could justify it.) And if it was logarithmic, going from $10B to $100B should add about as much value as going from $1B to $10B, and I feel like the former adds clearly more than 20%. (I guess there is also the question of what exactly we're assuming. E.g., should the fact that this additional $100B donor appears also make me more optimistic about the growth and ceiling of total longtermist-aligned capital going forward? If not, i.e. if I should compare the additional $100B to the net present expected value of all longtermist capital that will ever appear, then I'm much more inclined to agree with "business as usual + this extra capital adds much less than 20%". In this latter case, getting the $100B now might simply compress the period of growth of longtermist capital from a few years or decades to a second, or something like that.)
2
Max_Daniel
OK, on a second thought I think this argument doesn't work because it's basically double-counting: the reason why returns might not diminish much faster than logarithmic may be precisely that new, 'crazy' opportunities become available.
7
Jonas V
Here's a toy model:

  • A production function roughly along the lines of utility = funding^0.2 × talent^0.6 (this has diminishing returns to funding × talent, but the returns diminish slowly)
  • A default assumption that longtermism will eventually end up with $30-$300B in funding; let's assume $100B

Increasing the funding from $100B to $200B would then increase utility by 15%.
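Plugging numbers into this toy model (a sketch of the stated calculation; the exponents are Jonas's, and the talent term is held fixed so it cancels out of the ratio):

```python
# Jonas's toy production function: utility = funding^0.2 * talent^0.6.
# Holding talent fixed, doubling funding multiplies utility by 2^0.2.

def utility(funding, talent):
    return funding ** 0.2 * talent ** 0.6

talent = 1.0  # held fixed; its absolute value cancels out of the ratio
ratio = utility(200e9, talent) / utility(100e9, talent)
print(ratio)  # ~1.149, i.e. roughly a 15% increase in utility
```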
9
Jonas V
Just wanted to flag briefly that I personally disagree with this:

  • I think that fundraising projects can be mildly helpful from a longtermist perspective if they are unusually good at directing the money really well (i.e., match or beat Open Phil's last dollar), and are truly increasing overall resources*. I think that there's a high chance that more financial resources won't be helpful at all, but some small chance that they will be, so the EV is still weakly positive.
  • I think that fundraising projects can be moderately helpful from a neartermist perspective if they are truly increasing overall resources*.
  • Some models/calculations that I've seen don't do a great job of modelling the overall ROI from fundraising. They need to take into account not just the financial cost but also the talent cost of the project (which should often be valued at rates vastly higher than are common in the private sector), the counterfactual donations / Shapley value (the fundraising organization often doesn't deserve 100% of the credit for the money raised – some of the credit goes to the donor!), and a ~10-15% annual discount rate (this is the return I expect for smart, low-risk financial investments).

I still somewhat share Buck's overall sentiment: I think fundraising runs the risk of being a bit of a distraction. I personally regret co-running a fundraising organization and writing a thesis paper about donation behavior. I'd rather have spent my time learning about AI policy (or, if I was a neartermist, I might say e.g. charter cities, growth diagnostics in development economics, NTD eradication programs, or factory farming in developing countries). I would love if EAs generally spent less time worrying about money and more about recruiting talent, improving the trajectory of the community, and solving the problems on the object level.

Overall, I want to continue funding good fundraising organizations.
9
Linch
I'm curious how much $s you and others think that longtermist EA has access to right now/will have access to in the near future. The 20% number seems like a noticeably weaker claim if longtermist EA currently has access to 100B than if we currently have access to 100M.

I actually think this is surprisingly non-straightforward. Any estimate of the net present value of total longtermist $$ will have considerable uncertainty because it's a combination of several things, many of which are highly uncertain (a toy sketch after the list below illustrates how these uncertainties compound):

  • How much longtermist $$ is there now?
    • This is the least uncertain one. It's not super straightforward and requires nonpublic knowledge about the wealth and goals of some large individual donors, but I'd be surprised if my estimate on this was off by 10x.
  • What will the financial returns on current longtermist $$ be before they're being spent?
    • Over long timescales, for some of that capital, this might be 'only' as volatile as the stock market or some other 'broad' index.
    • But for some share of that capital (as well as on shorter time scale) this will be absurdly volatile. Cf. the recent fortunes some EAs have made in crypto.
  • How much new longtermist $$ will come in at which times in the future?
    • This seems highly uncertain because it's probably very heavy-tailed. E.g., there may well be a single source that increases total capital by 2x or 10x. Naturally, predicting the timing of such a single event will be quite uncertain on a time scale of years or even dec
... (read more)
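To illustrate why such an estimate has wide error bars, here is a toy Monte Carlo combining the components listed above; every distribution and parameter below is a made-up placeholder for illustration, not an estimate anyone in this thread has endorsed.

```python
# Toy Monte Carlo for the net present value of total longtermist capital.
# Every distribution/parameter here is a placeholder, chosen only to show how
# uncertainty in each component compounds; none are endorsed estimates.
import random

random.seed(1)

def sample_npv_billions():
    current = random.lognormvariate(3.4, 0.5)   # current capital, ~$30B median
    growth = random.lognormvariate(0.5, 0.6)    # multiplicative investment returns
    inflow = random.paretovariate(1.5) * 5      # heavy-tailed future inflows ($B)
    return current * growth + inflow

samples = sorted(sample_npv_billions() for _ in range(10_000))
p10, p50, p90 = (samples[int(q * len(samples))] for q in (0.1, 0.5, 0.9))
print(round(p10), round(p50), round(p90))  # wide spread across percentiles
```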
4
MichaelA🔸
Interesting, thanks. Shouldn't your lower bound for the 50% interval be higher than for the 80% interval? Or is the second interval based on different assumptions, e.g. including/ruling out some AI stuff? (Not sure this is an important question, given how much uncertainty there is in these numbers anyway.)
8
Max_Daniel
If the intervals were centered - i.e., spanning the 10th to 90th and the 25th to 75th percentile, respectively - then it should be, yes. I could now claim that I wasn't giving centered intervals, but I think what is really going on is that my estimates are not diachronically consistent even if I make them within 1 minute of each other.
2
Max_Daniel
I also now think that the lower end of the 80% interval should probably be more like $5-15B.
6
Max_Daniel
I think we roughly agree on the direct effect of fundraising orgs, promoting effective giving, etc., from a longtermist perspective. However, I suspect I'm (perhaps significantly) more optimistic than you about 'indirect' effects from promoting good content and advice on effective giving, promoting it as a 'social norm', etc. This is roughly because of the view I state under the first key uncertainty here, i.e., I suspect that encountering effective giving can for some people be a 'gateway' toward more impactful behaviors. One issue is that I think the sign and absolute value of these indirect effects are not that well correlated with the proxy goals such organizations would optimize, e.g., amount of money raised. For example, I'd guess it's much better for these indirect effects if the org is also impressive intellectually or entrepreneurially; if it produces "evangelists" rather than just people who'll start giving 1% as a 'hobby', are quiet about it, and otherwise don't think much about it; if it engages in higher-bandwidth interactions with some of its audience; and if, in communications it at least sometimes mentions other potentially impactful behaviors. So, e.g., GiveWell by these lights looks much better than REG, which in turns looks much better than, say, buying Facebook ads for AMF. (I'm also quite uncertain about all of this. E.g., I wouldn't be shocked if after significant additional consideration I ended up thinking that the indirect effects of promoting effective giving - even in a 'good' way - were significantly net negative.)
3
Jonas V
When I said that the EAIF and LTFF have room for more funding, I didn't mean to say "EA research is funding-constrained" but "I think some of the abundant EA research funding should be allocated here." Saying "this particular pot has room for more funding" can be fully consistent with the overall ecosystem being saturated with funding. I think it definitely helps a lot with neartermist interventions. I also think it still makes a substantial* difference in longtermism, including research – but the difference you can make through direct work is plausibly vastly greater (>10x greater).

* Substantial in the sense "if you calculate the expected impact, it'll be huge", not "substantial relative to the EA community's total impact."
4
MichaelA🔸
Ah, good point. So is your independent impression that the very large donors (e.g., Open Phil) are making a mistake by not multiplying the total funding allocated to EAIF and LTFF by (say) a factor of 0.5-5? (I don't think that that is a logically necessary consequence of what you said, but seems like it could be a consequence of what you said + some plausible other premises. I ask about the very large donors specifically because things you've said elsewhere already indicate you think smaller donors are indeed often making a mistake by not allocating more funding to EAIF and LTFF. But maybe I'm wrong about that.)
7
Jonas V
I don't think anyone has made any mistakes so far, but they would (in my view) be making a mistake if they didn't allocate more funding this year. Edit: Hmm, why do you think this? I don't remember having said that.
5
MichaelA🔸
Actually I now think I was just wrong about that, sorry. I had been going off of vague memories, but when I checked your post history now to try to work out what I was remembering, I realised it may have been my memory playing weird tricks based on your donor lottery post, which actually made almost the opposite claim. Specifically, you say "For this reason, we believe that a donor lottery is the most effective way for most smaller donors to give the majority of their donations, for those who feel comfortable with it."  (Which implies you think that that's a more effective way for most smaller donors to give than giving to the EA Funds right away - rather than after winning a lottery and maybe ultimately deciding to give to the EA Funds.) I think I may have been kind-of remembering what David Moss said as if it was your view, which is weird, since David was pushing against what you said.  I've now struck out that part of my comment. 
2
MichaelA🔸
FWIW, I agree that your concerns about "Reducing the financial costs of testing fit and building knowledge & skills for EA-aligned research careers" are well worth bearing in mind and that they make at least some versions of this intervention much less valuable or even net negative.
2
MichaelA🔸
I think I agree with this, though part of the aim for the database would be to help people find mentors (or people/resources that fill similar roles). But this wasn't described in the title of that section, and will be described in the post coming out in a few weeks, so I'll leave this topic there :)
2
MichaelA🔸
Thanks for this detailed response! Lots of useful food for thought here, and I agree with much of what you say. Regarding Effective Thesis:

  • I think I agree that "most research areas relevant to longtermism require high context in order to contribute to", at least given our current question lists and support options.
    • I also think this is the main reason I'm currently useful as a researcher despite (a) having little formal background in the areas I work in and (b) there being a bunch of non-longtermist specialists who already work in roughly those areas.
  • On the other hand, it seems like we should be able to identify many crisp, useful questions that are relatively easy to delegate to people - particularly specialists - with less context, especially if accompanied with suggested resources, a mentor with more context, etc.
    • E.g., there are presumably specific technical-ish questions related to pathogens, antivirals, climate modelling, or international relations that could be delegated to people with good subject area knowledge but less longtermist context.
    • I think in theory Effective Thesis or things like it could contribute to that.
  • After writing that, I saw you said the following, so I think we mostly agree here: "I think that it is quite hard to get non-EAs to do highly leveraged research of interest to EAs. I am not aware of many examples of it happening. (I actually can't think of any offhand.) I think this is bottlenecked on EA having more problems that are well scoped and explained and can be handed off to less aligned people. I'm excited about work like The case for aligning narrowly superhuman models, because I think that this kind of work might make it easier to cause less aligned people to do useful stuff."
  • OTOH, in terms of examples of this happening, I think at least Luke Muehlhauser seems to believe some of this has happened for Open Phil's AI governance grantmaking (though I haven't looked into the details myself), based

FYI, someone I know is interested in applying to the EAIF, and I told them about this post, and after reading it they replied "btw the Q&A responses at the EAIF were SUPER useful!"

I mention this as one small data point to help the EAIF decide whether it's worth doing such Ask Us Anythings (AUAs?) in future and how much time to spend on them. By extension, it also seems like (even weaker) evidence regarding how useful detailed grant writeups are.

4
Jonas V
Thanks, this is useful!

Some related questions with slightly different framings: 

  1. What crucial considerations and/or key uncertainties do you think the EAIF fund operates under?
  2. What types/lines of research do you expect would be particularly useful for informing the EAIF's funding decisions?
  3. Do you have thoughts on what types/lines of research would be particularly useful for informing other funders'  funding decisions in the "EA infrastructure" space?
  4. Do you have thoughts on how the answers to questions 2 and 3 might differ?

Some key uncertainties for me are: 

  • What products and clusters of ideas work as 'stepping stones' or 'gateways' toward (full-blown) EA [or similarly 'impactful' mindsets]?
    • By this I roughly mean: for various products X (e.g., a website providing charity evaluations, or a book, or ...), how does the unconditional probability P(A takes highly valuable EA-ish actions within their next few years) compare to the conditional probability P(A takes highly valuable EA-ish actions within their next few years | A now encounters X)? (A small illustrative sketch of this comparison follows below.)
    • I weakly suspect that me having different views on this than other fund managers was perhaps the largest source of significant disagreements with others.
    • It tentatively seems to me that I'm unusually optimistic about the range of products that work as stepping stones in this sense. That is, I worry less if products X are extremely high-quality or accurate in all respects, or agree with typical EA views or motivations in all respects. Instead, I'm more excited about increasing the reach of a wider range of products X that meet a high but different bar of roughly 'taking the goal of effectively improving the world seriously by making a sincere effort to improve on m
... (read more)
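To make the probability comparison in that first bullet concrete, here is a minimal sketch with invented, survey-style numbers - the cohort sizes and counts below are purely hypothetical and do not come from the fund or any actual survey:

```python
# Hypothetical illustration of comparing P(action) with P(action | encounters X).
# All numbers are made up for the sake of the example.
general_cohort = 100_000    # broad comparison cohort, approximating the unconditional rate
general_actions = 50        # of whom this many take highly valuable EA-ish actions

exposed_cohort = 10_000     # people who encounter product X
exposed_actions = 25        # of whom this many take highly valuable EA-ish actions

p_action = general_actions / general_cohort          # approx. unconditional P(action)
p_action_given_x = exposed_actions / exposed_cohort  # conditional P(action | X)

print(f"P(action)     = {p_action:.4f}")
print(f"P(action | X) = {p_action_given_x:.4f}")
print(f"Uplift        = {p_action_given_x / p_action:.1f}x")
```

On these made-up numbers, encountering X would correspond to a 5x uplift; the open question in the comment above is what that ratio actually looks like for different products X.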
4
MichaelA🔸
Your points about "How can we structure the EA community in such a way that it can 'absorb' very large numbers of people while also improving the allocation of talent or other resources?" are perhaps particularly thought-provoking for me. I think I find your points less convincing/substantive than you do, but I hadn't thought about them before and I think they do warrant more thought/discussion/research. On this, readers may find the value of movement growth entry/tag interesting. (I've also made a suggestion on the Discussion page for a future editor to try to incorporate parts from your comment into that entry.)

Here are some quick gestures at the reasons why I think I'm less convinced by your points than you. But I don't actually know my overall stance on how quickly, how large, and simply how the EA movement should grow. And I expect you've considered things like this already - this is maybe more for the readers' benefit, or something.

  • As you say, "perhaps maths relies crucially on there being a consensus of what important research questions are plus it being easy to verify what counts as their solution, as well as generally a better ability to identify talent and good work that is predictive of later potential. Maybe EA is just too 'preparadigmatic' to allow for something like that."
  • I think we currently have a quite remarkable degree of trust, helpfulness, and coordination. I think that that becomes harder as a movement grows, and particularly if it grows in certain ways.
    • E.g., if we threw 2000 additional randomly chosen people into an EA conference, it would no longer make sense for me to spend lots of time having indiscriminate 1-1 chats where I give career advice (currently I spend a fair amount of time doing things like this, which reduces how much time I have for other useful things). I'd either have to stop doing that or find some way of "screening" people for it, which could impose costs and awkwardness on both parties.
  • Currently we have t
2
Max_Daniel
I mean I'm not sure how convinced I am by my points either. :) I think I mainly have a reaction of "some discussions I've seen seem kind of off, rely on flawed assumptions or false dichotomies, etc." - but even if that's right, I feel way less sure what the best conclusion is.

One quick reply: I think the "particularly if it grows in certain ways" is the key part here, and that basically we should talk 90% about how to grow and 10% about how much to grow. I think one of my complaints is precisely that some discussions seem to construe suggestions of growing faster, or aiming for a larger community, as implying "adding 2,000 random people to EAG". But to me this seems to be a bizarre strawman. If you add 2,000 random people to a maths conference, or drop them into a maths lecture, it will be a disaster as well! I think the key question is not "what if we make everything we have bigger?" but "can we build a structure that allows separation between different subcommunities, and a controlled flow of talent and other resources between them?"

A somewhat grandiose analogy: Suppose that at the dawn of the agricultural revolution you're a central planner tasked with maximizing the human population. You realize that by introducing agriculture, much larger populations could be supported as far as the food supply goes. But then you realize that if you imagine larger population densities and group sizes while leaving everything else fixed, various things will break - e.g., kinship-based conflict resolution mechanisms will become infeasible. What should you do? You shouldn't conclude that, unfortunately, the population can't grow. You should think about division of labor, institutions, laws, taxes, cities, and the state.
4
MichaelA🔸
(Yeah, this seems reasonable.  FWIW, I used "if we threw 2000 additional randomly chosen people into an EA conference" as an example precisely because it's particularly easy to explain/see the issue in that case. I agree that many other cases wouldn't just be clearly problematic, and thus I avoided them when wanting a quick example. And I can now see how that example therefore seems straw-man-y.)
2
Greg_Colbourn
Interesting discussion. What if there was a separate brand for a mass movement version of EA?
4
MichaelA🔸
Thanks! This is really interesting. Minor point: I think it may have been slightly better to make a separate comment for each of your top-level bullet points, since they are each fairly distinct, fairly substantive, and could warrant specific replies.
2
MichaelA🔸
[The following comment is a tangent/nit-pick, and doesn't detract from your actual overall point.] I agree that that sort of content seems useful, and also that "for most people changing their diet is not among the most effective things they can do to improve the world (or even help farmed animals now)". But I think the "even though" doesn't quite make sense: I think part of the target audience for at least the Tomasik article was probably also people who might use their donations or careers to reduce animal suffering. And that's more plausibly the best way for them to help farmed animals now, and such people would also benefit from analyses of the contributions of different animal products to animal suffering. (But I'd guess that that would be less true for Galef's article, due to having a less targeted audience. That said, I haven't actually read either of these specific articles.)
4
Max_Daniel
(Ah yeah, good point. I agree that the "even though" is a bit off because of the things you say.)
2
MichaelA🔸
In case any readers are interested, they can see my thoughts on that piece here: Quick thoughts on Kelsey Piper's article "Is climate change an “existential threat” — or just a catastrophic one?" Personally, I currently feel unsure whether it'd be very positive, somewhat positive, neutral, or somewhat negative for people to be exposed to that piece or pieces like it. But I think this just pushes in favour of your overall point that "What products and clusters of ideas work as 'stepping stones' or 'gateways' toward (full-blown) EA [or similarly 'impactful' mindsets]?" is a key uncertainty and that more clarity on that would be useful. (I should also note that in general I think Kelsey's work is of remarkably high quality, especially considering the pace she's producing things at, and I'm very glad she's doing the work she's doing.)
7
Michelle_Hutchinson
Here are a few things:

  • What proportion of the general population might fully buy in to EA principles if they came across them in the right way, and what proportion of people might buy in to some limited version (eg become happy to donate to evidence backed global poverty interventions)? I’ve been pretty surprised how much traction ‘EA’ as an overall concept has gotten. Whereas I’ve maybe been negatively surprised by some limited version of EA not getting more traction than it has. These questions would influence how excited I am about wide outreach, and about how much I think it should be optimising for transmitting a large number of ideas vs simply giving people an easy way to donate to great global development charities.
  • How much and in which cases research is translated into action. I have some hypothesis that it’s often pretty hard to translate research into action. Even in cases where someone is deliberating between actions and someone else in another corner of the community is researching a relevant consideration, I think it’s difficult to bring these together. I think maybe that inclines me towards funding more ‘getting things done’ and less research than I might naturally be tempted to. (Though I’m probably pretty far on the ‘do more research’ side to start with.) It also inclines me to fund things that might seem like good candidates for translating research into action.
  • How useful influencing academia is. On the one hand, there are a huge number of smart people in academia, who would like to spend their careers finding out the truth. Influencing them towards prioritising research based on impact seems like it could be really fruitful. On the other hand, it’s really hard to make it in academia, and there are strong incentives in place there, which don’t point towards impact. So maybe it would be more impactful for us to encourage people who want to do impactful work to leave academia and be able to focus their research purely on impact. Currently
2
MichaelA🔸
Interesting, thanks. (And all the other answers here have been really interesting too!) Is what you have in mind the sort of thing the "awareness-inclination model" in How valuable is movement growth? was aiming to get at? Like further theorising and (especially?) empirical research along the lines of that model, breaking things down further into particular bundles of EA ideas, particular populations, particular ways of introducing the ideas, etc.?

The Long-Term Future Fund put together a doc on "How does the Long-Term Future Fund choose what grants to make?" How, if at all, does the EAIF's process for choosing what grants to make differ from that? Do you have or plan to make a similar outline of your decision process?

We recently transferred a lot of the 'best practices' that each fund (especially the LTFF) discovered to all the other funds, and as a result, I think it's very similar and there are at most minor differences at this point.

5
Neel Nanda
What were the most important practices you transferred?
  • Having an application form that asks some more detailed questions (e.g., path to impact of the project, CV/resume, names of the people involved with the organization applying, confidential information)
  • Having a primary investigator for each grant (who gets additional input from 1-3 other fund managers), rather than having everyone review all grants
  • Using score voting with a threshold (rather than ordering grants by expected impact, then spending however much money we have; a minimal sketch follows after this list)
  • Explicitly considering giving applicants more money than they applied for
  • Offering feedback to applicants under certain conditions (if we feel like we have particularly useful thoughts to share with them, or they received an unusually high score in our internal voting)
  • Asking for references in the first stage of the application form, but without requiring applicants to clear them ahead of time (so it's low-effort for them, but we already know who the references would be)
  • Having an automatically generated google doc for each application that contains all the information related to a particular grant (original application, evaluation, internal discussion, references, applicant emails, etc.)
  • Writing in-depth payout reports to build trust and help improve community epistemics; writing shorter, lower-effort payout reports once that's done and we want to save time
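To illustrate the score-voting-with-a-threshold practice mentioned above, here is a minimal sketch; the scoring scale, threshold, and applications are all made up, and this is not EA Funds' actual tooling:

```python
# Minimal sketch of score voting with a threshold: each fund manager scores each
# application, and anything whose mean score clears a fixed bar is funded -
# rather than ranking grants by expected impact and spending down a fixed budget.
# The scale (-5..+5), threshold, and applications are hypothetical.
from statistics import mean

THRESHOLD = 2.0

scores = {
    "University group support": [4, 3, 4],
    "Workshop series": [2, 1, 3],
    "Speculative outreach project": [1, -1, 0],
}

for application, votes in scores.items():
    avg = mean(votes)
    decision = "fund" if avg >= THRESHOLD else "decline"
    print(f"{application}: mean score {avg:.1f} -> {decision}")
```

One upshot of this design is that the number of grants made is driven by how many applications clear the bar rather than by a fixed budget, which matches the "rather than ... spending however much money we have" contrast above.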
2
BrianTan
I think you meant EAIF, not AWF :)
2
MichaelA🔸
(Ah yes, thanks, fixed. This was a casualty of copy-pasting a bunch of questions over from other AMAs.)

As a different phrasing of Michael's question on forecasting, do EAIF grantmakers have implicit distributions of possible outcomes in their minds when making a grant, either a) in general, or b) for specific grants? 

If so, what shape do those distributions (usually) look like? (an example of what I mean is "~log-normal minus a constant" or "90% of the time, ~0, 10% of the time, ~power law")
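To make the two candidate shapes named above concrete, here is a small sampling sketch; the parameters are made up purely for illustration and are not something the fund has stated:

```python
# Illustrative sampling of two heavy-tailed outcome shapes; all parameters invented.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# (a) "~log-normal minus a constant": outcomes cluster at modest values, a few are
#     very large, and the subtracted constant allows some to be net negative.
lognormal_minus_c = rng.lognormal(mean=0.0, sigma=1.0, size=n) - 0.5

# (b) "90% of the time ~0, 10% of the time ~power law": a zero-inflated heavy tail.
is_hit = rng.random(n) < 0.10
power_law_tail = rng.pareto(a=1.5, size=n) + 1.0
zero_inflated = np.where(is_hit, power_law_tail, 0.0)

for name, samples in [("log-normal minus a constant", lognormal_minus_c),
                      ("zero-inflated power law", zero_inflated)]:
    print(f"{name}: median={np.median(samples):.2f}, "
          f"mean={np.mean(samples):.2f}, p99={np.percentile(samples, 99):.2f}")
```

In both cases the mean sits well above the median, which is the practical sense in which a portfolio of grants with these outcome shapes is "heavy-tailed".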

If not, are your approaches usually more quantitative (eg explicit cost-effectiveness models) or more qualitative/intuitive (eg more heuristic-based and verbal-ar... (read more)

4
Max_Daniel
I think I often have an implicit intuition about something like "how heavy-tailed is this grant?". But I also think most grants I'm excited about are either at least somewhat heavy-tailed or aimed at generating information for a decision about a (potentially heavy-tailed) future grant, so this selection effect will reduce differences between grants along that dimension. But I think for less than 1/10 of the grants I think about I will have any explicit quantitative specification of the distribution in mind. (And if I have one, it will be rougher than a full distribution, e.g. a single "x% chance of no impact" intuition.)

Generally I think our approaches are more often qualitative/intuitive than quantitative. There are rare exceptions, e.g. for the children's book grant I made a crappy cost-effectiveness back-of-the-envelope calculation just to check if the grant seemed like a non-starter based on this. As far as I remember, that was the only such case this round.

Sometimes we will discuss specific quantitative figures, e.g., the amount of donations a fundraising org might raise within a year. But our approach for determining these figures will then in turn usually be qualitative/intuitive rather than based on a full-blown quantitative model.
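The back-of-the-envelope calculation mentioned above isn't reproduced in this thread; purely as an illustration of what that kind of crude sanity check can look like, with every number below invented rather than taken from the actual evaluation:

```python
# Hypothetical back-of-the-envelope check for a book-style outreach grant.
# All figures are placeholders to show the structure of such a check.
grant_cost = 50_000               # hypothetical grant size, USD

copies_reached = 20_000           # hypothetical number of readers over a few years
p_reader_later_engages = 0.002    # hypothetical chance a reader (or their family) later engages with EA ideas
value_per_engaged_person = 5_000  # hypothetical value per engaged person, in donation-equivalent USD

expected_value = copies_reached * p_reader_later_engages * value_per_engaged_person
print(f"Expected value ~ ${expected_value:,.0f} vs. cost ${grant_cost:,.0f}")
print("Passes the crude sanity check" if expected_value > grant_cost
      else "Looks like a non-starter on these assumptions")
```

On these made-up inputs the expected value (20,000 x 0.002 x 5,000 = $200,000) would clear the cost by roughly 4x, but small changes to any single input could flip the conclusion - which is why such a check is only used to rule out obvious non-starters.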

Update: Max Daniel is now the EA Infrastructure Fund's chairperson. See here.

Have you considered providing small pools of money to people who express potential interest in trying out grantmaking and who you have some reason to believe might be good at it? This could be people the fund managers already know well, people who narrowly missed out on being appointed as full fund managers, or people who go through a short application process for these small pools specifically.

Potential benefits:

  • That could directly increase the diversity of perspectives represented in total in "EA infrastructure" funding decisions
  • That could help wi
... (read more)
9
Jonas V
I have a pretty strong view that I don't fully trust any single person's judgment (including my own), and that aggregating judgments (through discussion and voting) has been super helpful for the EAIF's, Animal Welfare Fund's (AWF's), and especially the Long-Term Future Fund's (LTFF's) overall judgment ability in the past. E.g., I can recall a bunch of (in my view) net-negative grants that didn't end up being made thanks to this sort of aggregation, and also some that did end up happening – where it ultimately turned out that I was wrong.

I have also heard through the grapevine that previous experiments in this direction didn't go very well (mostly in that the 'potential benefits' you listed didn't really materialize; I don't think anything bad happened). Edit: I don't give a lot of weight to this though; I think perhaps there's a model that works better than what has been tried in the past.

I also think that having more discussion between grantmakers seems useful for improving judgment over the longer term. I think the LTFF partly has good judgment because it has discussed a lot of disagreements that generalize to other cases, has exchanged a lot of models/gears, etc.

For this reason, I'm fairly skeptical of any approach that gives a single person full discretion over some funding, and would prefer a process with more engagement with a broader range of opinions of other grantmakers. (Edit: Though others disagree somewhat, and will hopefully share their views as well.)

Our current solution is to appoint guest managers instead, as elaborated on here: https://forum.effectivealtruism.org/posts/ek5ZctFxwh4QFigN7/ea-funds-has-appointed-new-fund-managers

Appointing guest managers takes quite a lot of time, so I'm not sure how many we will have in the future. Another idea that I think would be interesting is to implement your suggestion with teams of potential grantmakers (rather than individuals), like the Oxford Prioritisation Project. Again it would take some capa
8
Buck
I don't think this has much of an advantage over other related things that I do, like

  • telling people that they should definitely tell me if they know about projects that they think I should fund, and asking them why
  • asking people for their thoughts on grant applications that I've been given
  • asking people for ideas for active grantmaking strategies

At one point an EA fund manager told me something like, "the infrastructure fund refuses to support anything involving rationality/rationalists as a policy." Did a policy like this exist? Does it still?

9
Buck
Like Max, I don't know about such a policy. I'd be very excited to fund promising projects to support the rationality community, eg funding local LessWrong/Astral Codex Ten groups.
7
Max_Daniel
I'm not aware of any such policy, which means that functionally it didn't exist for this round. I don't know about what policies may have existed before I joined the EAIF, and generally don't have much information about how previous fund managers made decisions.

FWIW, I find it hard to believe that there was a policy like the one you suggest, at least for broad construals of 'anything involving'. For instance, I would guess that some staff members working for organizations that were funded by the EAIF in previous rounds might identify as rationalists, and so if this counted as "something involving rationalists", previous grants would be inconsistent with that policy. It sounds more plausible to me that perhaps previous EAIF managers agreed not to fund projects that primarily aim to build the rationality community or promote standard rationality content and don't have a direct connection to the EA community or EA goals. (But again, I don't know if that was the case.)

Speaking personally, and as is evident from some grants we made this round (e.g. this one), I'm generally fairly open to funding things that don't have an "EA" branding and that contribute to "improving the work of projects that use the principles of effective altruism" (cf. official fund scope) in a rather indirect way. (See also some related thoughts in a different AMA answer.) Standard rationality/LessWrong content is not among the non-EA-branded things I'm generally most excited to promote, but I would still consider applications to that effect on a case-by-case basis rather than deciding based on a blanket policy. In addition, other fund managers might be more generically positive about promoting rationality content or building the rationality community than I am.

In the Animal Welfare Fund AMA, I asked: 

Have you considered sometimes producing longer write-ups that somewhat extensively detail the arguments you saw for and against giving to a particular funding opportunity? (Perhaps just for larger grants.)

This could provide an additional dose of the kind of benefits already provided by the current payout reports, as well as some of the benefits that having an additional animal welfare charity evaluator would provide. (Obviously there's already ACE in this space, but these write-ups could focus on funding opport

... (read more)

My take on this (others at the EAIF may disagree and may convince me otherwise):

I think EA Funds should be spending less time on detailed reports, as they're not read by that many people. Also, a main benefit is people improving their thinking based on reading them (it seems helpful for improving one's judgment ability to be able to read very concrete practical decisions and how they were reached), but there are many such reports already at this point, such that writing further ones doesn't help that much – readers can simply go back to past reports and read those instead. I think EA Funds should produce such detailed reports every 1-2 years (especially when new fund managers come on board, so interested donors can get a sense of their thinking), and otherwise focus more on active grantmaking.

In addition, I think it would make sense for us to publish reports on whichever topic seems most important to us to communicate about – perhaps an intervention report, perhaps an important but underappreciated consideration, or a cause area. I think this should probably happen on an ad-hoc basis.

2
Max_Daniel
While I produced a number of detailed reports for this round, I agree with this.

re 1: I expect to write similarly detailed writeups in future.

re 2: I think that would take a bunch more of my time and not clearly be worth it, so it seems unlikely that I'll do it by default. (Someone could try to pay me at a high rate to write longer grant reports, if they thought that this was much more valuable than I did.)

re 3: I agree with everyone that there are many pros of writing more detailed grant reports (and these pros are a lot of why I am fine with writing grant reports as long as the ones I wrote). By far the biggest con is that it takes more time. The secondary con is that if I wrote more detailed grant reports, I'd have to be a bit clearer about the advantages and disadvantages of the grants we made, and this would involve me having to be clearer about kind of awkward things (like my detailed thoughts on how promising person X is vs person Y); this would be a pain, because I'd have to try hard to write these sentences in inoffensive ways, which is a lot more time consuming and less fun.

re 4: Yes I think this is a good idea, and I tried to do that a little bit in my writeup about Youtubers; I think I might do it more in future.

Speaking for myself, I'm interested in increasing the detail in my write-ups a little over the medium term (perhaps making them typically more like the length of the write-up for Stefan Schubert). I doubt I'll go all the way to making them as comprehensive as Max's.
Pros:

  • Particularly useful for donors to the fund and potential applicants to get to know the reasoning processes of grant makers when we've just joined and haven't yet made many grants
  • Getting feedback from others on what parts of my reasoning process in making grants seem better and worse seems more likely to be useful than simply feedback on 'this grant was one I would / wouldn't have made' 

Cons:

  • Time writing reports trades against time evaluating grants. The latter seems more important to me at the current margin. That's partly because I'd have liked to have decidedly more time than I had for evaluating grants and perhaps for seeking out people I think would make good grantees.
  • I find it hard to write up grants in great detail in a way that's fully accurate and balanced without giving grantees public negative feedback. I'm hesitant to do much of that, and when I do it, want to do it very sensitively.

I expect to try to ... (read more)

5
Max_Daniel
While I'm not sure I'll produce similarly long write-ups in the future, FWIW for me some of the pros of long writeups are:

  • It helps me think and clarify my own views.
  • I would often find it more time-consuming to produce a brief writeup, except perhaps for writeups that have a radically more limited scope - e.g., just describing what the grant "buys", but not saying anything about my reasoning for why I thought the grant is worth making.
  1. What processes do you have for monitoring the outcome/impact of grants?
  2. Relatedly, do  the EAIF fund managers make forecasts about potential outcomes of grants?
    • I saw and appreciated that Ben Kuhn made a forecast related to the Giving Green grant.
    • I'm interested in whether other fund managers are making such forecasts and just not sharing them in the writeup or are just not making them - both of which are potentially reasonable options.
  3. And/or do you write down in advance what sort of proxies you'd want to see from this grant after x amount of time?
    • E.g.,
... (read more)
2
Buck
I am planning on checking in with grantees to see how well they've done, mostly so that I can learn more about grantmaking and to know if we ought to renew funding. I normally didn't make specific forecasts about the outcomes of grants, because operationalization is hard and scary. I feel vaguely guilty about not trying harder to write down these proxies ahead of time. But I empirically don't, and my intuitions apparently don't feel that optimistic about working on this. I am not sure why. I think it's maybe just that operationalization is super hard and I feel like I'm going to have to spend more effort figuring out reasonable proxies than actually thinking about the question of whether this grant will be good, and so I feel drawn to a more "I'll know it when I see it" approach to evaluating my past grants.