
 

In our EA Funds launch post, we noted that:

 

[I]n the future we hope to encourage new fund managers to create new funds with different focus areas than the current options.

 

As our three-month trial draws to a close we’re now thinking more seriously about adding new funds to EA Funds. However, there are a number of open questions that would determine how many funds we might add, which funds might be added, and how quickly we’d be able to add new funds. I outline the relevant open questions as I see them below.

 

CEA plans to discuss adding new funds during our team retreat after EA Global: Boston. The goal of this post is to get feedback on these questions from the community to help inform that discussion. Please provide feedback in the comments below. If you’re attending EA Global: Boston, you can also grab me for a quick chat there.

 

Below I present each open question, try to explain the full range of options available, and then outline some of the considerations that I think are relevant in addressing the question. The goal is to remain neutral on the answer while still providing relevant information. The inclusion of an option or a consideration does not necessarily imply endorsement of that option or consideration by me or by others at CEA.

 

If you think there are open questions to address that I have missed, please feel free to suggest them in the comments.

 

Open Questions

 

Question 1: Should we add new funds? If so, when?

The first question is whether we should add new funds at all and, if so, on what timeline we should add them. Part of the answer depends on how much money is moving through EA Funds. For reference, EA Funds has processed $775,000 so far, with $31,000 in monthly recurring donations. We expect the pace of growth in the near future to be slower than it was in the first three months as the initial buzz around EA Funds dies down.

 

 

Potential options

Don’t add new funds

The first option is that we shouldn’t add new funds at all. For example, we might want to tweak the existing funds by selecting new fund managers or by having multiple people manage certain funds, but we might not want to expand past a small number of funds that represent the most widely-supported causes.

 

Add new funds, but later

We might want to add new funds, but only after EA Funds has a longer track record or has reached certain milestones. For example, we might only want to add funds after a year, or once we’ve moved a certain amount of money, or once we’ve reached a certain amount of money in monthly recurring donations.

 

Add new funds now

Finally, we might opt to add new funds very soon.

 

 

Considerations

Future growth of EA Funds

Adding new funds depends, at least in part, on how much money might be available to support them, which in turn depends on EA Funds’ future growth prospects.

 

This is hard to determine, but here are some guesses. First, I don’t expect us to raise as much money over the next three months as we did over the initial three months. Much of the money we do raise will be driven by the $31,000 in monthly recurring donations that have already been set. However, it is unlikely that donors with recurring donations will change their allocation to include new funds. This means that it may be relatively difficult to move significant amounts of money through new funds in the short term.

 

On the other hand, the user base of EA Funds is still relatively small (around 665 unique donors), so there may be significant low-hanging fruit in getting people already involved in EA to consider using the platform. Additionally, adding a fund that meets an as-yet unmet demand could cause additional money to flow through the platform in a way that doesn’t cannibalize existing funds.

 

Viewpoint diversity

All of our current funds are run by GiveWell/Open Phil staff members. As we’ve stated in the past, we aim for 50% or fewer of the fund managers to work at GiveWell/Open Phil. Adding more funds seems like the most plausible way to achieve this goal.

 

Reputation

Adding new funds that are significantly worse than the existing options might harm the reputation of EA Funds, CEA, and EA in general. Conversely, adding high-quality funds in new areas may improve the reputation of EA by further showcasing the ability of the EA community to find interesting ways of improving the world.

 

 

Question 2: What kinds of funds should we add?

Our existing funds each focus on a single broad cause area that EAs have historically supported. The existing funds were designed to give fund managers relatively wide latitude to decide what use of funds is best while also making it clear to donors what the funds might donate to.

 

One question for the future is whether we should expand EA Funds by adding new funds in new cause areas or whether we should expand by adding new funds built around themes other than cause areas.

 

Potential options

Below are some options for the kinds of funds we might add. Keep in mind that these options are not mutually exclusive, so we could pursue several of them.

 

New funds in new cause areas

We could simply add new funds in new cause areas. These would operate similarly to the existing funds.

 

New funds in existing cause areas

We could add funds in existing cause areas that have the same scope as the current funds. For example, we could add a second fund in global health and development which has the same scope as the fund managed by Elie, but which is managed by someone else.

 

Fund manager’s discretion

We could add funds that give the fund manager wide latitude to recommend a grant to whatever they think is best regardless of cause area.

 

Different approaches to existing causes

We could add funds that take a different approach to existing cause areas. For example, we could add a fund in global health and development that focuses on high-risk, high-reward projects (e.g. startups, or funding evaluations rather than direct interventions), or we could add a long-term future fund that focuses on areas other than AI safety.

 

Funds based on particular tactics

We could add funds which are focused on particular tactics instead of cause areas. For example, we could add a fund which donates only to startups or which funds research projects. These funds could operate across a variety of cause areas.

 

Funds based on normative disagreements

We could add funds which are based on specific normative disagreements. For example, we could have a fund which focuses predominantly on improving (and not necessarily saving) lives or a fund which focuses on reducing suffering.

 

Considerations

The chicken-and-egg problem for new causes

For a fund in a new cause area to succeed it needs both money and high-quality projects to support with that money. This presents different problems for EA Funds than those faced by large funders with an endowment like Open Phil. In Open Phil’s case, since it already has the money, it can declare an interest in funding some new area and then use the promise of potential funding to cause people to start new projects. If no projects show up, it can simply redirect the money to other projects.

 

However, in EA Funds, the ability of a fund to attract money is partially dependent on the existence of promising projects to fund (since a fund without plausible grantees will have a hard time getting donations). This means that EA Funds may find it difficult to catalyze activity in completely novel areas.

 

Clarity

It should be relatively easy for donors to figure out what they’re supporting if they donate to a fund. For donors willing to research, the fund page should be sufficient to help them understand each fund.

 

However, not all donors will carefully read the fund pages, and many will choose which fund pages to review based on the name and perhaps a short description of each fund. While we hope donors will look at the details of each fund, realistically the name alone may have a disproportionate effect on whether people choose to support it.

 

Fund names should satisfy two goals:

 

  1. The name should make it clear what the fund is likely to support.

  2. The name should make it clear how the fund is different from the other available funds.

 

However, some options for adding new funds present greater clarity challenges than others. For example, funds in the same cause area as existing funds will present a particular challenge in choosing names that make it easy to understand how the funds differ. Similarly, funds that operate at the fund manager’s discretion will be difficult to name in a way that makes it clear what the fund is likely to support.

 

Expanding EA’s intellectual horizons

Adding funds in areas outside of global health, animal welfare, the long-term future, and the EA community would help expand the intellectual horizons of EAs and help us find promising new cause areas.

 

 

Question 3: How should we vet new funds?

Our current funds represent problem areas that we think are especially promising and that have wide community support, and they are run by fund managers who we think have strong knowledge of, and connections in, the fund’s area. We could attempt to ensure that any new funds adhere to similar standards, or we could substantially open the platform up and allow anyone (or nearly anyone) to create a fund of their own.

 

Below I try to outline a continuum of plausible options for the degree to which we ought to vet new funds. I then outline some considerations that are relevant for deciding where we ought to fall along this continuum.

 

Potential options

No vetting

At one extreme end of the continuum, we could let anyone create a fund which they manage however they want and which anyone can donate to. To add slightly more quality control, we could require certain kinds of reporting and a standard set of information on each fund’s page.

 

Democratic vetting

We could let anyone create a fund, but only keep funds that receive a certain amount of support from the community (e.g. donations or “votes” of some kind). We could instead let anyone propose a fund, but only accept some small number of funds as determined by community support (e.g. pledges to donate).

 

Plausibility vetting

We could let anyone propose a fund, but then have CEA (or some set of trusted researchers) review the proposals and reject any funds which we think are not plausibly good candidates.

 

The precise definition of “plausibility” in this context is up for grabs, but the goal would be to reject only the funds and fund managers which seem like especially poor options. The process could use some method of democratic vetting to further narrow down the field from among the plausible options.

 

“Reasonable-person” vetting

Using the process described above, we could apply a stricter “reasonable person” standard. The goal would be to accept only funds which a reasonable person might think are better than some benchmark. For example, we could only allow funds which a reasonable person might think are better than AMF or better than the existing funds. Anyone could propose a fund and have this standard applied, or proposing a fund could be an invite-only process.

 

“Better than” vetting

Finally, we could only accept funds that CEA (or some set of trusted researchers) thinks are better than the existing options by some criterion. This is different from the reasonable-person standard because it requires that we think the fund is actually better than the existing options, not merely that we could see how someone might think it is.

 

Hybrid options

We could also combine multiple approaches to form hybrid options. Some rough ideas for how we might do this are below:

 

  • Start closed and open up over time

    • We could vet funds very closely for the first few rounds of adding new funds and then decrease the vetting requirements over time.

  • Low vetting plus nudges

    • We could provide very little vetting for creating a fund, but nudge users towards the funds that we think are most promising. For example, the default [allocation page](https://app.effectivealtruism.org/donations/new) could include only highly promising funds, and less promising options could be made less immediately obvious.

 

Considerations

Below are some considerations that might factor into the decision of how closely to vet new funds. These are presented in no particular order.

  

Inclusion in EA Funds as a nudge

User behavior so far suggests that many people choose to split their donation among several funds instead of donating all of their money to a single fund. This suggests that donors see inclusion in EA Funds as a sign of quality, and that a fund’s inclusion nudges people to donate to causes they might not have given to otherwise. This was also borne out in some Skype conversations we had with early users.

 

This increases the potential for new funds to cause harm by attracting money that might have been better spent elsewhere.

 

Administrative costs

Each fund adds a small but nontrivial administrative cost to CEA.

 

For each fund, CEA needs to communicate with the fund manager regularly about the amount of money available, about any new grant recommendations, and about posting updates to the website. We also incur administrative costs every time a grant is made, as we need the trustees to approve the grant and we need to work with the charity to get them the money. We could probably develop systems to decrease administrative costs if the scale of the project required this, but we likely wouldn’t be able to do so in the short term.

 

Reputation

Lower-quality funds might harm the reputation of EA Funds, CEA, and EA in general.

 

Recruiting high-quality fund managers

Low-quality funds might make it harder to acquire (and retain) high-quality fund managers as being associated with the project becomes less prestigious.

 

Researcher recruitment

One source of value from EA Funds is that it might help incentivize talented researchers to do high-quality work on where people ought to donate. Lower barriers to entry in setting up a fund might increase the pipeline of researcher talent that EA Funds helps create.

 

Funding externally controversial projects

One affordance we’d like EA Funds to have is the ability to fund high-impact but externally controversial projects.

 

Plausibly, the more funds we have, and the more EA Funds is an open platform, the less the actions of a single fund will negatively affect the platform as a whole. So, we might have more affordance to fund controversial projects by adding more funds.

 

New funds and acquiring new users

It seems plausible that more funds would make it easier to attract more users, for two reasons. First, when someone sets up a fund they will likely reach out to their network to get people to donate, which may help us acquire new users. Second, the more variety we offer, the more likely it is that donors will find funds that strongly resonate with them.

 

The marketplace of ideas

Lower barriers to entry would promote a more open and thriving marketplace of ideas about where people should donate.

 

Expertise

EA Funds was conceived as a way of making individuals’ donation decisions easier by allowing them to draw on the judgment of people or groups who have greater subject-matter expertise and are more up to date with the latest research on their fund’s topic, current funding opportunities in the space, and organizational funding constraints. There is a tradeoff between creating fewer new funds that are genuinely expert-led and a greater number of funds where the average level of expertise is lower.

 

 

Conclusion

This post has attempted to describe some of the open questions on EA Funds and the relevant considerations as a way to solicit feedback and new ideas from the EA community. I look forward to a discussion in the comments here and in person for anyone at EA Global: Boston this weekend.

 

The next steps for this process are for me to review comments to this post and to discuss the topic with the rest of the CEA team. Afterward, I plan to write a follow-up post that outlines either the option we selected and why or the options we're currently deciding between. If you have thoughts that you'd prefer not to share here, feel free to email me at kerry@effectivealtruism.org.

 

Please note that due to EA Global: Boston, CEA staff might be slower to respond to comments than usual.

 

 

 

 

Comments

Just wanted to mention that I thought this was a really good post. I think it did a good job of asking for community input at a time when it's potentially decision-relevant, but where enough considerations are known that some plausible options can be put forth.

I think it also did a good job of describing lots of considerations without biasing the reader strongly in favor of or against particular ones.

Thanks for this, Kerry. Very much appreciate the update.

Three funds I'd like to see:

  1. The 'life-improving' or 'quality of life'-type fund that tries to find the best way to increase the happiness of people whilst they are alive. My view on morality leads me to think that is what matters most. This is the area I do my research on too, so I'd be very enthusiastic to help whoever the fund manager was.

  2. A systemic change fund. Part of this would be reputational (i.e. no one could then complain EAs don't take systemic change seriously); another part is that I'd really like to see what the fund manager would choose to give money to if it had to go to systemic change. I feel that would be a valuable learning experience.

  3. A 'moonshots' fund that supported high-risk, potentially high-reward projects. For reasons similar to 2 I think this would be a really useful way for us to learn.

My general thought is the more funds the better, presuming you can find qualified enough people to run them. It has the positive effect of demonstrating EA's openness and diversity, which should mollify our critics. As mentioned, it provides chances to learn stuff. And it strikes me as unlikely that new funds would divert much money away from the current options. Suppose we had an EA environmentalism fund. I assume people who would donate to that wouldn't have been donating to, say, the health fund already. They'd probably be supporting green charities instead.

Now that you mention it, I think this would be a much more interesting way to divide up funds. I have basically no idea whether AI safety or anti-factory farming interventions are more important; but given the choice between a "safe, guaranteed to help" fund and a "moonshot" fund I would definitely donate to the latter over the former. Dividing up by cause area does not accurately separate donation targets along the lines on which I am most confident (not sure if that makes sense). I would much rather donate to a fund run by a person who shares my values and beliefs than a fund for a specific cause area, because I'm likely to change my mind about which cause area is best, and perhaps the fund manager will, too, and that's okay.

Some possible axes:

  1. life-improving vs. life-saving (or, similarly, total view vs. person-affecting view)
  2. safe bets vs. moonshots
  3. suffering-focused vs. "classical"
  4. short-term vs. far future

Having all possible combinations just along these axes would require 16 funds, though, so in practice this won't work exactly as I've described.
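For reference, the 16 comes from treating each of the four axes above as an independent binary choice:

\[
2 \times 2 \times 2 \times 2 = 2^{4} = 16 \ \text{possible funds}
\]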

[anonymous]

I have basically no idea whether AI safety or anti-factory farming interventions are more important; but given the choice between a "safe, guaranteed to help" fund and a "moonshot" fund I would definitely donate to the latter over the former. Dividing up by cause area does not accurately separate donation targets along the lines on which I am most confident (not sure if that makes sense).

Great idea. This makes sense to me.

Yup! I've always seen 'animals v poverty v xrisk' not as three random areas, but three optimal areas given different philosophies:

poverty = only short term

animals = all conscious suffering matters + only short term

xrisk = long term matters

I'd be happy to see other philosophical positions considered.

Mostly agree, but you need a couple more assumptions to make that work.

poverty = person-affecting view of population ethics or pure time discounting + the belief that poverty relief is the best way to increase well-being (I'm not sure it is. See my old forum post.)

Also, you could split poverty (things like Give Directly) from global health (AMF, SCI, etc.). You probably need a person-affecting view or pure time discounting if you support health over x-risk, unless you're just really sceptical about x-risks.

animals = I think animals are only a priority if you believe in an impersonal population ethic like totalism (maximise happiness over the history of the universe, hence creating happy life is good), and you either do pure time discounting or you're suffering-focused (i.e. unhappiness counts more than happiness)

If you're a straightforward presentist (holding a person-affecting population ethic on which only presently existing things count), which is what you might mean by 'short term', you probably shouldn't focus on animals. Why? Animal welfare reforms don't benefit the presently existing animals, but the next generation of animals, who don't count on presentism as they don't presently exist.

Good point on the axes. I think we would, in practice, get fewer than 16 funds, for a couple of reasons.

  1. It's hard to see how some funds would, in practice, differ. For instance, is AI safety a moonshot or a safe bet if we're thinking about the future?

  2. The life-saving vs life-improving point only seems relevant if you've already signed up to a person-affecting view. Talking about 'saving lives' of people in the far future is a bit strange (although you could distinguish between a far future fund that tried to reduce X-risk vs one that invested in ways to make future people happier, such as genetic engineering).

[anonymous]

Hey Michael, great ideas. I'd like to see all of these as well. My concern would just be whether there are charities available to fund in those areas. Do you have some potential grant recipients for these funds in mind?

Hello Kerry. Building on what Michael Dickens said, I now think the funds need to be more tightly specified before we can pick the most promising recipients within each. For instance, imagine we have a 'systemic change' fund, presumably a totalist systemic change fund would be different from a person-affecting, life-improving one. It's possible they might consider the same things top targets, but more work would be required to show that.

Narrowing down then:

Suppose we had a life-improving fund using safe bets. I think charities like Strong Minds and Basic Needs (mental health orgs) are good contenders, although I can't comment on their organisational efficiency.

Suppose we have a life-improving fund doing systemic change. I assume this would be trying to bring about political change via government policies, either at the domestic or international level. I can think of a few areas that look good, such as mental health policy, increasing access to pain relief in developing countries, and international drug policy reform. However, I can't name and exalt particular orgs as I haven't narrowed down to what I think the most promising sub-causes are yet.

Suppose we had a life-improving moonshots fund. If this is going to be different from the one above, I imagine it would be looking for startups, maybe a bit like EA Ventures did. I can't think of anything relevant to suggest here apart from the startup I work on (the quality of which I can't hope to be objective about). Perhaps this fund could look at starting new charities too, rather than just funding existing ones.

I don't think not knowing who you'd give money to in advance is a reason not to pursue this further. For instance, I would consider donating to some type of moonshots fund precisely because I had no idea where the money would go and I'd like to see someone (else) try to figure it out. Once they'd made their decisions, we could build on their analysis and learn stuff.

I really like the idea of doing more to identify new potential cause areas. Vetting is really important, but I'm wary of the idea of anointing a specific EA org with sole discretion over vetting decisions. If possible, democratic vetting would be ideal (challenging though such arrangements can be).

One option is to split the EA Community Fund into a Movement/Community Building Fund (which could fund organizations that engage in outreach, support local groups, build online platforms etc.) and a Cause/Means Prioritization Fund (which could fund organizations that engage in cause prioritization, explore new causes, research careers, study the policy process etc.).

[anonymous]

This is an interesting idea. I have a few hesitations about it, however:

  1. The number of organizations which are doing cause prioritization and not also doing EA Community Building is very small (I can't think of any off the top of my head).
  2. My sense is that Nick wants to fund both community building and cause prioritization, so splitting these might place artificial constraints on what he can fund.
  3. EA Community building has received the fewest donations so far ($83,000). Splitting might make the resulting funds too small to be able to do much.

RE #1, organizations doing cause prioritization and not EA community building: Copenhagen Consensus Center, Foundational Research Institute, Animal Charity Evaluators, arguably Global Priorities Project, Open Philanthropy Project (which would obviously not be a good place to donate, but still fits the criterion).

RE #2: if the point is to do what Nick wants, it should really be a "Nick Beckstead fund", not an EA Community fund.

There are also independent EA researchers doing cause prioritization research without community building.

[anonymous]

RE #2: if the point is to do what Nick wants, it should really be a "Nick Beckstead fund", not an EA Community fund.

The fund supports whatever he thinks is best within EA community building. If he wanted to fund other things, the EA Community Fund would not be a good option.

But how is funding cause prioritization related to EA community building?

I do see some advantages to keeping the number of funds low given the amount of money currently moving through, because it increases the chance that any one particular fund will be able to support a particularly promising project that isn't appreciated by other donors.

[anonymous]

Great point.

A different option for handling this concern would be to let fund managers email EA Funds users if they have a good opportunity but lack funding.

It would be nice to see a fund dedicated to research, especially empirical research, to gather information relevant to EA objectives.

I was thinking that there could be a tie-in with Giving What We Can's My Giving. You could tick a box to make your My Giving profile public, and then have another box for people browsing to "copy this donor's distribution of donations", as some trading websites (such as eToro) offer. Although they would not, unfortunately, come with tallies of expected total utilons produced, there could be league tables of the most-copied donors, ranked by number of people copying and by amount donated following their distribution.

or we could add a long-term future fund that focuses on areas other than AI safety.

+1 differentiation. A Fund specifically for AI Safety would probably have demand - I'd donate. Other Funds for other specific GCRs could be created if there's enough demand too.

A mild consideration against: there may be funding opportunities in the long-term future area that would benefit both AI safety and the other GCRs, such as the cross-disciplinary Global Catastrophic Risks Institute, and splitting might make it harder for these to be funded.

I'm excited about the idea of new funds. As a prospective user, my preferences are:

  • Limited / well-organised choices. This is because I, like many people, get overwhelmed by too many choices. For example, perhaps I could choose between global poverty, animal welfare, and existential risks, and then choose between options within the category (e.g. "Low-Risk Global Poverty Fund" or "Food Security Research Fund").

  • Trustworthy fund managers / reasonable allocation of funds. There are many reasonable ways to vet new funds, but ultimately I'm using the service because I don't want to have to carefully vet them myself.
