
This essay was submitted to Open Philanthropy's Cause Exploration Prizes contest.


 

Summary: 

  • The total amount of altruistic contribution is large, but only a small share of it is effective. This means there is huge potential in improving the donations and other altruistic behaviors of people outside the EA movement.
  • The current EA outreach strategy of persuading people to do as much good as possible may fail to engage many of them, for reasons that prevent higher-impact donations.
  • It can be cost-effective to invest some resources in making the altruistic decision-making of a large group of people a little better.
  • There should be more academic research to determine which EA outreach strategies are most effective at changing altruistic behavior.
  • Investing in the general public’s altruistic decision-making does not mean turning the EA movement into a “big tent” movement.

Disclaimer: As this essay was written for the Open Philanthropy contest, some of the analysis is aimed specifically at Open Philanthropy. However, I think it is also valuable for other actors with similar characteristics.

What is the problem? 

Total US donations reached $484.85 billion in one year,[1] and British people are estimated to donate £11.3 billion a year.[2] Money donated through EA accounts for only about $420 million, a small fraction of US/UK donations.[3] This means a large share of donations does not follow EA recommendations. There are several reasons for this inefficiency, and the EA community's current approach can only solve some of them.

  • Behavioral biases and a lack of information/awareness make choices suboptimal. Examples include over-diversifying donations and unnecessary risk aversion. Lack of information about cause areas or charities' effectiveness leads people to donate to charities that are more familiar to them and tend to be less effective.
  • Donors can have different moral concerns that are not impartial. They can feel moral obligations toward specific groups, for example prioritizing people within their own country or people currently alive.
  • People donate because of individual incentives other than altruism – e.g. looking good by signaling that they care about certain causes or have empathy. Many prefer cause areas that feel relatable: causes that are more familiar, or that they have a stronger emotional connection to.

The EA community's current approach is to promote donations to the most effective causes, or to recommend careers that solve the most important problems. This approach helps people who are already morally aligned with the movement, or who have the personal fit, to improve their altruistic effectiveness significantly. However, the strategy is likely to set a high barrier for people outside the movement, such as requirements of impartiality, cause neutrality, and pure altruism. Even with a softer approach that brings people into EA gradually, misconceptions can draw people in at the start – because they think EA is just about donating more efficiently – only for them to lose enthusiasm once they learn more about it.[4] These high barriers cannot address the causes of inefficiency listed above, because:

First, even people in the EA community who follow EA's main principles – and many do – still struggle to contribute. This means it is even harder for ordinary people to do what EA asks of them, even if they believe in EA principles:

  • We already see a lot of criticism of “elitism” within EA: many people feel they are not good enough to work directly on the cause areas EA recommends, such as AI-risk research, and that their only option is earning to give. This may make average people feel that EA is only for the extremely talented and that they are not good enough to be part of the movement.[5][6] Yet these people could perfectly well work on less important causes, and doing so may still be the most efficient option for them because of personal fit.
  • Information asymmetry and lack of trust: people know more about cause areas familiar to them, and may know nothing at all about important causes like AI risk. Even if they hear about such a cause later, it is difficult for them to verify that it is important when they lack trust in, connections to, or knowledge about it. Any group can preach its causes as the most important, so it is difficult for people to trust that what EA tells them is more true than what others tell them.

Second, many people do not have values aligned with EA – they don’t follow impartiality, cause neutrality, or pure altruism:

  • People don’t feel they have an obligation to give; they just want to give because it makes them feel good. If people have no obligation (and some EA definitions make no claim about what obligations a person has – as a co-founder of OP puts it, “We’re excited by the idea of making the most of our resources and helping others as much as possible.”[7]), they can help others in whatever way they enjoy most. That is likely the case for many people who get more enjoyment from donating to local causes than to charities in Africa, or who want to feel good about themselves for contributing to a cause (the warm-glow effect[8]).[9] It is then difficult to accept that there is no obligation to help while still preaching that people should donate to the most effective cause.
  • People may have other moral obligations that they feel they need to follow. If these moral obligations truly exist, pushing people away from them could create moral harm. I will not focus on this topic here and will leave the issue to the moral philosophers. The point is that if people strongly believe in these obligations, EA promotion of certain worldviews may not change their minds.

Some of the issues with EA outreach listed above resemble recent discussions about whether the EA movement should represent more people (a “big tent” movement) instead of keeping a more restrictive definition. Others think the EA movement should stay selective. I partially agree with those who think EA should keep some level of selectiveness and requirements, and that inviting a lot of people to join, big-tent style, is likely to make EA compromise some important values.

The cause area and the solutions listed in the next section are not affected by the question of whether EA should become a “big tent” movement. Persuading people to be more effective in their altruistic behavior can be independent of persuading them to join the EA movement. This means we don’t have to compromise EA values if that concern is significant, and we do not have to consider people who follow our advice to be part of the EA movement. I believe some investment in improving people’s altruistic decision-making would not change the public perception of EA; for example, people can improve their donations after reading information from an EA charity evaluator without thinking much about that evaluator’s relationship with EA.

While we do not (and cannot) change these people to align completely with EA, the EA movement still misses a huge opportunity by not trying to have some influence on the general public, even without including them in the movement.

While the public may push back against some EA ideas, that is changeable, and people can still agree to donate more effectively or to choose higher-impact careers within the bounds of what they find acceptable. With more information, people still prefer more effective charities that they trust:

  • Moral views are diverse, and each individual is open to different levels of moral persuasion. Some may be open to cause neutrality within their local community but not care about what happens far away; others may be open to putting more moral weight on human existence in the far future but not on animals. Personalized moral persuasion may therefore be useful – we can choose to communicate only the moral views people are willing to accept.
  • With more information on charities' effectiveness, people are more willing to support more effective charities within the causes they care about, or to choose higher-impact careers within the fields they like.

Who is working on the problem?

Within the EA community, SoGive is the only charity evaluator whose goal is to analyze a broad range of charities and causes, including ones that are not EA-aligned. Within animal welfare, Animal Charity Evaluators has preliminary evaluations of a broad number of charities. Probably Good provides career guidance for a broader set of people.[10]

Outside EA, there are several other charity assessment organizations, such as Charity Navigator; however, their main criteria for assessing charities are not effectiveness, which they have only recently started to include.

Some EA communities (especially in developing countries), such as EA Philippines or EA Israel, spend resources on local causes that may not be the most important ones, because people in their regions prefer local causes.

On the research side, some EA-related researchers study donation effectiveness and behavior, such as Stefan Schubert (see his talk “Why aren’t people donating more effectively?”[11]). GPI has also included research on donation behavior in its research agenda. Beyond EA, the economics literature on altruistic behavior has been developing for a while. While this research is not directly about EA outreach, it still helps us understand donation behavior better.

Possible interventions and cost evaluation

Research

OP can fund research related to altruistic decision-making by opening new competitions, or it can support Effective Thesis to focus more on this cause area, e.g. by offering prizes for people who work on it.

Research related to donors’ preferences may not require high expertise. Compared to research in EA high-priority causes such as poverty reduction or AI safety, which usually requires a PhD or fieldwork, outreach-strategy research can be done by much less specialized students, as the information and research environment are much more accessible. For example, research on which causes donors want to donate to and why, and under which circumstances they change their preferences, can be done by bachelor's students in marketing, communications, or economics.

The cost of this intervention is limited to the size of the prizes/funding that OP is willing to give students – which is very low given that the current Effective Thesis prize is only $1,000.[12]

For more theoretical research (e.g. at GPI), OP can spend some resources double-checking whether the lower-level research is credible.

Funding new charity evaluators/career advice services or supporting current ones

As there are quite a limited number of charity evaluators working on a broad range of charities, OP can fund new charity evaluators/career advice services through Charity Entrepreneurship, or help current organizations expand their services:

  • OP can provide financial support for current EA evaluators to expand their analysis to more cause areas, especially ones that are better known to the general population. Evaluators should provide comparisons within each sector and allow readers to apply their own assumptions when weighing different values/outcomes (e.g. different moral weights); a minimal sketch of what this could look like follows this list.
  • Career support could cover more traditional jobs that are less effective than working at top EA organizations but are more accessible, as Probably Good is doing.
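
To make the point about reader-adjustable moral weights concrete, here is a minimal sketch of how such a comparison could work. The charity names, outcome categories, and all numbers are hypothetical placeholders, not real evaluations:

```python
# A minimal sketch (not an existing tool) of how an evaluator could let readers
# apply their own moral weights when comparing charities within a sector.
# All charity names, outcome categories, and numbers are hypothetical placeholders.

from typing import Dict

# Estimated outcomes per $1,000 donated (hypothetical figures).
charities: Dict[str, Dict[str, float]] = {
    "Charity A (global health)":  {"human_wellbeing": 5.0, "animal_welfare": 0.0},
    "Charity B (animal shelter)": {"human_wellbeing": 0.5, "animal_welfare": 8.0},
}

def weighted_score(outcomes: Dict[str, float], weights: Dict[str, float]) -> float:
    """Weighted sum of a charity's outcomes under the reader's own moral weights."""
    return sum(weights.get(outcome, 0.0) * value for outcome, value in outcomes.items())

# Each reader supplies their own weights instead of the evaluator imposing one set.
reader_weights = {"human_wellbeing": 1.0, "animal_welfare": 0.3}

for name, outcomes in charities.items():
    print(f"{name}: {weighted_score(outcomes, reader_weights):.2f} weighted units per $1,000")
```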

The cost of this intervention is likely to be limited to setting up the new organizations. Both of these suggestions require fewer highly talented people, because the charity causes/careers involved are more common and easier for people with average talent to work on, so it is easier to gather information about the relevant career paths. It also allows more people from different fields to contribute information.

Once these organizations are set up or expanded successfully, they need not crowd out human resources from other EA projects, because they can draw on people outside the top-talent pool who currently have no chance to contribute directly to EA causes.

As these kinds of organizations also align with other worldviews, they can get funding from non-EA sources. In the best case, they can become financially self-sustaining and stop drawing on financial resources within the EA movement.

OP can also use its influence to advocate to, or negotiate with, well-known charity evaluators that are not part of the EA movement, encouraging them to promote effectiveness as a main evaluation criterion and supporting them in assessing charities' effectiveness. The softer approach described here is more likely to be accepted by these evaluators than the current EA approach: some have already criticized the EA movement[13] yet still agreed to make effectiveness one of their criteria.

Advocacy

OP can support media outlets like Vox's Future Perfect to address more causes that are relevant to ordinary people (which I think they already do to a certain extent).

We can also support people to go into the public sector and advocate for changes in charity regulation, e.g. requiring public information about effectiveness and external audits of the effectiveness of charities' activities.

Benefit evaluations of interventions

Direct benefits

The increase in impact may be significantly lower than moving donations all the way to top charities, or only a bit lower, depending on how much more effective one charity is than another and on how much people change their behavior. The general belief in EA is that the difference in effectiveness between causes can be up to 1,000x, while other estimates are lower, at most 10-100x.[14] If the lower estimates are right, then the benefit of persuading people to move to the most effective causes and the benefit of persuading them to donate to “more acceptable” causes do not differ greatly, and the difference is unlikely to outweigh the benefit of getting many more people to change their behavior under the latter approach.
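
To illustrate with made-up numbers (not estimates): if lighter-touch persuasion reaches many more people, a 10x improvement spread across many donors can match or exceed a 100x improvement achieved for only a few. A minimal sketch:

```python
# Illustrative comparison of the two outreach strategies.
# All numbers are made up for illustration, not estimates.

baseline_multiplier = 1      # effectiveness of a typical current donation
top_multiplier = 100         # donation moved all the way to a top EA charity
acceptable_multiplier = 10   # donation moved to a "more acceptable" effective charity

donation_per_person = 1_000  # USD per year

# Strategy A: full EA persuasion reaches few people.
people_reached_a = 100
impact_a = people_reached_a * donation_per_person * (top_multiplier - baseline_multiplier)

# Strategy B: lighter-touch persuasion reaches many more people.
people_reached_b = 2_000
impact_b = people_reached_b * donation_per_person * (acceptable_multiplier - baseline_multiplier)

print(f"Strategy A (few people, top charities):         {impact_a:,.0f} impact units")
print(f"Strategy B (many people, acceptable charities): {impact_b:,.0f} impact units")
```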

Suppose investing in this cause moves donations from local causes to a charity like Oxfam, whose work is estimated to benefit people in the bottom 25% of the income distribution in a developing country (e.g. Vietnam), with an annual income of around $1,000. Then, to meet OP's bar, every $50 invested in the cause would need to move about $10,000 of donations, which I think is a reasonable target for investment in building up new charity evaluators.[15]
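
A minimal back-of-the-envelope version of this estimate is sketched below. The assumption that roughly 10% of each redirected dollar ends up as extra recipient income is an illustrative placeholder, used only to show how the numbers could fit together:

```python
# Back-of-the-envelope check of the estimate above (illustrative numbers only).
# OP's bar (footnote 15): $50 should buy roughly the equivalent of raising
# 100 people's incomes by 1% for a year.
# The 10% "donation -> recipient income" conversion rate below is an
# illustrative assumption, not an empirical figure.

recipient_income = 1_000   # annual income of recipients (USD)
donations_moved = 10_000   # donations redirected per $50 spent on the intervention
income_conversion = 0.10   # assumed share of redirected donations that becomes recipient income

income_gain = donations_moved * income_conversion               # $1,000 of extra income
percent_point_years = income_gain / (0.01 * recipient_income)   # 1% of income = $10 per person-year

bar = 100  # income-percent-years needed per $50 under OP's criterion
print(f"{percent_point_years:.0f} income-percent-years per $50 (bar: {bar})")
```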

Building up the literature (research intervention)

By supporting entry-level research, we help build up the literature base for the field and spread awareness of the topic within academia. People are more willing to do research when they can find past work that helps develop their ideas, when the topic feels less risky, or simply when they can find enough sources to cite.

Worldview Diversification 

By increasing efficiency across a broad range of causes, including ones that do not traditionally belong to EA, we contribute to many more causes and worldviews than by investing directly in a few selected causes.

Increasing efficiency within each cause, rather than focusing on a few of the most promising causes, can fit well with Open Philanthropy, as the organization supports worldview diversification.[16]

Spreading norms of effective giving and increasing the efficiency of the donation market

Making people give more effectively, through persuasion or by making comparisons easier, can create secondary impacts by making them more open to EA principles: as people start to weigh different charities and metrics, they are more likely to question their weighing principles, to shift their Overton window toward less familiar charities as effectiveness becomes more of a main concern, or to reach the minimum viable beliefs/behaviors[17] that allow them to align with EA later.

More people caring about effectiveness would also reduce the inefficiency of the altruistic market. In the status quo, charities can focus heavily on emotional appeals or on metrics that donors commonly value (e.g. low overhead), which reduces the incentive for charities to focus on effectiveness, and the most effective charities may struggle to attract donors. If people care more about effectiveness, charities face stronger incentives to be effective. More effective charities are also likely to grow as they receive more funding from effectiveness-minded donors, which makes them better known and more likely to receive donations even from people who don’t care about effectiveness but simply want to give to famous organizations.

Comparing effectiveness with other cause areas

As it is difficult to translate the benefit of outreach into social returns, I compare investing in this cause area with existing EA community-building projects funded by OP, or with cause areas OP is interested in, assuming that this existing funding already satisfies OP's 1,000x social-return requirement.

Comparison with advocacy for US foreign aid[18]: both cause areas involve moving money from the general population to better causes. While the group targeted by foreign-aid advocacy is smaller, its members are much higher profile, so the work requires people with high influence and is therefore likely to add to the talent constraint the EA movement currently faces.

Comparison with OP's past grants to debating competitions: OP has made two grants to the debate community – a community that shares many similarities with EA ideas but is not fully aligned with EA. These grants work in a similar way to investing in this cause area, as they help spread EA ideas to a broad community with many different viewpoints.

Uncertainties / option value

Estimating the benefit quantitatively is very difficult due to the current lack of data and research. I hope there will be small-scale research comparing, for example, the cost of persuading a person to join EA with the cost of persuading a person to move their donations to a more effective organization without joining the EA movement.

The common argument against engaging the general public is resource constraints. I agree that, because of resource constraints, we should not spend a large amount of resources on this problem, but I still think we should spend some, given the currently high marginal impact.

In this essay, I have listed some reasons, admittedly speculative, in support of this cause. If it seems too uncertain to invest in, there is still large option value: it can be beneficial to spend some resources researching or testing the idea. We may lose a bit of money if it fails, but gain a lot if the idea works and we can then invest more in the cause.
 

 

  1. ^
  2. ^
  3. ^
  4. ^
  5. ^
  6. ^
  7. ^
  8. ^
  9. ^
  10. ^
  11. ^
  12. ^
  13. ^
  14. ^
  15. ^ Rough estimate calculated from using $50 to increase 100 people’s income by 1%, based on OP criteria: https://www.openphilanthropy.org/research/technical-updates-to-our-global-health-and-wellbeing-cause-prioritization-framework/

  16. ^
  17. ^
  18. ^
