
“It is difficult to get a man to understand something when his salary depends upon his not understanding it.” — Upton Sinclair

The best outcome for a charity is for the problem it aims to fix to be solved. But for all of the people involved with the charity, who gain value from their involvement (most obviously money, but also things like utils from being part of a community), an ultimate solution to the problem the charity tries to fix would not be a good thing. They would have to find a new source of money/utils. We might expect that EAs would be less likely to fall victim to this trap, but in real life most EAs have priorities other than EA, so the potential to (most likely unconsciously) avoid solutions that are too effective is still present.

This isn’t an issue which admits easy answers, but that isn’t a reason to avoid the problem entirely. There are still things we can do to try to better align incentives.

Suggestion 1: Improve social recognition of work outside of EA organisations

EA should not be about the money, and we don’t want people to leave the community when they can no longer make money out of it. To help make sure that this is the case, we should look at ways that people can be socially rewarded for doing effective work outside of EA organisations.

For example, there are many EAs who work in government, and there is general acknowledgement within the community that in many cases the opportunities available in these positions simply do not exist outside of governments. Furthermore, these opportunities generally do not depend on private EA funding to exist; the main value that the EA community can provide to people in this position is support through advice/suggestions for impact. Despite this acknowledgement, there are very few government workers who one could classify as high-status EAs.

Put another way, an employee of an EA organisation producing unsolicited advice for government about pandemic prevention would likely have greater EA status than a government employee working on pandemic prevention policy.

A similar thing could be said for EAs working on personal projects that don’t require them to be part of an organisation or receive funding at all. Organisations require administrative paperwork and overhead, and if you can do something yourself without this, then that should be the preferred option.

Suggestion 2: Develop a norm against long-term EA projects and long-term employment in EA

As EAs, we know that people who receive cash transfers (e.g. through GiveDirectly) generally do smart things with them. However, in almost all cases these are time-limited transfers. (Even if participants are told that the transfers will last for the rest of their lives, they would have good reason to doubt whether the organisation making this promise will be in a position to follow through 30 years into the future, if it exists at all.) If the cash transfers were guaranteed for a lifetime, the motivation to make smart decisions would be weaker – if I make bad decisions with this month's money, it's not that important, because I'll receive another payment next month, and every month after that.

The same analogy applies to time. Work expands to fill the time available. If I only have a limited time to do good, and I want to do as much good as I can, I will work really hard during that time to maximise the good that I can do with the limited opportunity that I have been given. If I can reasonably expect to have a long career in EA, or if my EA project can reasonably expect to indefinitely receive funding, then there is no urgency, and I can take my time and become complacent and inefficient in doing good.

This is not a good outcome for EA. A natural way to solve this problem is term limits, which are a feature of many governments and community associations.

Term limits within EA would only apply to organisations applying for EA funding and individuals applying for employment with EA organisations. Individuals’ participation in EA as a community should not be limited by time in any circumstances.

The term limits would not (and logistically could not) be formal, but would operate more on the level of community expectations – if a funder receives an application from an organisation which has continually asked for funding for the previous (e.g.) 10 years, they should consider not renewing that funding unless exceptional circumstances exist, and organisations could apply a similar process to employee recruitment.

We certainly don’t want to lose people and organisations who are performing difficult-to-replace work, which is why exceptions should always be an option. However, having irreplaceable people is an organisational weakness and should be minimised where possible (e.g. by documenting procedures and encouraging knowledge sharing). Executed properly, one would expect the number of people going past a community-accepted term limit to be minimal.

Suggestion 3: Financially reward organisations/people who stop operations when they are no longer performing effectively

If an organisation realises that it has solved the problem it was founded to solve, or that the solution that it proposed does not have a reasonable chance of solving the problem, and the organisation decides to publicise this fact among the EA community, then two things are likely to happen:

1. They will get significant praise among the EA community for their commitment to rationality and transparency.

2. They will lose all of their funding as nobody wants to fund an ineffective cause.

The first point is good and should be maintained. However, you and your employees can’t pay your rent with praise, so the second point may weigh more heavily on your mind when making decisions about requests for funding.

This second point is something that needs to be fixed in order to avoid the accumulation of organisations that have outlived their usefulness. The most obvious way that I can see to do this is to encourage organisations to announce when they no longer consider themselves to be useful, and if any organisations do this and the community agrees with the assessment, for funders to estimate the amount of money they would have given to the organisation over a reasonably long period of time and provide that amount (potentially plus a bonus for honesty) to the board/staff regardless.
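The post doesn't pin down how a funder would size such a payout. As a minimal sketch of one possible calculation – where the function name, parameters, and the 10% honesty bonus are all illustrative assumptions, not part of the proposal itself:

```python
def shutdown_payout(expected_annual_grant: float,
                    expected_years_of_future_funding: float,
                    honesty_bonus_rate: float = 0.10) -> float:
    """Estimate the funding an org would likely have received over a
    reasonable horizon, plus a bonus for honestly winding down.

    The bonus rate is a hypothetical parameter chosen for illustration.
    """
    base = expected_annual_grant * expected_years_of_future_funding
    return base * (1 + honesty_bonus_rate)

# An org that would plausibly have been funded at $200k/year for
# another 3 years would receive roughly $660k on announcing shutdown.
payout = shutdown_payout(200_000, 3)
print(round(payout, 2))
```

In practice a funder might discount future years or cap the horizon; the point is only that the payout is anchored to counterfactual funding, so declaring shutdown does not mean immediate financial ruin for the staff.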

On an individual organisation basis, this is not the most effective use of funds, but it is worth doing over the long term in order to create good incentives for the ecosystem as a whole.

Comments



Develop a norm against long-term EA projects and long-term employment in EA

That doesn't seem like a good norm to me. 

If the cash transfers were guaranteed for a lifetime, the motivation to make smart decisions is less

That's not analogous. Individuals and organisations aren't guaranteed continued employment/funding - it's conditional on performance. And given continued performance, I don't think there should be a term limit. I think that would threaten to destroy useful projects. Likewise, it would mean that experienced, valuable staff couldn't continue at their org.

Prima facie, the norm against long-term projects and employment sounds quite 'effectiveness/efficiency-decreasing' but it may just be a bias based on limited experience with this option.

Long-term projects, if that is meant as funding renewal security, are not the norm in EA. Funding is renewed periodically, based on the most competitive opportunities at any time. Any lower marginal costs of established projects' unit output are taken into account in funding new and existing ones.

Long-term paid employment security is greater than that of projects. Organizations may prefer applicants who are willing to work for the org for a considerable time. This can be because of the returns on training for that org and the relationship-development aspect of some roles.

A scheme where orgs cooperate in both skills training and relationship development can expedite innovation (skills can complement each other) and improve decisionmakers' experiences (they are trusted in resolving problems based on various insights rather than one-sidedly 'lobbied' to make specific decisions).

Non-EA orgs should also be involved, for the development of general skills that could be a suboptimal use of EA-related orgs' time to train and of relationships that can be necessary for some EA-related projects.

Individuals and organisations aren't guaranteed continued employment/funding - it's conditional on performance.

It's conditional on the appearance of performance, which is something else entirely.

For example, academics making a discovery are incentivised to release the results slowly over multiple papers, where it would clearly be much better for the community if the results were published quickly in a single paper. However, in the first case, there is more appearance of performance.

I think that would threaten to destroy useful projects. Likewise, it would mean that experienced, valuable staff couldn't continue at their org.

I think this argument would have more merit if there weren't already many organisations that do have term limits and have not been destroyed. In many countries, despite having regular performance reviews (elections), even the highest executive positions are subject to term limits.

Develop a norm against long-term EA projects and long-term employment in EA

I'm a bit confused about what you're saying. I think people in EA already switch jobs pretty often*. But often this is to other EA orgs. Are you saying that long-term employment in EA  (in the sense of working in movement EA as opposed to gov't or tech companies or something) should be discouraged? 

*which is not necessarily saying that they're switching jobs often enough.

Yeah, I mean long-term employment in movement EA as a whole, not in any particular org.

Interesting idea, but to be honest, I suspect the cure worse than the disease.

That said, it might make sense for some people who have built up career capital within EA to use this capital to pursue opportunities outside of the movement so as to open up an opportunity for an upcoming EA, but this would only make sense in particular contexts.

What are the arguments/evidence for low social recognition of work outside of EA orgs? 

Working for the government with an EA mindset should be recognized. Some other types of work outside of EA orgs are not well recognized but should be. EA-related opportunities in all non-EA-labeled orgs can always be considered alongside moving to EA-labeled orgs based on marginal value.

For example, if someone works in an area as seemingly unrelated to EA as backend coding for a food delivery app, they can see if they can make an algorithm that makes vegan food more appealing, learn anything generalizable to AI safety that they can share with decisionmakers who would have otherwise not thought of the idea, gain customers by selling hunger banquet tickets, help the company sell their environmental impact through outcompeting electric scooter delivery by purchasing the much more cost-effective Founders Pledge environmental package in bulk, add some catchy discounts for a healthy-food alternative to smoking for at-risk youth users, etc. – plus donate to different projects which can address important issues – and compare that to their estimate of the impact of switching to an EA org (e.g. full-time AI safety research or vegan food advocacy).

for funders to estimate the amount of money they would have given to the organisation over a reasonably long period of time and provide that amount (potentially plus a bonus for honesty) to the board/staff regardless.

Do you think orgs do not bring some evidence to grantmakers in order to gain funding, and this would resolve the issue? Depending on the jurisdiction, there may be laws associated with laying off employees, which include salary for several months to enable the person to find employment, or government unemployment schemes. Do you think grantmakers make decisions based on perceived employee insecurity rather than cost-effectiveness? What are the decisionmaking processes that make it so that relatively cost-ineffective projects continue to be funded? Should employees of EA-related orgs be encouraged to have several months of savings around grant renewal decision times, where the org does not provide severance and government funding is not available?

What are the arguments/evidence for low social recognition of work outside of EA orgs? 

I don't have any data. But anecdotally:
* When I think of "famous EAs", I tend to think of people who are running/working in EA orgs to the extent that it is difficult to think of people who are not.
* Going to an EAGx, I found that most people that I talked to were connected to an EA org.

Do you think orgs do not bring some evidence to grantmakers in order to gain funding and this would resolve the issue?

Yes, I would expect that orgs bring evidence to grantmakers to gain funding. However, the orgs know the evaluation process after having already participated in it, and know how to optimise their reporting, which puts the grantmakers at a disadvantage.

Depending on the jurisdiction, there may be laws associated with laying off employees, which include salary for several months to enable the person to find employment or government unemployment schemes

There are generally options in these systems to be engaged on fixed-term contracts.

Do you think grantmakers make decisions based on perceived employee insecurity rather than cost-effectiveness?

Not necessarily grantmakers, but quite potentially in the case of individuals within organisations. Ethically, I shouldn't care about my friends' wellbeing more than that of strangers on the other side of the world, but there's not really a switch in my head that I can turn off to make me behave in this manner. Also, with EA being a community, there are social repercussions that can come from making a decision to cut funding to $liked_person that do not come from cutting funding to $bednet_distribution_zone_92.
 

What are the decisionmaking processes that make it so that relatively cost-ineffective projects continue to be funded?

This could probably be another post, and I'd have to do more research to give a complete response. For this post, the main concern was that grantmakers have inaccurate information because people are not incentivised to give it to them. The culture of tying prestige to receiving a grant (the larger the grant, the greater the prestige) pushes the incentives further in the wrong direction.

Should employees of EA-related orgs be encouraged to have several months of savings around grant renewal decision times, where the org does not provide severance and government funding is not available?

Yes, everyone should be encouraged to do this if they have the means to do so regardless of whether they are an EA or not.

How does the idea of stopping funding for ineffective goals fit:

  • longtermist goals that require actions whose effectiveness cannot be judged in the short-term. You never know if actions toward the goal are effective.
  • goals to prevent existential risk or suffering risk when the absence of the unwanted outcome could have many reasons other than an organization's actions to prevent the outcome.
  • goals that are high value but whose probability of success is low (assuming decisions to support causes use Expected Value), and whose alternative and more likely outcomes have low value. You wouldn't stop funding until the outcome is known, too late to save any resources spent toward the goal.

There are certainly good things you can do where you can't measure the outcomes to work out how effective they are. As a prior, I would say that the fact that an intervention is non-measurable should count against it. If non-measurable effects are regularly accepted, then you will see a lot of organisations claiming non-measurable benefits, and there will be no way to reasonably evaluate which ones are providing legitimate value and which ones aren't.

In addition, even if you don't know if your actions will be effective, you should be able to finish doing the actions at some point.

Thanks for your reply. Yes, nonmeasurable effects allow people to claim effects and then get taken at their word, or not. However, measurable effects are easy to fake, what's the saying, something about "lies, damned lies, and statistics"?

Can you imagine plausible scenarios where if all your suggestions were put into practice, the same problems that you aim to avoid still occur?
