
"It is difficult to get a man to understand something when his salary depends upon his not understanding it.”

The best outcome for a charity is for the problem it aims to fix to be solved. But for all of the people involved with the charity who gain value from their involvement (most obviously money, but also things like utils from being part of a community), an ultimate solution to that problem would not be a good thing: they would have to find a new source of money/utils. We might expect EAs to be less likely to fall victim to this trap, but in real life most EAs have priorities other than EA, so the potential to (most likely unconsciously) avoid solutions that are too effective is still present.

This isn’t an issue that admits easy answers, but that isn’t a reason to avoid the problem entirely. There are still things we can do to better align incentives.

Suggestion 1: Improve social recognition of work outside of EA organisations

EA should not be about the money, and we don’t want people to leave the community when they can no longer make money out of it. To help make sure that this is the case, we should look at ways that people can be socially rewarded for doing effective work outside of EA organisations.

For example, there are many EAs who work in government, and there is general acknowledgement within the community that in many cases the opportunities available in these positions simply do not exist outside of government. Furthermore, these opportunities generally do not depend on private EA funding to exist; the main value the EA community can provide to people in these positions is support through advice and suggestions for impact. Despite this acknowledgement, there are very few government workers whom one could classify as high-status EAs.

Put another way, an employee of an EA organisation producing unsolicited advice for government about pandemic prevention would likely have greater EA status than a government employee working on pandemic prevention policy.

A similar thing could be said for EAs working on personal projects that don’t require them to be part of an organisation or receive funding at all. Organisations require administrative paperwork and overhead, and if you can do something yourself without this, then that should be the preferred option.

Suggestion 2: Develop a norm against long-term EA projects and long-term employment in EA

As EAs, we know that people who receive cash transfers (e.g. through GiveDirectly) generally do smart things with them. However, in almost all cases these transfers are time-limited. (Even if participants are told that the transfers will last for the rest of their lives, they have good reason to doubt whether the organisation making that promise will be in a position to follow through 30 years into the future, if it exists at all.) If the cash transfers were guaranteed for a lifetime, the motivation to make smart decisions would be weaker – if I make bad decisions with this month’s money, it’s not that important, because I’ll get another transfer next month, and every month after that.

The same analogy applies to time. Work expands to fill the time available. If I only have a limited time to do good, and I want to do as much good as I can, I will work really hard during that time to maximise the good that I can do with the limited opportunity that I have been given. If I can reasonably expect to have a long career in EA, or if my EA project can reasonably expect to indefinitely receive funding, then there is no urgency, and I can take my time and become complacent and inefficient in doing good.

This is not a good outcome for EA. A natural way to address this problem is term limits, which are a feature of many government bodies and community associations.

Term limits within EA would only apply to organisations applying for EA funding and individuals applying for employment with EA organisations. Individuals’ participation in EA as a community should not be limited by time in any circumstances.

The term limits would not (and logistically could not) be formal; they would operate at the level of community expectations – if a funder receives an application from an organisation that has continually asked for funding for the previous (e.g.) 10 years, the funder should consider declining to renew funding unless exceptional circumstances exist, and organisations could apply a similar process to employee recruitment.

We certainly don’t want to lose people and organisations performing difficult-to-replace work, which is why exceptions should always be an option. However, having irreplaceable people is an organisational weakness and should be minimised where possible (e.g. by documenting procedures and encouraging knowledge sharing). Executed properly, one would expect the number of people exceeding a community-accepted term limit to be minimal.

Suggestion 3: Financially reward organisations/people who stop operations when they are no longer performing effectively

If an organisation realises that it has solved the problem it was founded to solve, or that its proposed solution does not have a reasonable chance of solving that problem, and it decides to publicise this fact within the EA community, then two things are likely to happen:

1. They will get significant praise among the EA community for their commitment to rationality and transparency.

2. They will lose all of their funding as nobody wants to fund an ineffective cause.

The first point is good and should be maintained. However, you and your employees can’t pay rent with praise, so the second point may weigh more heavily on your mind when making decisions about requests for funding.

This second point needs to be fixed in order to avoid the accumulation of organisations that have outlived their usefulness. The most obvious way I can see to do this is to encourage organisations to announce when they no longer consider themselves useful, and, if an organisation does so and the community agrees with the assessment, for funders to estimate the amount of money they would have given to the organisation over a reasonably long period and provide that amount (potentially plus a bonus for honesty) to the board/staff regardless. A sketch of how such a payout might be sized follows below.
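
To make this concrete, here is a minimal sketch of how a funder might size such a wind-down payout. Everything in it is a hypothetical illustration – the function, the renewal probability, the horizon and the bonus are assumptions of mine, not figures from any actual funder:

```python
# Hypothetical sketch of sizing a wind-down grant for an organisation
# that honestly announces it is no longer effective. All parameters
# are illustrative assumptions, not recommended values.

def wind_down_grant(annual_grant: float,
                    renewal_prob: float = 0.8,  # assumed chance each further year would have been funded
                    horizon_years: int = 5,     # the "reasonably long period", chosen arbitrarily here
                    honesty_bonus: float = 0.1  # bonus fraction for transparency
                    ) -> float:
    """Expected value of the grants the org would likely have received,
    plus a bonus for announcing its own ineffectiveness."""
    expected_grants = sum(annual_grant * renewal_prob ** year
                          for year in range(1, horizon_years + 1))
    return expected_grants * (1 + honesty_bonus)

# Example: an org receiving $200k/year declares itself no longer useful.
print(f"${wind_down_grant(200_000):,.0f}")  # -> $591,642 under these assumptions
```

The particular formula matters less than the payout being predictable in advance: an organisation weighing an honest shutdown announcement against another funding round can see that honesty does not mean immediate ruin.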

On an individual-organisation basis, this is not the most effective use of funds, but it is worth doing over the long term in order to create good incentives for the ecosystem as a whole.

Comments

Develop a norm against long-term EA projects and long-term employment in EA

That doesn't seem like a good norm to me. 

If the cash transfers were guaranteed for a lifetime, the motivation to make smart decisions is less

That's not analogous. Individuals and organisations aren't guaranteed continued employment/funding - it's conditional on performance. And given continued performance, I don't think there should be a term limit. I think that would threaten to destroy useful projects. Likewise, it would mean that experienced, valuable staff couldn't continue at their org.

Prima facie, the norm against long-term projects and employment sounds quite effectiveness/efficiency-decreasing, but that impression may just be a bias based on limited experience with this option.

Long-term projects, if that means funding-renewal security, are not the norm in EA. Funding is renewed periodically, based on the most competitive opportunities at any given time. Any lower marginal cost per unit of output from established projects is taken into account when funding new and existing ones.

Long-term paid employment security is greater than that of projects. Organizations may prefer applicants who are willing to work for the org for a considerable time. This can be because of the returns on training for that org and the relationship-development aspect of some roles.

A scheme where orgs cooperate on both skills training and relationship development can expedite innovation (skills can complement each other) and improve decisionmakers' experiences (they are trusted to resolve problems based on various insights rather than one-sidedly 'lobbied' to make specific decisions).

Non-EA orgs should also be involved, both for the development of general skills that could be a suboptimal use of EA-related orgs' time to train, and for the development of relationships that can be necessary for some EA-related projects.

Individuals and organisations aren't guaranteed continued employment/funding - it's conditional on performance.

It's conditional on the appearance of performance, which is something else entirely.

For example, academics who make a discovery are incentivised to slowly release the results over multiple papers, where it would clearly be much better for the community if the results were published quickly in a single paper. However, the first approach gives more appearance of performance.

I think that would threaten to destroy useful projects. Likewise, it would mean that experienced, valuable staff couldn't continue at their org.

I think this argument would have more merit if there weren't already many organisations that do have term limits and have not been destroyed. In many countries, despite having regular performance reviews (elections), even the highest executive positions are subject to term limits.

Develop a norm against long-term EA projects and long-term employment in EA

I'm a bit confused about what you're saying. I think people in EA already switch jobs pretty often*. But often this is to other EA orgs. Are you saying that long-term employment in EA (in the sense of working in movement EA, as opposed to gov't or tech companies or something) should be discouraged?

*which is not necessarily saying that they're switching jobs often enough.

Yeah, I mean long-term employment in movement EA as a whole, not in any particular org.

Interesting idea, but to be honest, I suspect the cure is worse than the disease.

That said, it might make sense for some people who have built up career capital within EA to use this capital to pursue opportunities outside of the movement so as to open up an opportunity for an upcoming EA, but this would only make sense in particular contexts.

What are the arguments/evidence for low social recognition of work outside of EA orgs? 

Working for the government with an EA mindset should be recognized. Some other types of work outside of EA orgs are not well recognized but should be. EA-related opportunities in all non-EA-labeled orgs can always be considered alongside moving to EA-labeled orgs, based on marginal value.

For example, someone who works in an area as seemingly unrelated to EA as backend coding for a food delivery app could: see if they can write an algorithm that makes vegan food more appealing; learn something generalizable to AI safety that they can share with decisionmakers who would otherwise not have thought of the idea; gain customers by selling hunger banquet tickets; help the company sell its environmental impact by outcompeting electric scooter delivery through purchasing the much more cost-effective Founders Pledge environmental package in bulk; or add some catchy discounts on healthy-food alternatives to smoking for at-risk youth users. They can also donate to different projects that address important issues, and compare all of that to their estimate of the impact of switching to an EA org (e.g. full-time AI safety research or vegan food advocacy).

for funders to estimate the amount of money they would have given to the organisation over a reasonably long period of time and provide that amount (potentially plus a bonus for honesty) to the board/staff regardless.

Do you think orgs do not bring some evidence to grantmakers in order to gain funding, and that this would resolve the issue? Depending on the jurisdiction, there may be laws associated with laying off employees, which include salary for several months to enable the person to find employment, or government unemployment schemes. Do you think grantmakers make decisions based on perceived employee insecurity rather than cost-effectiveness? What are the decisionmaking processes that make it so that relatively cost-ineffective projects continue to be funded? Should employees of EA-related orgs – where the org does not provide such funding and government funding is not available – be encouraged to have several months of savings around grant renewal decision times?

What are the arguments/evidence for low social recognition of work outside of EA orgs? 

I don't have any data. But anecdotally:
* When I think of "famous EAs", I tend to think of people who are running/working in EA orgs to the extent that it is difficult to think of people who are not.
* Going to an EAGx, I found that most people that I talked to were connected to an EA org.

Do you think orgs do not bring some evidence to grantmakers in order to gain funding, and that this would resolve the issue?

Yes, I would expect that orgs bring evidence to grantmakers to gain funding. However, the orgs know the evaluation process after having already participated in it, and know how to optimise their reporting, which puts the grantmakers at a disadvantage.

Depending on the jurisdiction, there may be laws associated with laying off employees, which include salary for several months to enable the person to find employment, or government unemployment schemes

There are generally options in these systems to be engaged on fixed-term contracts.

Do you think grantmakers make decisions based on perceived employee insecurity rather than cost-effectiveness?

Not necessarily grantmakers, but quite possibly individuals within organisations. Ethically, I shouldn't care about my friends' wellbeing more than that of strangers on the other side of the world, but there's not really a switch in my head that I can flip to make myself behave this way. Also, with EA being a community, there are social repercussions that can come from making a decision to cut funding to $liked_person that do not come from cutting funding to $bednet_distribution_zone_92.

What are the decisionmaking processes that make it so that relatively cost-ineffective projects continue to be funded?

This could probably be another post, and I'd have to do more research to give a complete response. For this post, the main concern was that grantmakers have inaccurate information because people are not incentivised to give it to them. The culture of tying prestige to receiving a grant (with larger grants conferring greater prestige) pushes the incentives further in the wrong direction.

Should employees of EA-related orgs – where the org does not provide such funding and government funding is not available – be encouraged to have several months of savings around grant renewal decision times?

Yes, everyone should be encouraged to do this if they have the means to do so, regardless of whether they are an EA or not.

How does the idea of stopping funding for ineffective goals fit with:

  • longtermist goals that require actions whose effectiveness cannot be judged in the short term. You never know if actions toward the goal are effective.
  • goals to prevent existential risk or suffering risk, where the absence of the unwanted outcome could have many causes other than an organization's actions to prevent it.
  • goals that are high value but whose probability of success is low (assuming decisions to support causes use Expected Value), and whose alternative, more likely outcomes have low value. You wouldn't stop funding until the outcome is known, too late to save any resources spent toward the goal. (A numerical sketch follows below.)
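
A minimal numerical sketch of that third case, with entirely made-up figures (an illustration of the expected-value point, not data from any real project):

```python
# Illustrative expected-value comparison; all numbers are invented.
p_success = 0.01            # low probability the long-shot goal is achieved
value_success = 1_000_000   # high value if it is achieved
value_failure = 0           # the more likely outcome has low value

ev_longshot = p_success * value_success + (1 - p_success) * value_failure
ev_modest = 1.0 * 5_000     # a near-certain but modest alternative

print(ev_longshot, ev_modest)  # 10000.0 5000.0
# The long shot has twice the expected value, yet in 99% of worlds it
# produces nothing observable, so a rule that defunds projects with no
# visible results cannot distinguish it from a genuinely ineffective
# project until the outcome is known.
```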

There are certainly good things you can do where you can't measure the outcomes to work out how effective they are. As a prior, I would say that the fact that an intervention is non-measurable should count against it. If non-measurable effects are regularly accepted, then you will see a lot of organisations claiming non-measurable benefits, and there will be no way to reasonably evaluate which ones are providing legitimate value and which ones aren't.

In addition, even if you don't know if your actions will be effective, you should be able to finish doing the actions at some point.

Thanks for your reply. Yes, non-measurable effects allow people to claim effects and then be taken at their word, or not. However, measurable effects are easy to fake – what's the saying, something about "lies, damned lies, and statistics"?

Can you imagine plausible scenarios where, even if all your suggestions were put into practice, the same problems you aim to avoid would still occur?
