
In "EA and the current funding situation," Will MacAskill tried to enumerate the "risks of commission" that large amounts of EA funding exposed the community to (i.e., ways that extra funding could actually harm EA's impact). "Free-spending EA might be a big problem for optics and epistemics" raised similar concerns.

The risks described in these posts largely involve either money looking bad to outsiders, or money causing well-intentioned people to think poorly despite their best efforts. I think this misses what I'd guess is the biggest risk: that large amounts of funding will attract people who aren't making an effort at all, because they don't share EA values and instead see the movement as a source of easy money and a target for grift.

Naively, you might think that it's not that much of a problem if (say) 50% of EA funding is eaten by grift—that's only a factor of 2 decrease in effectiveness, which isn't that big in a world of power-law distributions. But in reality, grifters are incentivized to accumulate power and sabotage the movement's overall ability to process information, and many non-grifters find participating in high-grift environments unpleasant and leave. So the stable equilibrium (absent countermeasures) is closer to 100% grift.
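
To make the dynamic concrete, here's a minimal toy simulation (a sketch; every parameter is invented for illustration, not estimated from anything). It assumes grifters grow their influence somewhat faster than non-grifters, and that non-grifters leave at a rate that rises with how grifty the environment already is; under those assumptions, the grifter share of influence climbs toward 100% even from a small starting fraction.

```python
# Toy model (illustrative only; all parameters are made up, not empirical).
# Grifters grow their influence faster than non-grifters each "year"
# (they optimize for influence directly), and non-grifters leave at a
# rate that increases as the environment gets griftier.

def grift_share_over_time(
    years=30,
    initial_grift=0.05,   # grifters start with 5% of total influence
    grifter_growth=1.3,   # grifters' influence multiplier per year
    aligned_growth=1.1,   # non-grifters' influence multiplier per year
    max_attrition=0.2,    # max fraction of non-grifters leaving per year
):
    grift, aligned = initial_grift, 1 - initial_grift
    shares = []
    for _ in range(years):
        grift *= grifter_growth
        # Non-grifters grow more slowly, and more of them leave as the
        # grifter share of influence rises.
        attrition = max_attrition * grift / (grift + aligned)
        aligned *= aligned_growth * (1 - attrition)
        shares.append(grift / (grift + aligned))
    return shares

shares = grift_share_over_time()
for year in (1, 5, 10, 20, 30):
    print(f"year {year:2d}: grifter share of influence ~ {shares[year - 1]:.0%}")
```

None of the specific numbers matter; the point is just that a modest per-year advantage for grifters, plus attrition of non-grifters, compounds into near-total capture absent countermeasures.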

The basic mental model

This is something I've thought about, and talked to people about, a fair amount, because an analogous grift problem exists in successful organizations, and I would like to help the one I work at avoid this fate. In addition to those conversations, a lot of what I go over here is based on the book Moral Mazes, and I'd recommend reading it (or Zvi Mowshowitz's review/elaboration, which IMO is hyperbolic but directionally correct) for more detail.

At some point in their growth, most large organizations become extremely ineffective at achieving their goals. If you look for the root cause of individual instances of inefficiency and sclerosis in these orgs, it's very frequently that some manager, or group of managers, was "misaligned" with the overall organization, in that they were trying to do what was best for themselves rather than for the org as a whole, and in fact were often actively sabotaging the org to improve their own prospects.

The stable equilibrium for these orgs is to be composed almost entirely of misaligned managers, because:

  • Well-aligned managers prioritize the org's values over their own ascent up the hierarchy (by definition), so they will be out-promoted by misaligned managers who prioritize their own ascent above all else.
  • Misaligned managers will attempt to sabotage and oust well-aligned managers, because well-aligned managers' values are harder to predict, so they're more likely to do surprising or dangerous things.
  • Most managers get most of their information from their direct reports, who can sabotage info flows if it would make them look bad. So even if a well-aligned manager has the power to oust a misaligned direct report (for example), they may not realize there's a problem.

For example, a friend described a group inside a name-brand company he worked at that was considered by almost every individual contributor to be extremely incompetent and impossible to collaborate with, largely as a result of poor leadership by the manager. The problem was so bad that when the manager was up for promotion, a number of senior people from outside the group signed a memo to the decision-maker saying that approving the promotion would be a disaster for the company. The manager's promotion was denied that cycle, but approved in the next promotion cycle. In this case, despite the warning sign of strong opposition from people elsewhere in the company, the promotion decision-maker was fed enough bad information by the manager and their allies that he made the wrong call.

Smaller organizations can escape this for a while because information flow is simpler and harder for misaligned managers to sabotage, and because the organization doesn't have enough resources (money or people) to be a juicy target. But as they get larger and better-resourced, they tend to fall into the trap eventually.

The EA movement isn't exactly like a corporation, but I think analogous reasoning applies. Grifters are optimizing only to get themselves money and power; EAs are optimizing for improving the world. So, absent countermeasures, grifters will be better at getting money and power. Grifters prefer working with other grifters who are less likely to expose their grift. And grifters will be incentivized to control and sabotage the flow of information in EA, which will make it increasingly hard to be a non-grifter.

Evidence

The EA community is already showing some early signs of an increase in misalignment:

  • I've heard several people mention hearing third parties say things like "all you have to do is say a few of the right words and you get [insert free stuff here]."
  • I recently spoke to an EA-ish person who received substantial funding from one or more very large EA donors. They themselves acknowledged that their case for impact, according to the donors' stated values and cause prioritization, was tenuous at best. I think their work will still have an extremely positive impact on the world if it succeeds, and could be considered EA under other values, so it's not like the money was wasted; but it does suggest that the large donors were fairly exploitable.

I have vague recollections of hearing many more examples like this, but I can't reconstruct them well enough to include at the moment, since I haven't been following EA community gossip very closely. I'd encourage people to add their own data points in the comments.

So far, I can recall the EA community expelling one grifter (Intentional Insights). I agree with shlevy's comment on that post:

While I understand the motivation behind it, and applaud this sort of approach in general, I think this post and much of the public discussion I've seen around Gleb are charitable and systematic in excess of reasonable caution.

There's a huge offense-defense asymmetry right now: it's relatively easy for grifters to exploit EA, and it takes enormous amounts of time for a grift to be conclusively discovered and refuted. If this continues, it's going to be hard for EA to protect itself from the influx of people looking for easy money and power.

Conclusion

I think more funding is still probably great on net; I'm just worried that we're not paying enough attention to the grift problem, or acting fast enough on it.

I wanted to add some suggested ways to mitigate it, but I'm out of time right now and anyway I'm a lot less confident in my solutions to this than in the fact that it's a problem. So maybe discuss potential mitigations in the comments :)

Comments (27)

I think I'm sympathetic to the criticism but I still feel like EA has sufficiently high hurdles to stop the grifters.
a) It's not like you get a lot of money just by saying the right words. You might be able to secure early funds or funds for a local group, but at some point you will have to show results to get more money.
b) EA funding mechanisms are fast but not loose. I think the meme that you can get money for everything now is massively overblown. A lot of people who are EA-aligned didn't get funding from the FTX Foundation, OpenPhil or the LTFF. The internal bars for funders still seem to be hard to cross and I expect this to hold for a while.
c) I'm not sure how the grifters would accumulate power and steer the movement off the rails. Either they start as grifters but actually get good results and then rise to power (at that point they might not be grifters anymore) or they don't get any results and don't rise to power. Overall, I don't see a strong mechanism by which the grifters rise to power without either stopping being grifters or blowing their cover. Maybe you could expand on that. I think the company analogy that you are making is less plausible in an EA context because (I believe) people update more strongly on negative evidence. It's not just some random manager position that you're putting at risk; there are lives at stake. But maybe I'm too naive here.

Either they start as grifters but actually get good results and then rise to power (at that point they might not be grifters anymore) or they don't get any results and don't rise to power.


I largely agree with this, but I think it's important to keep in mind that "grifter" is not a binary trait. My biggest worry is not that people who are completely unaligned with EA would capture wealth and steer it into the void, but rather that, of 10 EAs, the one most prone to "grifting" would end up with more influence than the rest.

What makes this so difficult is that the line between 'grifter' and 'skilled at navigating complicated social environments' is pretty thin and the latter is generally a desirable trait.

Generally I'm still not too worried about this, but I do think it's a shame if we end up undervaluing talented people who are less good at 'grifting', resulting in an inefficient allocation of our human capital.

An example from my own life to illustrate the point: Someone jokingly pointed out to me that if I were to spend a few weeks in Oxford mingling with people, arguing for the importance of EU policy, that would potentially do more to change people's minds than if I were to spend that time writing on the forum.

If this were true (I hope it's not!), I don't think that is how people should make up their minds about the importance of cause areas, and I will not participate in such a system. Someone more prone to grifting would, and would end up with more influence.

if I were to spend a few weeks in Oxford mingling with people, arguing for the importance of EU policy, that would potentially do more to change people's minds than if I were to spend that time writing on the forum.

I also don't know whether this is true, but the general idea that talking to people in person individually would be more persuasive than over text isn't surprising. There's a lower barrier to ideas flowing, you can better see how the other person is responding, and you don't have to consider how people not in the conversation might misinterpret you.

This matches my personal experience as well.

Can you give any examples of AI safety organizations that became less able to get funding due to lack of results?

CSER is the obvious example in my mind, and there are other non-public examples.

The flip side is that grift can be an opportunity. Suppose a bunch of members of Congress decide EA donors are easy marks and they can get a bunch of money in exchange for backing some weird pandemic prevention bill they don’t even slightly care about or believe in. Well then the bill passes and that’s a good outcome.

That seems like a case quite distinct from what Ben is worrying about - more like a standard commercial interaction, 'buying' pandemic prevention. If I buy a pizza, it makes little difference to me if the cashier is deeply aligned with my dietary and health objectives - all I care about is that he got the toppings right. It is not from the benevolence of the pizza guy that we expect our dinner, but from his regard to his own interest. I think grift would be more like a politician writing a speech to cater to EA donors and then voting for exactly the same things they intended to anyway.

This specific story doesn’t seem to describe the greatest model of EA donors or political influence (it doesn’t seem like EA donors are that pliable or comfortable with politics, and the idea probably boils down to lobbying with extra steps or something).

But the thought seems true?

It seems worth imagining that the minor media cycle around the recent candidate and other spending could create useful interest. For example, it could get sober attention and policy wonks to talk to EAs.

Since someone just commented privately to me with this confusion, I will state for the record that this commenter seems likely to be impersonating Matt Yglesias, who already has an EA Forum account with the username "Matthew Yglesias." (EDIT: apparently it actually is the same Matt with a different account!)

(Object-level response: I endorse Larks' reply.)

JP Addison (moderator):
This is not true, just a duplicate account issue.

Now merged

Glib grift can grease good gifts

I am confused why the title of this post is: "The biggest risk of free-spending EA is not optics or epistemics, but grift" (emphasis added). As Zvi talks about extensively in his moral mazes sequence, the biggest problems with moral mazes and grifters are that many of their incentives actively point away from truth-seeking behavior and towards trying to create confusing environments in which it is hard to tell who is doing real work and who is not. If it were just the case that a population of 50% grifters and 50% non-grifters would be half as efficient as a population of 0% grifters and 100% non-grifters, that wouldn't be that much of an issue. The problem is that a population of 50% grifters and 50% non-grifters probably has approximately zero ability to get anything done, or react to crises, and practically everyone within that group (including the non-grifters) will have terrible models of the world.

I don't think it's that bad if we end up wasting a lot of resources, compared to what I think is the more likely outcome, which is that the presence of grifters will deteriorate our ability to get accurate information about the world, and build accurate shared models of the world. The key problem is epistemics, and I feel like your post makes that point pretty well, but then it has a title that actively contradicts that point, which feels confusing to me.

Sorry that was confusing! I was attempting to distinguish:

  1. Direct epistemic problems: money causes well-intentioned people to have motivated cognition etc. (the downside flagged by the "optics and epistemics" post)
  2. Indirect epistemic problems as a result of the system's info processing being blocked by not-well-intentioned people

I will try to think of a better title!

Ah, yes, the new title seems better. Thanks for writing this!

I’m not super motivated+available at the moment to do a full write-up/analysis, but I’m quite skeptical of the idea that the default/equilibrium in EA would trend towards 100% grift, regardless of whether that is the standard in companies (which I also dispute, although I don’t disagree that as an organization becomes larger, self-management becomes increasingly complex—perhaps more complex than can be efficiently handled by humans running on weak ancestral-social hardware).

It might be plausible that “grift” becomes more of a problem, approaching (say) 25% of spending, but there are a variety of strong horizontal (peer-to-peer) and vertical checks on blatant grift, and at some point if someone wants to just thoroughly scam people it seems like it would be more profitable to do it outside of EA.

I’d be happy to see someone else do a more thorough response, though.

Worrying about the percent of spending misses the main problems, e.g. donors who notice the increasing grift become less willing to trust the claims of new organizations, thereby missing some of the best opportunities.

Grifters are definitely a problem in large organizations. The tough thing is that many grifters don’t start out as grifters. They start out honest, working hard, doing their best. But over time, their projects don’t all succeed, and they discover they are still able to appear successful by shading the truth a bit. Little by little, the honest citizen can turn into a grifter.

Many times a grifter is not really malicious; they are just not quite good enough at their job.

Eventually there will be some EA groups or areas that are clearly “not working”. The EA movement will have to figure out how to expel these dysfunctional subgroups.

This post seems to give very low consideration to models of good management or leadership where good values, culture, and people flow outward from a strong center to the movement.

Even if you were entirely pessimistic about newer people, like myself, there’s a large pool of longtime EAs inside or known by these established EA orgs. These people have been close to or deployed large amounts of money for many years.

It seems plausible that mundane and normal leadership and hiring by existing institutions could scale up orgs many times with modest dilution in values and alignment, and very little outright “grift”.

Ofer:

Grifters are optimizing only to get themselves money and power; EAs are optimizing for improving the world.

I think it is not so binary in reality. It's likely that almost no one thinks about themselves as a grifter; and almost everyone in EA is at least somewhat biased towards actions that will cause them to have more money and power (on account of being human). So, while I think this post points at an extremely important problem, I wouldn't use the grifters vs. EAs dichotomy.

It's likely that almost no one thinks about themselves as a grifter

I strongly agree with this, and think it's important to keep in mind.

Almost everyone in EA is at least somewhat biased towards actions that will cause them to have more money and power (on account of being human)

I don't think this matches my (very limited) intuition.
I think that there is huge variance in how much different individuals in EA optimize for money/power/prestige. It seems to me that some people really want to "move up the career ladder" in EA orgs, and be the ones that have that precious "impact". While others really want to "do the most good", and would be genuinely happy to have others take decision-making roles if they thought it would lead to more good.

I disagree with your model of human nature. I think I'd agree with you if you instead said

almost everyone in EA is at least somewhat biased towards actions that will cause them to be better at accomplishing their selfish goals (on account of being human).

I think it's valuable to remember that people in EA aren't perfect saints, and have natural human foibles and selfishness. But also humans are not by default either power- or money-maximizers, and in fact some selfish goals are poorly served by power-seeking (e.g. laziness).

Seems to me that scarcity can also be grift-inducing, e.g. if a tech company only hires the very top performers on its interviews, it might find that most hires are people who looked up the questions beforehand and rehearsed the answers. But if the company hires any solid performer, that doesn't induce a rehearsal arms race -- if it's possible to get hired without rehearsing, some people will value their integrity enough to do this.

The CEEALAR model is interesting because it combines a high admission rate with low salaries. You're living with EAs in an undesirable city, eating vegan food, and getting paid peanuts. This seems unattractive to professional grifters, but it might be attractive to deadbeat grifters. Deadbeat grifters seem like a better problem to have since they're less sophisticated and less ambitious on average.

Another CEEALAR thing: living with someone helps you get to know them. It's easier to put up a facade for a funder than for your roommates.

...three conditions that sociologists since the 1950s have considered crucial to making close friends: proximity; repeated, unplanned interactions; and a setting that encourages people to let their guard down and confide in each other, said Rebecca G. Adams, a professor of sociology and gerontology at the University of North Carolina at Greensboro. This is why so many people meet their lifelong friends in college, she added.

Source. When I was at CEEALAR, it seemed to me like the "college dorm" atmosphere was generating a lot of social capital for the EA movement.

I don't think CEEALAR is perfect (and I also left years ago so it may have changed). But the overall idea seems good to iterate on. People have objected in the past because of PR weirdness, but maybe that's what we need to dissuade the most dangerous sort of grifter.

Good post. Interested in ideas for how to guard against this. I notice that some orgs have a strong filter for 'value alignment' when hiring. I guess anti-grift detection should form part of this, but I don't know what that looks like.

If misaligned managers tend to increase with organisation age and size, to what extent would keeping orgs separate and (relatively) smaller help defend against this? That is, would we prefer work/funding in a particular cause-area to be distributed amongst several smaller, independent, competing orgs rather than one big super-org? (What if the super-org approach was more efficient?)

Or would EA be so cohesive a movement that even separate orgs function more like departments, such that an analogous slide to misaligned managers happens anyway?

I don't know enough to judge, but my impression is that the big EA orgs have a lot of staff moving between them, and talk to each other a lot. Would we be worried enough by sclerosis that we would intentionally drive for greater independence and separation?

 
