It's been at least a few months since the last proper EA scandal, and we're now desperately trying to squeeze headlines out of the past ones.

Worse still, a few scandals have even been wrapped up:

  1. SBF was sentenced to 25 years in prison
  2. The investigation regarding Owen Cotton-Barratt presented its findings
  3. Wytham Abbey is being sold

Indeed, even OpenPhil's involvement in the Wytham Abbey sale shows they're now less willing to fund new scandals.

Therefore it seems to me that EA is now neither funding- nor talent-constrained, but rather scandal-constrained.

This cannot go on. We've all become accustomed to a never-ending stream of scandals, and if that stream dwindles, we might find ourselves bored to death - or worse, the world might stop talking about EA all the time.

I therefore raise a few ideas for discussion - feel free to add your own:

  1. EA Funds should open a new Scandal Fund to create a continuous supply.
  2. CEA's community health team should hire a person to look harder for scandals lying under the surface.
  3. Nick Bostrom should publish a book.
  4. EA should work harder on encouraging group housing of people with their bosses, preferably in secluded areas abroad.

Comments


David_Moss

Our data suggests that the highest impact scandals are several times more impactful than other scandals (bear in mind that this data is probably not capturing the large number of smaller scandals). 

If so, it seems plausible we should optimise for the very largest scandals, rather than simply producing a large volume of less impactful scandals.

Looks like someone should attempt a pivotal act. If you think you might be the right person for the job - you probably are!

Thank you! This is the kind of important work EA must now strive for.

  1. Manifest should blow up in some unexpected way.

  2. Elon Musk should announce he is giving all his money to EA causes.

  3. EA should fund SBF's appeal process.

  4. Will MacAskill should launch a new cryptocurrency, "AskCoin", where rich people buy large amounts of the cryptocurrency for the poorest people on earth, driving up the value.

Love it

9.  EA should publicly support Israel's war effort
10. Buy a large coal mine and employ the world's poorest people
11. Only fund community builders who say they are longtermist
12. Publish the secret deal with Huel as EA's main sponsor

MaxRa

Meal replacement companies were there for us, through thick and slightly less thick.

https://queal.com/ea

(btw, I find it funny that I cringe internally more about posting 11 than about 9)

I don't see how 12 would sink us lol, but the other 3 for sure.

They probably have a large influence on prioritization. I'd check into ALLFED.

Donating to SBF's appeal process may be the highest impact charity we have ever seen.

In randomized controlled trials from 2022, SBF had donated over 130 million dollars in less than a year, and a successful appeal would counterfactually create this benefit for 25 years. An expensive criminal trial in the US can cost as much as $15,000. Even if $15k increases the odds of winning the appeal by 0.1%, that is still an expected 217x amplification of every dollar donated.

The money amplified goes into effective charities like GiveWell, so if we use GiveWell's one life saved per $4,500 measure, donating to SBF's appeal fund would save a life for every 20 dollars.

This is just a back of the napkin calculation, so my numbers might be off a little, but this seems to be the most effective charity by *many* orders of magnitude.
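Spelled out, the napkin math goes something like this (a rough sketch using the figures above; every input is the comment's own assumption, not a verified number):

```python
# Rough sketch of the back-of-the-napkin calculation above.
# All inputs are assumptions taken from the comment, not verified figures.
annual_donations = 130_000_000   # dollars SBF reportedly donated in under a year (2022)
years = 25                       # length of the sentence a successful appeal would undo
appeal_cost = 15_000             # "expensive criminal trial" cost figure used above
win_prob_boost = 0.001           # assumed 0.1% increase in the odds of winning the appeal

expected_donations_unlocked = annual_donations * years * win_prob_boost
amplification = expected_donations_unlocked / appeal_cost   # ~217x per dollar donated
cost_per_life = 4_500 / amplification                       # GiveWell's ~$4,500 per life saved

print(f"~{amplification:.0f}x amplification, ~${cost_per_life:.0f} per life saved")
```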
 

Does anyone know where he has been funding all his defense expenses from, and how much firepower is left there? If you'd merely be funging with his own assets, the Bank of Mom & Dad, or a D&O insurance policy, the giving would be rather ineffective.

He has to have been paying the bills, else it is unlikely new sentencing stage counsel would have signed up.

If trial counsel were somewhat competent, they already know what their best chances on appeal would be. Appeals are done on the record generated below, so the marginal returns to extra $ are likely minimal beyond a certain point.

Elon Musk? So last year... 2024 is the time for Trump scandals.
Let's buy some Truth shares and produce new scandals!

Scandals don't just happen in a vacuum. You need to create the right conditions for them. So I suggest:

  1. We spread concern about the riskiness of all altruistic action so that conscientious people (who are often not sufficiently scandal-prone) self-select out of powerful positions and open them up to people with more scandal potential.
  2. We encourage more scathing ad-hom attacks on leadership so that those who take any criticism to heart self-select out of leadership roles.
  3. We make these positions more attractive to scandal-prone people by abandoning cost-effectiveness analyses and instead basing strategy and grantmaking on vibes and relationships.
  4. We further improve the cushiness of these positions by centralizing power and funding around them to thwart criticism and prevent Hayekian diversity and experimentation.
  5. We build stronger relationships with powerful, unscrupulous people and companies by, e.g., helping them with their hiring.
  6. We emphasize in-person networking and move the most valuable networks to some of the most expensive spots in the world. That way access to the network comes with even greater dependency on centralized funding, making it easier to control.

[Meta: I'm not claiming anyone is doing these things on purpose! It would be nice, though, if more people were trying to counter these risk factors for scandals and generally bad epistemics.]

Scandals don't just happen in a vacuum

Has anyone tested this? Because if we could create them in a vacuum, that might save a lot of energy usually lost to air resistance, and thus be more effective.

Even scandal-prone individuals can't survive in a vacuum. (You may be thinking of sandals, not scandals?)

Is it definitely established that a living person is required for every scandal?

Only half a person per sandal I think!

You can totally have scandals involving dead or imaginary people. So, definitely no.

Right? Also you can have a person turn on the scandal machine, which then creates more than one scandal associated with them.

  1. We make these positions more attractive to scandal-prone people by abandoning cost-effectiveness analyses and instead basing strategy and grantmaking on vibes and ~~relationships~~ imaginary Bayesian updates.

FTFY

If this post resonates with you, consider traveling back in time a few days and submitting an application for CEA's open Head of Communications role! (Remember, EA is first and foremost a do-ocracy, so you really need to be the change you wish to see around here)

Off-topic: When presenting the first part of the post on the front page, we get "The investigation regarding Owen Cotton". Might be better for hyphenated terms to be all-or-nothing when being cut off like this?
