
I highly recommend reading the whole post, but I found Part V particularly good, and I have copied it in its entirety below.

V.

Do I sound defensive about this? I’m not. This next one is defensive.

I’m part of the effective altruist movement. The biggest disaster we ever faced was the Sam Bankman-Fried thing. Some lessons people suggested to us then were:

  • Be really quick to call out deceptive behavior from a hotshot CEO, even if you don’t yet have the smoking gun.
  • It was crazy that FTX didn’t even have a board. Companies need strong boards to keep them under control.
  • Don’t tweet through it! If you’re in a horrible scandal, stay quiet until you get a great lawyer and they say it’s in your best interests to speak.
  • Instead of trying to play 5D utilitarian chess, just try to do the deontologically right thing.

People suggested all of these things, very loudly, until they were seared into our consciousness. I think we updated on them really hard.

Then came the second biggest disaster we faced, the OpenAI board thing, where we learned:

  • Don’t accuse a hotshot CEO of deceptive behavior unless you have a smoking gun; otherwise everyone will think you’re unfairly destroying his reputation.
  • Overly strong boards are dangerous. Boards should be really careful and not rock the boat.
  • If a major news story centers around you, you need to get your side out there immediately, or else everyone will turn against you.
  • Even if you are on a board legally charged with “safeguarding the interests of humanity”, you can’t just speak out and try to safeguard the interests of humanity. You have to play savvy corporate politics or else you’ll lose instantly and everyone will hold you in contempt.

These are the opposite of the lessons we learned from the FTX scandal.

I’m not denying we screwed up both times. There’s some golden mean, some virtue of practical judgment around how many red flags you need before you call out a hotshot CEO, and in what cases you should do so. You get this virtue after looking at lots of different situations and how they turned out.

You definitely don’t get this virtue by updating maximally hard in response to a single case of things going wrong. If you do that, you’ll just fling yourself all the way into the opposite failure mode. And then when you fail in the opposite way, you’ll fling yourself back into the original failure mode, and yo-yo back and forth forever.

The problem with the US response to 9-11 wasn’t just that we didn’t predict it. It was that, after it happened, we were so surprised that we flung ourselves to the opposite extreme and saw terrorists behind every tree and around every corner. Then we made the opposite kind of mistake (believing Saddam was hatching terrorist plots, and invading Iraq).

The solution is not to update much on single events, even if those events are really big deals.

Comments (3)



I downvoted this forum post because I think the quoted part of the text, while obviously informal, is an annoying strawman of criticisms EA faced and represents an attitude towards critique that I think is quite counterproductive. I think the rest of the linked post is significantly better though, and agree with the general point. 

Focusing just on the quoted text, I'm not sure "happy medium" is the right message to take from these two incidents. AI and blockchain involve two entirely different ways of thinking about risk control.

AI risk involves frequent events with undefined causes, whereas a digital currency collapse is a rare event with overdetermined causes. For the first you would need lots of communication in order to establish a logical sequence, whereas the second requires carefully controlled communications in order to prevent false logic from taking hold. 

I am quite uncertain about my reaction to this, but I think I have seen evidence that at least weakly supports my claims.

  1. I agree that the FTX situation was not as bad as the community's reaction implied. For example, I think I saw some evidence that quite shortly after the FTX scandal, EA was back to growth rates quite similar to those seen before the influx of FTX funding (I might be wrong here, so please let me know if I recall incorrectly!).
  2. I think FTX has perhaps distracted from one or two much more significant issues, to which we have reacted much less than to FTX: the lack of gender and racial diversity in the movement. This feels somewhat analogous to what we in EA criticize the charity sector for: reacting much more strongly to natural disasters and wars than to ongoing, non-newsworthy crises such as deaths from malaria.
  3. I would not be surprised if organizations/communities/movements comparable to EA have better "diversity metrics". This is because, at its core, I see EA as a project of care and collaboration, something that should attract a much broader range of people than white men. I think diversity metrics from entities such as the UN might be a useful benchmark here.
  4. This is bad not only because we are currently a much smaller movement than we could have been, but perhaps more importantly because our growth rate might be consistently much lower than it could be.
  5. There have been several "crises" regarding gender and race, but beyond these crises I think there is something more subtle around culture, messaging, etc.