I highly recommend reading the whole post, but I found Part V particularly good, so I have copied it in its entirety below.
V.
Do I sound defensive about this? I’m not. This next one is defensive.
I’m part of the effective altruist movement. The biggest disaster we ever faced was the Sam Bankman-Fried thing. Some lessons people suggested to us then were:
- Be really quick to call out deceptive behavior from a hotshot CEO, even if you don’t yet have the smoking gun.
- It was crazy that FTX didn’t even have a board. Companies need strong boards to keep them under control.
- Don’t tweet through it! If you’re in a horrible scandal, stay quiet until you get a great lawyer and they say it’s in your best interests to speak.
- Instead of trying to play 5D utilitarian chess, just try to do the deontologically right thing.
People suggested all of these things, very loudly, until they were seared into our consciousness. I think we updated on them really hard.
Then came the second biggest disaster we faced, the OpenAI board thing, where we learned:
- Don’t accuse a hotshot CEO of deceptive behavior unless you have a smoking gun; otherwise everyone will think you’re unfairly destroying his reputation.
- Overly strong boards are dangerous. Boards should be really careful and not rock the boat.
- If a major news story centers around you, you need to get your side out there immediately, or else everyone will turn against you.
- Even if you are on a board legally charged with “safeguarding the interests of humanity”, you can’t just speak out and try to safeguard the interests of humanity. You have to play savvy corporate politics or else you’ll lose instantly and everyone will hold you in contempt.
These are the opposite of the lessons from the FTX scandal.
I’m not denying we screwed up both times. There’s some golden mean, some virtue of practical judgment around how many red flags you need before you call out a hotshot CEO, and in what cases you should do so. You get this virtue after looking at lots of different situations and how they turned out.
You definitely don’t get this virtue by updating maximally hard in response to a single case of things going wrong. If you do that, you’ll just fling yourself all the way into the opposite failure mode. And then when you fail again in the opposite direction, you’ll fling yourself back into the original failure mode, and yo-yo back and forth forever.
The problem with the US response to 9-11 wasn’t just that we didn’t predict it. It was that, after it happened, we were so surprised that we flung ourselves to the opposite extreme and saw terrorists behind every tree and around every corner. Then we made the opposite kind of mistake (believing Saddam was hatching terrorist plots, and invading Iraq).
The solution is not to update much on single events, even if those events are really big deals.
Focusing just on the quoted text, I'm not sure a "golden mean" is the right lesson to draw from these two incidents. AI and cryptocurrency involve two entirely different ways of thinking about risk control.
AI risk involves frequent events with poorly defined causes, whereas a crypto-exchange collapse is a rare event with overdetermined causes. The first calls for lots of open communication to piece together a causal story; the second calls for carefully controlled communication to keep a false narrative from taking hold.