I do independent research on EA topics. I write about whatever seems important, tractable, and interesting (to me).
I have a website: https://mdickens.me/. Much of what I write there gets cross-posted to the EA Forum, but I also write about some non-EA stuff on the website.
My favorite things that I've written: https://mdickens.me/favorite-posts/
I used to work as a software developer at Affirm.
Good note. Also worth keeping in mind the base rate of companies going under. FTX committing massive fraud was weird; but a young, fast-growing, unprofitable company blowing up was decidedly predictable, and IMO the EA community was banking too hard on FTX money being real.
Plus the planning fallacy, i.e., if someone says they want to do something by some date, then it'll probably happen later than that.
My off-the-cuff guess is
The responsible thing to do is to go look at the balance of what experts in a field are saying, and in this case, they're fairly split
This is not a crux for me. I think if you were paying attention, it was not hard to be convinced that AI extinction risk was a big deal in 2005–2015, when the expert consensus was something like "who cares, ASI is a long way off." Most people in my college EA group were concerned about AI risk well before ML experts were concerned about it. If today's ML experts were still dismissive of AI risk, that wouldn't make me more optimistic.
SF, Berkeley, and the South Bay (San Jose/Palo Alto area) all have pretty different climates. Going off my memory:
It's true that SF is usually cloudy, but that's not the case for the whole Bay Area. Berkeley/Oakland is sunny more often than not.
"EA" isn't one single thing with a unified voice. Many EAs have indeed denounced OpenAI.
As an EA: I hereby denounce OpenAI. They have greatly increased AI extinction risk. The founding of OpenAI is a strong candidate for the worst thing to happen in history (time will tell whether this event leads to human extinction).
I'm forecasting a 0.00% to 0.02% probability range for AGI by the end of 2034, and that if I were to make 100 predictions of a similar kind, more than 95 of them would have the "correct" probability range
I kinda get what you're saying, but I think this is double-counting in a weird way. A 0.01% probability means that if you make 10,000 predictions of that kind, about one of them should come true. So your 95% confidence interval sounds something like: "20 times, I make 10,000 predictions that each have a probability between 0.00% and 0.02%; and 19 out of 20 times, about one of the 10,000 predictions comes true."
You could reduce this to a single point probability. The math is a bit complicated, but I think you'd end up with a point probability on the order of 0.001% (~10x lower than the original probability). But if I understand correctly, you aren't actually claiming to have a 0.001% credence.
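For concreteness, here's a rough sketch of one way that reduction could work (Python; all of the specifics are my own illustrative assumptions, not anything you said): treat the 95% interval as the 2.5th/97.5th percentiles of a lognormal credence distribution over the true probability, read the "0.00%" endpoint as a small positive value so the log is defined, and take the point probability to be the mean of that distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions (not from the original comment): a lognormal
# credence distribution over the "true" probability p, with the stated 95%
# interval read as its 2.5th and 97.5th percentiles, and the "0.00%" endpoint
# taken to be 0.0001% so that log(p) is defined.
lo, hi = 1e-6, 2e-4  # 0.0001% and 0.02%, written as fractions
mu = (np.log(lo) + np.log(hi)) / 2              # median in log space
sigma = (np.log(hi) - np.log(lo)) / (2 * 1.96)  # 95% of mass within +/- 1.96 sigma

# Collapsing the interval to a single point probability amounts to taking
# the mean of the credence distribution, E[p].
samples = rng.lognormal(mean=mu, sigma=sigma, size=1_000_000)
print(f"point probability ~ {samples.mean():.4%}")
# Comes out around 0.003-0.004% under these particular assumptions; a
# distribution with more of its mass near zero would give a smaller number.
```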
I think there are other meaningful statements you could make. You could say something like, "I'm 95% confident that if I spent 10x longer studying this question, I would end up with a probability between 0.00% and 0.02%."
Didn't Sam tell several straightforward lies? For example, claiming that FTX had enough assets to fully cover the account values of all users, which it didn't; and claiming that Alameda never borrowed users' deposits, which it did.