I'm a stock market speculator who has been involved in transhumanist and related communities for a long time. See my website at http://bayesianinvestor.com.
Something important seems missing from this approach.
I see many hints that much of this loneliness results from trade-offs made by modern Western culture: neglecting (or repressing) tightly knit local community ties in order to achieve other valuable goals.
My sources for these hints are these books:
One point from WEIRDest People is summarized here:
Neolocal residence occurs when a newly married couple establishes their home independent of both sets of relatives. While only about 5% of the world's societies follow this pattern, it is popular and common in urban North America today largely because it suits the cultural emphasis on independence.
Can Western culture give lower priority to independence while retaining most of the benefits of WEIRD culture?
Should we expect to do much about loneliness without something along those lines?
AI seems likely to have some impact on loneliness. Can we predict and speed up the good impacts?
Most Westerners underestimate the importance of avoiding loneliness. But I'm unsure what we should do about that.
I doubt most claims that sodium causes health problems. High sodium consumption is strongly correlated with dietary choices that cause other problems, which makes it hard to study.
See Robin Hanson's comments.
I expect most experts are scared of the political difficulties. Also, many people have been slow to update on the declining costs of solar. I think there's still significant aversion to big energy-intensive projects. Still, it does seem quite possible that experts are rejecting it for good reasons, and it's just hard to find descriptions of their analysis.
I agree very much with your guess that SBF's main mistake was pride.
I still have some unpleasant memories from the 1984 tech stock bubble, of being reluctant to admit that my successes during the bull market didn't mean that I knew how to handle all market conditions.
I still feel some urges to tell the market that it's wrong, and to correct the market by pushing up prices of fallen stocks to where I think they ought to be. Those urges lead to destructive delusions. If my successes had gotten the kind of publicity that SBF got, I expect that I would have made mistakes that left me broke.
I haven't expected EAs to have any unusual skill at spotting risks.
EAs have been unusual mainly in distinguishing risks by their magnitude, and the risks from FTX didn't look much like the risk of human extinction.
I agree that there's a lot of hindsight bias here, but I don't think that tweet tells us much.
My question for Dony is: what questions could we have asked FTX that would have helped? I'm pretty sure I wouldn't have detected any problems by grilling FTX. Maybe I'd have gotten some suspicions by grilling people who'd previously worked with SBF, but I can't think of what would have prompted me to do that.
Nitpick: I suspect EAs lean more toward Objective Bayesianism than Subjective Bayesianism. I'm unclear whether it's valuable to distinguish between them.
It's risky to connect AI safety to one side of an ideological conflict.
Convincing a venue to implement it well (or rewarding one that has already done that) will have benefits that last more than three days.
Six months sounds like a guess at how long the leading companies might be willing to comply.
The timing of the letter could be a function of when they were able to get a few big names to sign.
I don't think they got enough big names to have much effect. I hope to see a better version of this letter before too long.