PeterMcCluskey

715 karma · Joined Oct 2014

Bio

I'm a stock market speculator who has been involved in transhumanist and related communities for a long time. See my website at http://bayesianinvestor.com.

Comments (97)

I doubt most claims about sodium causing health problems. High sodium consumption seems strongly correlated with dietary choices that cause other problems, which makes it hard to isolate sodium's effects.

See Robin Hanson's comments.

I expect most experts are scared of the political difficulties. Also, many people have been slow to update on the declining costs of solar. I think there's still significant aversion to big energy-intensive projects. Still, it does seem quite possible that experts are rejecting it for good reasons, and it's just hard to find descriptions of their analysis.

I agree very much with your guess that SBF's main mistake was pride.

I still have some unpleasant memories from the 1984 tech stock bubble: I was reluctant to admit that my successes during the bull market didn't mean I knew how to handle all market conditions.

I still feel some urges to tell the market that it's wrong, and to correct it by pushing the prices of fallen stocks back up to where I think they ought to be. Those urges lead to destructive delusions. If my successes had gotten the kind of publicity that SBF got, I expect I would have made mistakes that left me broke.

I haven't expected EAs to have any unusual skill at spotting risks.

EAs have been unusual in distinguishing risks based on their magnitude. The risks from FTX didn't look much like the risk of human extinction.

I agree that there's a lot of hindsight bias here, but I don't think that tweet tells us much.

My question for Dony is: what questions could we have asked FTX that would have helped? I'm pretty sure I wouldn't have detected any problems by grilling FTX. Maybe I'd have gotten some suspicions by grilling people who'd previously worked with SBF, but I can't think of what would have prompted me to do that.

Nitpick: I suspect EAs lean more toward Objective Bayesianism than Subjective Bayesianism. I'm unsure whether it's valuable to distinguish between them.

It's risky to connect AI safety to one side of an ideological conflict.

Convincing a venue to implement it well (or rewarding one that has already done that) will have benefits that last more than three days.

I agree about the difficulty of developing major new technologies in secret. But you seem to be overstating most of the problems with accelerating science. E.g.:

These passages seem to imply that the rate of scientific progress is limited primarily by the number and intelligence of the people doing scientific research. It sounds like you're imagining that the AI would only speed up the job functions that get classified as "science", whereas people are suggesting that the AI would speed up a wide variety of tasks, including gathering evidence, building tools, etc.

My understanding of Henrich's model says that reducing cousin marriage is a necessary but hardly sufficient condition to replicate WEIRD affluence.

European culture likely had other features which enabled cooperation on larger-than-kin-network scales. Without those features, a society that stops cousin marriage could easily end up with only cooperation within smaller kin networks. We shouldn't be confident that we understand what the most important features are, much less that we can cause LMICs to have them.

Successful societies ought to be risk-averse about this kind of change. If this cause area is worth pursuing, it should focus on the least successful societies. But those are also the societies that are least willing to listen to WEIRD ideas.

Also, the idea that reduced cousin marriage was due to some random church edict seems like the most suspicious part of Henrich's book. See Emmanuel Todd's The Explanation of Ideology for claims that the nuclear family was normal in northwest Europe well before Christianity.
