
PeterMcCluskey

811 karma · Joined

Bio

I'm a stock market speculator who has been involved in transhumanist and related communities for a long time. See my website at http://bayesianinvestor.com.

Comments (117)

One good prediction he made, in his 1986 book Engines of Creation, was that a global hypertext system would become available within a decade. Hardly anyone in 1986 imagined that.

But he has almost entirely stopped trying to predict when technologies will be developed. You should read him to imagine what technologies are possible.

That's mostly bearish for bonds because it increases inflation.

I haven't given that a lot of thought. AI is likely to have its strongest effects further out. A year ago I was mainly betting, via SOFR futures, on interest rates going up around 2030, because I expected rates to go down in 2025-6. But now I'm guessing there's little difference between durations in how much rates go up.

These ETFs seem better than leveraged ETFs, mainly because leveraged ETFs must trade every day to rebalance back to their target leverage, and that excessive trading erodes returns.
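To illustrate the drag from daily rebalancing, here's a minimal simulation. The price path is hypothetical (a choppy, flat-trending market), chosen only to make the effect visible; it is not real market data or a return forecast.

```python
def simulate(daily_returns, leverage):
    """Compound an index and a daily-rebalanced leveraged version of it."""
    index, lev = 1.0, 1.0
    for r in daily_returns:
        index *= 1 + r           # the underlying index compounds its return
        lev *= 1 + leverage * r  # the ETF resets to `leverage`x exposure daily
    return index, lev

# Hypothetical choppy market: +5% and -5% on alternating days,
# for roughly one trading year.
returns = [0.05, -0.05] * 126

index, lev3x = simulate(returns, 3.0)
print(f"index: {index:.3f}, 3x daily-rebalanced ETF: {lev3x:.3f}")
# The 3x ETF ends up losing far more than 3x what the index loses.
```

The point is that daily-rebalanced leverage compounds volatility, not just returns, so in a sideways market the leveraged fund bleeds value even when the index roughly breaks even.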

I see multiple reasons why bonds are likely to be bad investments over the next few years:

  • AI is likely to drive up real interest rates, by making capital more productive.
  • AI-induced job loss might cause the Fed to be less concerned about inflation.
  • AI-induced job loss may reduce tax revenues, so the government will need to sell more bonds.
  • Trump is pressuring the Fed to adopt policies that would cause inflation.
  • If AI doesn't increase GDP growth, there are growing doubts about whether the US is responsible enough to keep servicing its debt.
  • In the past couple of months, some commodities have shown price surges that look more like a harbinger of inflation than like stable monetary conditions. See copper, silver, DRAM, and lithium.

Markets may be efficiently pricing a few of these risks, but I'm pretty sure they're underestimating AI's effects.

I've been shorting t-bond futures, currently 6% of my net worth, and I'm likely to short more soon.

Oysters are significantly more nutrient-dense than beef, partly because we eat the whole oyster while ignoring the most nutritious parts of the cow. So $1 of oysters is roughly as beneficial as $1 of pasture-raised beef. Liver from grass-fed cows is likely better than bivalves, and eating it has almost no effect on how many cows are killed.

My experience suggests that there probably isn't much that you can do.

> most of the billions of people who know nothing about AI risks have a p(doom) of zero.

This seems pretty false. E.g. see this survey.

> The strongest concern I have heard about this approach is that, as model algorithms improve, at some point it will become possible to train and run human-level intelligence on anyone's home laptop, which makes hardware monitoring and restriction trickier. While this is cause for concern, I don't think it should distract us from pursuing a pause.

There are many ways to slow AI development, but I'm concerned that it's misleading to label any of them as pauses. I doubt that the best policies will be able to delay superhuman AI by more than a couple of years.

A strictly enforced compute threshold seems like it would slow AI development by a factor of something like 2x to 4x. AI capability progress would continue via distributed training and via increasing implementation efficiency.
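A toy model of that dynamic: cap the hardware term of effective training compute and let algorithmic-efficiency gains continue. All parameters here (the cap, the starting compute, and the growth rates) are hypothetical, chosen only for illustration, not estimates.

```python
# Toy model (hypothetical parameters, not a forecast): effective training
# compute = hardware compute x algorithmic efficiency. A strict cap freezes
# the hardware term, but assumed efficiency gains continue.

CAP = 1e26            # hypothetical FLOP cap on any single training run
HW_GROWTH = 3.0       # assumed yearly growth in available training FLOP
EFF_GROWTH = 2.0      # assumed yearly algorithmic-efficiency multiplier

def effective_compute(years, capped):
    hw = 1e25 * HW_GROWTH ** years  # hypothetical starting point
    if capped:
        hw = min(hw, CAP)
    return hw * EFF_GROWTH ** years

for y in range(5):
    ratio = effective_compute(y, capped=False) / effective_compute(y, capped=True)
    print(f"year {y}: uncapped/capped effective compute = {ratio:.1f}x")
```

Under these made-up numbers the cap doesn't bind for the first couple of years, and once it does the slowdown is a modest multiple rather than a halt, since the efficiency term keeps compounding in both worlds.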

Slowing AI development is likely good if the rules can be enforced well enough. My biggest concern is that laws will be carelessly written, with a result that most responsible AI labs obey their spirit, but that the least responsible lab will find loopholes to exploit.

That means proposals should focus carefully on imagining ways that AI labs could evade compliance with the regulations.

I've been buying Alexandre's eggs. Should I switch to the Berkeley Bowl brand pasture-raised eggs? Do you have any other recommendations for eggs?
