These cases also seem relevant to alignment agreements between AI labs, and it's useful to see the principles playing out in practice. Cullen wrote about this here much better than I will.
Roughly speaking, if individual consumers would prefer to use a riskier AI (because costs are externalized), then it seems like an agreement to make AI safer-but-more-expensive would run afoul of the same principles as this chicken-welfare agreement.
On paper, there are some reasons that the AI alignment case should be easier than the chicken-welfare case: (i) using unsafe AI hurts non-customer humans, and AI customers care more about other humans than they do about chickens, (ii) deploying unaligned AI likely hurts other AI customers in particular (since they will be the main ones competing with the unaligned but more sophisticated AI). So it's likely that every individual AI customer would benefit from such an agreement.
Unfortunately, it seems like the same thing could be true in the chicken case---every individual customer could prefer the world with the welfare agreement---and it wouldn't change the regulator's decision.
For example, suppose that Dutch consumers eat 100 million chickens a year, 10/year for each of 10 million customers. Customer surveys discover that customers would only be willing to pay $0.01 for a chicken to have more space and a slightly longer life, but that these reforms increase chicken prices by $1. So the regulator strikes down the reform.
But with welfare standards in place, each customer pays an extra $10/year for chicken and 100 million chickens have improved lives, so each customer is paying $0.0000001 per chicken improved, a hundred thousand times less than their stated WTP. (This is the same dynamic described here.) So every chicken consumer prefers the world where the standards are in place, despite not being willing to pay money to improve the lives of the tiny number of chickens they eat personally. This seems to be a very common reaction to discussions of animal welfare ("what difference does my consumption make? I can't change the way most chickens are treated...").
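To make the arithmetic explicit, here's a quick sketch using the hypothetical numbers above:

```python
# All figures are the hypothetical ones from the example above.
customers = 10_000_000               # Dutch chicken consumers
chickens_per_customer = 10           # chickens eaten per customer per year
total_chickens = customers * chickens_per_customer  # 100 million

wtp_per_chicken = 0.01               # stated WTP per chicken improved
extra_cost_per_chicken = 1.00        # price increase per chicken under the reform

# Each customer's extra annual spend under a market-wide standard:
extra_spend = chickens_per_customer * extra_cost_per_chicken  # $10/year

# That $10 buys improved lives for ALL 100M chickens, not just the 10 they eat:
cost_per_chicken_improved = extra_spend / total_chickens

print(f"${cost_per_chicken_improved:.7f} per chicken improved")          # $0.0000001
print(f"{wtp_per_chicken / cost_per_chicken_improved:,.0f}x below WTP")  # 100,000x
```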
Because the number of chicken-eaters is so large, the relevant survey question would be something like "Would you prefer that someone else pay $X in order to improve chicken welfare?", making a tradeoff between two strangers. That's the question that actually matters for each customer, since the welfare standards mostly affect other people.
Analogously, if you ask AI consumers "Would you prefer to have an aligned AI, or a slightly more sophisticated unaligned AI?" they could easily all say "I want the more sophisticated one," even if every single human would be better off if there were an agreement to make only aligned AI. If an anti-trust regulator used the same standard as in this case, it seems like they would throw out an alignment agreement on that basis, even knowing that it would make every single human worse off.
I still think in practice AI alignment agreements would be fine for a variety of reasons. For example, I think if you ran a customer survey it's likely people would say they prefer to use aligned AI even if it would disadvantage them personally, because public sentiment towards AI is very different and the regulatory impulse is stronger. (Though I find it hard to believe that anything would end up hinging on such a survey, and even more strongly I think it would never come to this because there would be much less political pressure to enforce anti-trust.)
I guess I wouldn't recommend the donor lottery to people who wouldn't be happy entering a regular lottery for their charitable giving
Strong +1.
If I won a donor lottery, I would consider myself to have no obligation whatsoever towards the other lottery participants, and I think many other lottery participants feel the same way. So it's potentially quite bad if some participants are thinking of me as an "allocator" of their money. To the extent there is ambiguity in the current setup, it seems important to try to eliminate that.
(I agree with Max Daniel below that Nordhaus' methodology isn't inherently more trustworthy. It's dealing with a relatively small amount of pretty short-term data, and is generally using a much more opinionated model of what technological change would look like.)
The relevant section is VII. Summarizing the six empirical tests:
I would group these into two basic classes of evidence:
I'd agree that these seem like two points of evidence against singularity-soon, and I think that if I were going on outside-view economic arguments I'd probably be <50% on singularity by 2100. (Though I'd still have a meaningful probability soon, and even at 100 years the prospect of a singularity would be one of the most important facts about the basic shape of the future.)
There are some more detailed aspects of the model that I don't buy, e.g. the very high share of information capital and persistent slow growth of physical capital. But I don't think they really affect the bottom line.
If the market can't price 30-year cashflows, it can't price anything, since for any infinitely-lived asset (e.g. stocks!), most of the present-discounted value of future cash flows is far in the future.
If an asset pays me far in the future, then long-term interest rates are one factor affecting its price. But it seems to me that in most cases that factor still explains a minority of variation in prices (and because it's a slowly-varying factor it's quite hard to make money by predicting it).
For example, there is a ton of uncertainty about how much money any given company is going to make next year. We get frequent feedback signals about those predictions, and people who win bets on them immediately get returns that let them show how good they are and invest more, and so that's the kind of case where I'd be more scared of outpredicting the market.
So I guess that's saying that I expect the relative prices of stocks to be much more efficient than the absolute level.
See e.g. this Ralph Koijen thread and the linked paper: "the first 10 years of dividends only make up ~20% of the value of the stock market. 80% is due to value of cash flows beyond 10 years"
Haven't looked at the claim but it seems kind of misleading. Dividend yield for SPY is <2%, which I guess is what they are talking about? But buyback yield is a further ~3%, and with a 5% total yield you're getting 40% of the value in the first 10 years, which sounds more like it. That would mean you've gotten half of the value within 13.5 years instead of 31 years.
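To sanity-check that, here's a minimal sketch assuming a constant total cash yield y, so the fraction of present value received within T years is 1 - (1 - y)^T (the ~2.25% yield used to recover the 31-year figure is my assumption):

```python
import math

def fraction_by_year(y, t):
    """Fraction of present value received within t years, assuming a
    constant total cash yield y: 1 - (1 - y)**t."""
    return 1 - (1 - y) ** t

def years_to_half(y):
    """Years until half of present value has been received."""
    return math.log(0.5) / math.log(1 - y)

print(fraction_by_year(0.02, 10))    # ~0.18: dividends only, close to Koijen's ~20%
print(fraction_by_year(0.05, 10))    # ~0.40: dividends + buybacks
print(years_to_half(0.05))           # ~13.5 years
print(years_to_half(0.0225))         # ~30.5 years, roughly the quoted 31
```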
Technically the stock is still valued based on the future dividends, and a buyback is just decreasing outstanding shares and so increasing earnings per share. But for the purpose of pricing the stock it should make no difference whether earnings are distributed as dividends or buybacks, so the fact that buybacks push cashflows to the future can't possibly affect the difficulty of pricing stocks.
Put a different way, the value of a buyback to investors doesn't depend on the actual size of future cashflows, nor on the discount rate. Those are both cancelled out because they are factored into the price at which the company is able to buy back its shares. (E.g. if PepsiCo was making all of its earnings in the next 5 years, and ploughing them into buybacks, after which they made a steady stream of not-much-money, then PepsiCo prices would still be equal to the NPV of dividends, but the current PepsiCo price would just be an estimate of earnings over the next 5 years and would have almost no relationship to long-term interest rates.)
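Here's a toy illustration of that last point, with entirely invented numbers: a firm whose cash arrives in the next 5 years is nearly insensitive to a move in long-term forward rates, while one with evenly spread cashflows is very sensitive.

```python
def pv(cashflows, near_rate, far_rate, split=5):
    """PV with a stylized two-segment discount curve: the first `split`
    years at near_rate, later years at the forward rate far_rate."""
    total = 0.0
    for t, c in enumerate(cashflows, start=1):
        if t <= split:
            total += c / (1 + near_rate) ** t
        else:
            total += c / ((1 + near_rate) ** split * (1 + far_rate) ** (t - split))
    return total

front_loaded = [100] * 5 + [1] * 45   # almost all cash arrives in the next 5 years
spread_out = [12] * 50                # same horizon, cash spread evenly

for name, cf in [("front-loaded", front_loaded), ("spread-out", spread_out)]:
    base = pv(cf, 0.04, 0.04)
    bumped = pv(cf, 0.04, 0.06)       # long-term forward rates rise 2 points
    print(f"{name}: PV falls {100 * (1 - bumped / base):.1f}%")
# front-loaded: PV falls ~1%; spread-out: PV falls ~20%
```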
Even if this is right it doesn't affect your overall point too much though, since 10-20 year time horizons are practically as bad as 30-60 year time horizons.
I think the market just doesn't put much probability on a crazy AI boom anytime soon. If you expect such a boom then there are plenty of bets you probably want to make. (I am personally short US 30-year debt, though it's a very small part of my AI-boom portfolio.)
I think it's very hard for the market to get 30-year debt prices right because the time horizons are so long and they depend on super hard empirical questions with ~0 feedback. Prices are also determined by supply and demand across a truly huge number of traders, and making this trade locks up your money forever and can't be leveraged too much. So market forecasts are basically just a reflection of broad intellectual consensus about the future of growth (rather than views of the "smart money" or anything), and the mispricing is just a restatement of the fact that AI-boom is a contrarian position.
Some scattered thoughts (sorry for such a long comment!). Organized in order rather than by importance---I think the most important argument for me is the analogy to computers.
Scaling down all the amounts of time, here's how that situation sounds to me: US output doubles in 15 years (basically the fastest it ever has), then doubles again in 7 years. The end of the 7 year doubling is the first time that your hypothetical observer would say "OK yeah maybe we are transitioning to a new faster growth mode," and stuff started getting clearly crazy during the 7 year doubling. That scenario wouldn't be surprising to me. If that scenario sounds typical to you then it's not clear there's anything we really disagree about.
Moreover, it seems to contradict your claim that 0.14% growth was already high by historical standards.
0.14%/year growth sustained over 500 years is a doubling. If you did that between 5000BC and 1000AD then that would be ~4000x growth. I think we have a lot of uncertainty about how much growth actually occurred but we're pretty sure it's not 4000x (e.g. going from 1 million people to 4 billion people). Standard kind of made-up estimates are more like 50x (e.g. those cited in Roodman's report), about half that growth rate.
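Spelling out the compounding (the ~50x figure is the Roodman-cited estimate mentioned above):

```python
r = 0.0014                       # 0.14%/year
print((1 + r) ** 500)            # ~2.0: a doubling every ~500 years
print((1 + r) ** 6000)           # ~4400x over the 6000 years from 5000BC to 1000AD

# Rate implied if the period actually saw ~50x growth:
implied = 50 ** (1 / 6000) - 1
print(f"{implied:.4%}/year")     # ~0.065%/year, about half of 0.14%
```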
There is lots of variance in growth rates, and growth would temporarily rise above that level, since populations grow much faster than that when they have enough resources. That makes it harder to tell what's going on, but I think you should still be surprised to see such high growth rates sustained for many centuries.
(assuming you discount 1350 as I do as an artefact of recovering from various disasters)
This doesn't seem to work, especially if you look at the UK. Just consider a period long enough (like 1000AD to 1500AD) to include both the disasters and the recovery. Over such a period, disasters should if anything decrease growth rates. Yet this period saw atypically fast growth by historical standards.
Some thoughts on the historical analogy:
If you look at the graph at the 1700 mark, GWP is seemingly on the same trend it had been on since antiquity. The industrial revolution is said to have started in 1760, and GWP growth really started to pick up steam around 1850. But by 1700 most of the Americas, the Philippines and the East Indies were directly ruled by European powers
I think European GDP was already pretty crazy by 1700. There's been a lot of recent arguing about the particular numbers and I am definitely open to just being wrong about this, but so far nothing has changed my basic picture.
After a minute of thinking, my best guess for the most reliable time series was the Maddison Project. I pulled their dataset from here.
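For what it's worth, here's roughly how one could compute annualized growth rates from a series like this; the filename and column names below are assumptions, not the dataset's actual schema:

```python
import pandas as pd

# Assumed local copy of the Maddison Project dataset; the real file's
# name, sheet, and column names may differ from what's used here.
df = pd.read_excel("maddison_data.xlsx")
uk_pop = df[df["country"] == "United Kingdom"].set_index("year")["pop"]

def annualized_growth(series, start, end):
    """Average annual growth rate between two years present in the series."""
    return (series.loc[end] / series.loc[start]) ** (1 / (end - start)) - 1

for start, end in [(1000, 1500), (1500, 1700), (1700, 1850)]:
    print(f"{start}-{end}: {annualized_growth(uk_pop, start, end):.3%}/year")
```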
Here's UK population:
A 0.14%/year growth rate was already very fast by historical standards, and by 1700 things seemed really crazy.
Here's population in Spain:
The 1500-1700 acceleration is less marked here, but growth still seems to have been fast.
Here's the world using the data we've all been using in the past (which I think is much more uncertain):
This puts the 0.14%/year growth in the UK in context, and also suggests that things were generally blowing up by 1700AD.
I think that looking at the country-level data is probably better since it's more robust, unless your objection is "GWP isn't what matters because some countries' GDP will be growing much faster."
Is your impression that if customers were willing to pay for it, then that wouldn't be sufficient cause to say that it benefited customers? (Does that mean that e.g. a standard ensuring that children's food doesn't cause discomfort also can't be protected, since it benefits customers' kids rather than customers themselves?)