if they had explained why their views were not moved by the expert reviews OpenPhil has already solicited.
I included responses to each review, explaining my reactions to it. What kind of additional explanation were you hoping for?
Davidson 2021 on semi-informative priors received three reviews.
By my judgment, all three made strong negative assessments, in the sense (among others) that if one agreed with the review, one would not use the report's reasoning to inform decision-making in the manner advocated by Karnofsky (and by Beckstead).
For Hajek & Strasser's and Halpern's reviews, I don't think "strong negative assessment" is supported by your quotes. The quotes focus on things like 'the reported numbers are too precise' and 'we should use more than a single probability measure' rather than whether the estimate is too high or too low overall, or whether we should be worrying more vs. less about TAI. I also think the reviews are more positive overall than you imply, e.g. Halpern's review says "This seems to be the most serious attempt to estimate when AGI will be developed that I’ve seen".
Davidson 2021 on explosive growth received many reviews... Two of them made strong negative assessments.
I agree that these two reviewers assign much lower probabilities to explosive growth than I do (I explain why I continue to disagree with them in my responses to their reviews). Again though, I think these reviews are more positive overall than you imply, e.g. Jones states that the report "is balanced, engaging a wide set of viewpoints and acknowledging debates and uncertainties... is also admirably clear in its arguments and in digesting the literature... engages key ideas in a transparent way, integrating perspectives and developing its analysis clearly and coherently." This is important as it helps us move from "maybe we're completely missing a big consideration" to "some experts continue to disagree for certain reasons, but we have a solid understanding of the relevant considerations and can hold our own in a disagreement".
Thanks for this!
I won't address all of your points right now, but I will say that I hadn't considered that "R&D is compensating for natural resources becoming harder to extract over time", which would increase the returns somewhat. However, my sense is that raw resource extraction is a small % of GDP, so I don't think this effect would be large.
Sorry for the slow reply!
I agree you can probably beat this average by aiming specifically at R&D for boosting economic growth.
I'd be surprised if you could spend $100s of millions per year and consistently beat the average by a large amount (>5X) though:
Another relevant point is that some interventions increase R&D inputs in a non-targeted, or weakly targeted, way. E.g. high-skill immigration to the US or increasing government funding for broad R&D pots. The 'average R&D' number seems particularly useful for these interventions.
Great question!
I would read Appendix G as conditional on "~no civilizational collapse (from any cause)", but not conditional on "~no AI-triggered fundamental reshaping of society that unexpectedly prevents growth". I think the latter would be incorporated in "an unanticipated bottleneck prevents explosive growth".
I think the question of GDP measurement is a big deal here. GDP deflators determine what counts as "economic growth" compared to nominal price changes, but deflators don't really know what to do with new products that didn't exist. What was the "price" of an iPhone in 2000? Infinity? Could this help recover Roodman's model? If ideas being produced end up as new products that never existed before, could that mean that GDP deflators should be "pricing" these replacements as massively cheaper, thus increasing the resulting "real" growth rate?
This is an interesting idea. It wasn't a focus of my work, but my loose impression is that when economists have attempted to correct for these kinds of problems the resulting adjustment isn't nearly large enough to make Roodman's model consistent with the recent data. Firstly, measurements of growth in the 1700s and 1800s face the same problem, so it's far from clear that the adjustment would raise recent growth relative to old growth (which is what Roodman's model would need). Secondly, I think that when economists have tried to measure willingness to pay for 'free' goods like email and social media, the willingness is not high enough to make a huge difference to GDP growth.
Thank you for this comment! I'll reply to different points in different comments.
But then the next point seems very clear: there's been tons of population growth since 1880 and yet growth rates are not 4x 1880 growth rates despite having 4x the population. The more people -> more ideas thing may or may not be true, but it hasn't translated to more growth.
So if AI is exciting because AIs could start expanding the number of "people" or agents coming up with ideas, why aren't we seeing huge growth spurts now?
The most plausible models have diminishing returns to efforts to generate new ideas. In these models, you need an exponentially growing population to sustain exponential growth. So these models aren't surprised that growth hasn't increased since 1880.
At the same time, these same models imply that if increasing output causes the population to increase (more output -> more people), then there can be super-exponential growth. This is because the population can grow super-exponentially with this feedback loop.
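Here's a rough sketch of this point (my own illustration, not a calculation from the report), using a simple idea-production function dA/dt = θ·A^φ·L^λ with diminishing returns (φ < 1). All parameter values are illustrative assumptions. With a fixed population the growth rate falls, with an exogenously growing population it stays roughly constant, and with the output-to-population feedback loop it accelerates.

```python
# Sketch of a semi-endogenous idea-production model:
#   dA/dt = theta * A**phi * L**lam,  with diminishing returns (phi < 1).
# Parameter values are illustrative assumptions, not estimates from the report.

def simulate(pop_mode, phi=0.5, lam=1.0, theta=0.02, years=90, dt=0.1):
    """Return the growth rate of the idea stock A at the start and end of the run."""
    A, L = 1.0, 1.0
    rates = []
    for _ in range(int(years / dt)):
        dA = theta * A**phi * L**lam * dt
        rates.append(dA / (A * dt))      # instantaneous growth rate of A
        A += dA
        if pop_mode == "exponential":    # exogenous ~1%/yr population growth
            L *= 1 + 0.01 * dt
        elif pop_mode == "feedback":     # more output -> more "researchers" (e.g. AIs)
            L = A
        # "constant": L stays fixed
    return rates[0], rates[-1]

for mode in ("constant", "exponential", "feedback"):
    start, end = simulate(mode)
    print(f"{mode:12s}  growth rate: {start:.3f} -> {end:.3f}")
```

The same diminishing-returns assumption that explains why growth hasn't risen since 1880 also allows accelerating growth once output feeds back into the number of "people" generating ideas.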
So my overall opinion is that it's 100% consistent to think:
Great suggestion - thanks! Have edited.
Hey - interesting question!
This isn't something I looked into in depth, but I think that if AI drives explosive economic growth then you'd probably see large rises in both absolute energy use and in energy efficiency.
Energy use might grow via (e.g.) massively expanding solar power to the world's deserts (see this blog from Carl Shulman). Energy efficiency might grow via replacing human workers with AIs (allowing services to be delivered with less energy input), rapid tech progress further increasing the energy efficiency of existing goods and services, the creation of new valuable products that use very little energy (e.g. amazing virtual realities), or in other ways.
Thanks for these thoughts! You raise many interesting points.
On footnote 16, you write: "For example, the application of Laplace’s law described below implies that there was a 50% chance of AGI being developed in the first year of effort". But historically, participants in the Dartmouth conference were gloriously optimistic.
I'm not sure whether the participants at Dartmouth would have assigned 50% to creating AGI within a year and >90% within a decade, as implied by the Laplace prior. But either way I do think these probabilities would have been too high. It's very rare, perhaps unprecedented, for such transformative tech progress to be made with so little effort. Even listing some of the best examples of quick and dramatic tech progress, I found the average time for a milestone to be achieved was >50 years, and the list omits the many failed projects.
That said, I agree that the optimism before Dartmouth is some reason to use a high first-trial probability (though I don't think as high as 50%).
The point that Laplace's prior depends on the unit of time chosen is really interesting, but it ends up not mattering once a bit of time has passed.
Agreed! (Interestingly, it only doesn't matter once enough time has passed that Laplace strongly expects AGI to have already happened.) Still, Laplace's predictions about the initial years of effort do depend on the trial definition: defining a 'trial' as 1 day, 1 year, or 30 years gives very different results. I think this shows something is wrong with the rule more generally. The root of the problem is that Laplace assigns a 50% probability of success to the first trial no matter how we define a trial. I think my alternative rule, where you choose the trial definition and the first-trial probability in tandem, addresses this issue.
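To make the trial-definition sensitivity concrete, here's a small sketch (the 30-year horizon is just an illustrative choice): with zero data, Laplace's rule gives P(success within n trials) = n/(n+1), so the same calendar horizon gets very different probabilities depending on what counts as a trial.

```python
# Sketch: Laplace's rule with different trial definitions, starting from zero data.
# P(no success in n trials) = 1/(n + 1), so P(success within n trials) = n/(n + 1).
# The 30-year horizon is an illustrative choice.

horizon_years = 30
for trial_name, trial_years in [("1 day", 1 / 365), ("1 year", 1), ("30 years", 30)]:
    n_trials = horizon_years / trial_years
    p_success = n_trials / (n_trials + 1)
    print(f"trial = {trial_name:9s}: P(AGI within 30 years) = {p_success:.4f}")
```

Each definition assigns 50% to success within a single trial, so the shorter the trial, the more probability mass gets crammed into the early years of effort.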
If you rule out AGI until 2028 (as you do in your report), the Laplace prior gives you 1 - (1 - 1/((2028-1956)+1))^(2036-2028) ≈ 10.4% ≈ 10%, which is well within your range of 1% to 18%, and very close to your estimate of 8%.
My estimate of 8% only rules out AGI by the end of 2020. If I rule out AGI by the end of 2028, it becomes ~4%. This is quite a lot smaller than the 10% from Laplace.
The top of my range would be 9%, which is close to Laplace. However, this high end is driven by forecasting that the inputs to AI R&D will grow faster than their historical average, so more trials occur per year. I don't think such high values would be reasonable without taking these forecasts into account.
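For reference, here's the arithmetic behind the ~10% figure quoted above, as a small sketch (treating each calendar year since 1956 as one trial, following the quoted formula), alongside the exact rule-of-succession version:

```python
# Sketch of the Laplace calculation quoted above (year-as-trial convention assumed):
# condition on no AGI from 1956 through the end of 2028, then ask for success by 2036.
failures = 2028 - 1956          # 72 failed yearly trials
horizon = 2036 - 2028           # 8 further yearly trials

# Constant-hazard approximation used in the quoted formula: p = 1/(failures + 1) per year.
p_const = 1 - (1 - 1 / (failures + 1)) ** horizon

# Exact rule of succession: P(no success in next k | none in first n) = (n + 1)/(n + k + 1).
p_exact = 1 - (failures + 1) / (failures + horizon + 1)

print(f"approximation: {p_const:.3f}, exact rule of succession: {p_exact:.3f}")
# -> roughly 0.104 and 0.099
```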
When you write "I also find that pr(AGI by 2036) from Laplace’s law is too high," what outside-view consideration are you basing that on? Also, is it really too high?
I find it too high mostly because it follows from aggressive assumptions about the chance of success in the first few years of effort, but also because of the reference classes discussed in the report.
Another way to justify ruling out Laplace is that if you had a hyper-prior, putting some weight on Laplace and some on more conservative rules, you would put extremely little weight on Laplace by now. (Although I personally wouldn't put much weight on Laplace even in an initial hyper-prior.)
There's a counter-intuitive example that illustrates this hyper-prior behaviour nicely. Suppose you assigned 20% to "AGI impossible" and 80% to another prior. If the other prior is Laplace, then your weight on "AGI impossible" rises to 92% by 2020, and you only assign 8% to Laplace. Your pr(AGI by 2036) is 1.6%. By contrast, if you reduce the first-trial probability in Laplace down to 1/100, then your weight on "AGI impossible" only rises to 29% by 2020 and your pr(AGI by 2036) is 6.3%. So having a lower first-trial probability ends up increasing pr(AGI by 2036).
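Here's a rough sketch of that hyper-prior update (my own illustration, not code from the report). I use the generalized rule where a first-trial probability f implies P(no success in the first n trials) = (1 − f)/(1 + (n − 1)f), treat each year since 1956 as a trial, and condition on no AGI through 2020; the exact figures depend on the trial-counting convention, so they may differ slightly from those quoted above.

```python
# Sketch: hyper-prior over {"AGI impossible", a Laplace-style prior with
# first-trial probability f}, updated on no AGI from 1956 through 2020.
# Conventions (year-as-trial, start year 1956) are illustrative assumptions.

def p_no_success(f, n):
    """Generalized rule of succession: P(no success in first n trials | first-trial prob f)."""
    return (1 - f) / (1 + (n - 1) * f)

def update(f, prior_impossible=0.2, start=1956, now=2020, target=2036):
    n = now - start
    k = target - now
    likelihood = p_no_success(f, n)      # P(observed failures | Laplace-style prior)
    post_impossible = prior_impossible / (
        prior_impossible + (1 - prior_impossible) * likelihood
    )
    # P(AGI by target | Laplace-style prior, no AGI so far):
    p_agi_given_prior = 1 - p_no_success(f, n + k) / likelihood
    return post_impossible, (1 - post_impossible) * p_agi_given_prior

for f in (1 / 2, 1 / 100):
    post_imp, p_agi_2036 = update(f)
    print(f"f = {f:.3g}: P(impossible | data) = {post_imp:.0%}, P(AGI by 2036) = {p_agi_2036:.1%}")
```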
It is not clear to me that by adjusting the Laplace prior down when you categorize AGI as a "highly ambitious but feasible technology" you are not updating twice
This is an interesting idea, thanks. I think the description "highly ambitious" would have been appropriate in 1956: AGI would allow automation of ~all labour. In addition, it did seem hard to me to find reference classes supporting first-trial probability values above 1/50, and some reference classes I looked into suggest lower values.
That said, it's possible that my favoured range for the first-trial probability [1/100, 1/1000] was influenced by my knowledge that we failed to develop AGI. If so, this would have made the range too conservative.
Thanks for these great questions Ben!
To take them point by point: