Tom_Davidson


Report on Whether AI Could Drive Explosive Economic Growth

Great question!

I would read Appendix G as conditional on "~no civilizational collapse (from any cause)", but not conditional on "~no AI-triggered fundamental reshaping of society that unexpectedly prevents growth". I think the latter would be incorporated in "an unanticipated bottleneck prevents explosive growth".

Some thoughts on David Roodman’s model of economic growth and its relation to AI timelines

I think the question of GDP measurement is a big deal here. GDP deflators determine what counts as "economic growth" compared to nominal price changes, but deflators don't really know what to do with new products that didn't exist. What was the "price" of an iPhone in 2000? Infinity? Could this help recover Roodman's model? If ideas being produced end up as new products that never existed before, could that mean that GDP deflators should be "pricing" these replacements as massively cheaper, thus increasing the resulting "real" growth rate?

This is an interesting idea. It wasn't a focus of my work, but my loose impression is that when economists have attempted to correct for these kinds of problems the resulting adjustment isn't nearly large enough to make Roodman's model consistent with the recent data. Firstly, measurements of growth in the 1700s and 1800s face the same problem, so it's far from clear that the adjustment would raise recent growth relative to old growth (which is what Roodman's model would need). Secondly, I think that when economists have tried to measure willingness to pay for 'free' goods like email and social media, the willingness is not high enough to make a huge difference to GDP growth.

Some thoughts on David Roodman’s model of economic growth and its relation to AI timelines

Thank you for this comment! I'll reply to different points in separate comments.

But then the next point seems very clear: there's been tons of population growth since 1880 and yet growth rates are not 4x 1880 growth rates despite having 4x the population. The more people -> more ideas thing may or may not be true, but it hasn't translated to more growth.

So if AI is exciting because AIs could start expanding the number of "people" or agents coming up with ideas, why aren't we seeing huge growth spurts now?

The most plausible models have diminishing returns to efforts to generate new ideas. In these models, you need an exponentially growing population to sustain exponential growth. So these models aren't surprised that growth hasn't increased since 1880.

At the same time, these same models imply that if increasing output causes the population to increase (more output -> more people), then there can be super-exponential growth. This is because, with that feedback loop, the population itself can grow super-exponentially, which in turn drives super-exponential growth in ideas and output.
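To make this concrete, here's a minimal toy simulation of a standard semi-endogenous idea-production setup (my own sketch with illustrative parameters, not the exact models from the report): ideas grow as dA/dt = L·A^φ with φ < 1, and we compare an exogenously growing population with one that tracks output.

```python
# Toy semi-endogenous growth model (illustrative assumption: dA/dt = L * A**phi).
phi, pop_growth, dt = 0.5, 0.02, 0.001

def idea_growth_rate(years, feedback):
    """Growth rate of the idea stock A after `years`, with or without the
    output -> population feedback loop."""
    A, L = 1.0, 0.04          # calibrated so both cases start at ~4%/year growth
    for _ in range(int(years / dt)):
        dA = L * A**phi * dt
        A += dA
        # feedback: the effective "population" of researchers (e.g. AIs) scales
        # with output/ideas; otherwise it just grows exponentially on its own.
        L = 0.04 * A if feedback else L * (1 + pop_growth * dt)
    return dA / (A * dt)

# Exponentially growing population: growth of A stays roughly constant (~4%/yr).
print([round(idea_growth_rate(t, feedback=False), 3) for t in (10, 25, 40)])
# Population proportional to output: the growth rate itself keeps rising
# (super-exponential growth, heading for a finite-time blow-up).
print([round(idea_growth_rate(t, feedback=True), 3) for t in (10, 25, 40)])
```

The point is just that the same diminishing-returns setup is consistent with both flat growth rates under ordinary population growth and accelerating growth once output feeds back into the number of idea-generators.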

So my overall opinion is that it's 100% consistent to think:

  1. The increased population of the last 100 years didn't lead to faster growth
  2. If AGI means that more output -> more people, growth will accelerate.
Report on Whether AI Could Drive Explosive Economic Growth

Hey - interesting question! 

This isn't something I looked into in depth, but I think that if AI drives explosive economic growth then you'd probably see large rises in both absolute energy use and in energy efficiency.

Energy use might grow via (e.g.) massively expanding solar power to the world's deserts (see this blog from Carl Shulman). Energy efficiency might grow via replacing human workers with AIs (allowing services to be delivered with less energy input), rapid tech progress further increasing the energy efficiency of existing goods and services, the creation of new valuable products that use very little energy (e.g. amazing virtual realities), or in other ways.

Report on Semi-informative Priors for AI timelines (Open Philanthropy)

Thanks for these thoughts! You raise many interesting points.

On footnote 16, you write: "For example, the application of Laplace’s law described below implies that there was a 50% chance of AGI being developed in the first year of effort". But historically, participants in the Dartmouth conference were gloriously optimistic.

I'm not sure whether the participants at Dartmouth would have assigned 50% to creating AGI within a year and >90% within a decade, as implied by the Laplace prior. But either way I do think these probabilities would have been too high. It's very rare, perhaps unprecedented, for such transformative tech progress to be made with so little effort. Even listing some of the best examples of quick and dramatic tech progress, I found the average time for a milestone to be achieved was >50 years, and the list omits the many failed projects.

That said, I agree that the optimism before Dartmouth is some reason to use a high first-trial probability (though I don't think as high as 50%).

 

The point that Laplace's prior depends on the unit of time chosen is really interesting, but it ends up not mattering once a bit of time has passed.

Agreed! (Interestingly, it only stops mattering once enough time has passed that Laplace strongly expects AGI to have already happened.) Still, Laplace's predictions about the initial years of effort do depend on the trial definition: defining a 'trial' as 1 day, 1 year, or 30 years gives very different results. I think this shows something is wrong with the rule more generally. The root of the problem is that Laplace assigns a 50% probability to the first trial succeeding no matter how we define a trial. I think my alternative rule, where you choose the trial definition and the first-trial probability in tandem, addresses this issue.
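To illustrate with the standard rule-of-succession arithmetic (the trial definitions below are just hypothetical examples, not anything from the report): under Laplace, P(no success in the first N trials) = 1/(N+1), so what you predict for the first year or decade of effort swings wildly with the trial definition.

```python
# Laplace's rule of succession: P(at least one success in the first N trials)
# = 1 - 1/(N+1), regardless of how long a "trial" is.
def p_success_within(n_trials):
    return 1 - 1 / (n_trials + 1)

# P(AGI within the first year of effort)
print(f"daily trials:  {p_success_within(365):.1%}")   # ~99.7%
print(f"yearly trials: {p_success_within(1):.1%}")     # 50.0%

# P(AGI within the first decade of effort)
print(f"daily trials:  {p_success_within(3650):.2%}")  # ~99.97%
print(f"yearly trials: {p_success_within(10):.1%}")    # ~90.9%
# With 30-year trials, a decade is less than one trial, and the whole first
# 30 years only gets 50%.
```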

 

If you rule out AGI until 2028 (as you do in your report), the Laplace prior gives you 1 - (1 - 1/((2028-1956)+1))^(2036-2028) ≈ 10.4% ≈ 10%, which is well within your range of 1% to 18%, and quite close to your estimate of 8%

My estimate of 8% only rules out AGI by the end of 2020. If I rule out AGI by the end of 2028, it becomes ~4%. This is quite a lot smaller than the 10% from Laplace.
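For reference, a quick check of the ~10% figure, assuming one Laplace trial per calendar year starting in 1956 (whether you count 72 or 73 failed trials barely changes it):

```python
failures = 2028 - 1956   # failed yearly trials if AGI is ruled out until 2028
m = 2036 - 2028          # further yearly trials by 2036

# Exact rule of succession: P(no success in next m trials | n failures) = (n+1)/(n+1+m)
exact = 1 - (failures + 1) / (failures + 1 + m)
# Constant-hazard approximation used in the quoted formula
approx = 1 - (1 - 1 / (failures + 1)) ** m

print(f"{exact:.1%} (exact), {approx:.1%} (approximation)")   # ~9.9%, ~10.4%
```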

The top of my range would be 9%, which is close to Laplace. However, this high-end is driven by forecasting that the inputs to AI R&D will grow faster than their historical average, so more trials occur per year. I don't think such high values would be reasonable without taking these forecasts into account.

 

When you write "I also find that pr(AGI by 2036) from Laplace’s law is too high," what outside-view consideration are you basing that on? Also, is it really too high?

I find it too high mostly because it follows from aggressive assumptions about the chance of success in the first few years of effort, but also because of the reference classes discussed in the report.

Another way to justify ruling out Laplace is that if you had a hyper-prior, putting some weight on Laplace and some on more conservative rules, you would put extremely little weight on Laplace by now. (Although I personally wouldn't put much weight on Laplace even in an initial hyper-prior.)

There's a counter-intuitive example that illustrates this hyper-prior behaviour nicely. Suppose you assigned 20% to "AGI impossible" and 80% to another prior. If the other prior is Laplace, then your weight on "AGI impossible" rises to 92% by 2020, and you only assign 8% to Laplace. Your pr(AGI by 2036) is 1.6%. By contrast, if you reduce the first-trial probability in Laplace down to 1/100, then your weight on "AGI impossible" only rises to 29% by 2020 and your pr(AGI by 2036) is 6.3%. So having a lower first-trial probability ends up increasing pr(AGI by 2036).
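Here's a minimal sketch of that hyper-prior update. I'm assuming one trial per year from 1956 (so 64 failed trials by the end of 2020) and a Beta(1, 1/p1 − 1) generalization of Laplace, under which the chance of success on the trial after n failures is 1/(1/p1 + n); these conventions may not exactly match the report's, so the percentages can come out slightly differently from those above, but the reversal is the same.

```python
from math import prod

def p_no_success(n_trials, p1):
    """P(no success in the first n_trials) when the chance of success on the
    trial after n failures is 1 / (1/p1 + n); p1 = 1/2 recovers Laplace."""
    return prod(1 - 1 / (1 / p1 + n) for n in range(n_trials))

def hyperprior(p1, w_impossible=0.2, failures=64, extra_years=16):
    q = p_no_success(failures, p1)                  # P(observed failures | rule)
    w_imp = w_impossible / (w_impossible + (1 - w_impossible) * q)
    p_agi = 1 - p_no_success(failures + extra_years, p1) / q   # P(AGI by 2036 | rule, data)
    return w_imp, (1 - w_imp) * p_agi

for p1 in (1/2, 1/100):
    w_imp, pr_2036 = hyperprior(p1)
    print(f"p1 = {p1}: weight on 'AGI impossible' = {w_imp:.0%}, pr(AGI by 2036) = {pr_2036:.1%}")
```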

 

It is not clear to me that by adjusting the Laplace prior down when you categorize AGI as a "highly ambitious but feasible technology" you are not updating twice

This is an interesting idea, thanks. I think the description "highly ambitious" would have been appropriate in 1956: AGI would allow automation of ~all labour. In addition, it did seem hard to me to find reference classes supporting first-trial probability values above 1/50, and some reference classes I looked into suggest lower values.

That said, it's possible that my favoured range for the first-trial probability [1/100, 1/1000] was influenced by my knowledge that we failed to develop AGI. If so, this would have made the range too conservative.

Report on Semi-informative Priors for AI timelines (Open Philanthropy)

Agreed - the framework can be applied to things other than AGI.

The ITN framework, cost-effectiveness, and cause prioritisation

Thanks for this Halstead - thoughtful article.

I have one push-back, and one question about your preferred process for applying the ITN framework.

1. After explaining the 80K formalisation of ITN you say

Thus, once we have information on importance, tractability and neglectedness (thus defined), then we can produce an estimate of marginal cost-effectiveness.
The problem with this is: if we can do this, then why would we calculate these three terms separately in the first place?

I think the answer is that in some contexts it's easier to calculate each term separately and then combine them in a later step, than to calculate the cost-effectiveness directly. It's also easier to sanity check that each term looks sensible separately, as our intuitions are often more reliable for the separate terms than for the marginal cost effectiveness.

Take technical AI safety research as an example. I'd have trouble directly estimating "How much good would we do by spending $1000 in this area", or sanity checking the result. I'd also have trouble with "What % of this problem would we solve by spending another $100?" (your preferred definition of tractability). I'd feel at least somewhat more confident making and eye-balling estimates for

  • "How good would it be to solve technical AI safety?"
  • "How much of the problem would we solve by doubling the amount of money/researchers in this area (or increasing it by 10%)?"
  • "How much is being spent in the area?"

I do think the tractability estimate is the hardest to construct and assess in this case, but I think it's better than the alternatives. And if we assume diminishing marginal returns we can make the tractability estimate easier by replacing it with "How many resources would be needed to completely solve this problem?"

So I think the 80K formalisation is useful in at least some contexts, e.g. AI safety.
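To show how the three separate estimates multiply out, here's a toy version of that calculation; the variable names and numbers below are hypothetical placeholders of mine, not real figures for AI safety.

```python
# 80K-style decomposition (as I understand it), with made-up numbers:
#   importance   = good done per 1% of the problem solved
#   tractability = % of the problem solved per 1% increase in resources
#   neglectedness enters via how big a % increase an extra dollar buys
good_per_pct_solved = 1e6      # hypothetical "importance"
pct_solved_per_pct_more = 0.3  # hypothetical "tractability"
current_spending = 50e6        # hypothetical resources already in the area ($/yr)

def marginal_cost_effectiveness(extra_dollars=1000):
    pct_more_resources = 100 * extra_dollars / current_spending
    pct_solved = pct_solved_per_pct_more * pct_more_resources
    return good_per_pct_solved * pct_solved / extra_dollars   # good done per extra $

print(marginal_cost_effectiveness())
```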


2. In the alternative ITN framework of the Founders Pledge, neglectedness is just one input to tractability. But then you score each cause on i) the ratio importance/neglectedness, and ii) all the factors bearing on tractability except neglectedness. To me, it feels like (ii) would be quite hard to score, as you have to pretend you don't know things that you do know (neglectedness).

Wouldn't it be easier to simply score each cause on importance and tractability, using neglectedness as one input to the tractability score? This has the added benefit of not assuming diminishing marginal returns, as you can weight neglectedness less strongly when you don't think there are DMR.

Am I an Effective Altruist for moral reasons?

I found Nakul's article very interesting too, but am surprised at what it led you to conclude.

I didn't think the article was challenging the claim that doing paradigmatic EA activities was moral. I thought Nakul was suggesting that doing them wasn't obligatory, and that the consequentialist reasons for doing them could be overridden by an individual's projects, duties and passions. He was pushing against the idea that EA can demand these activities of everyone.

It seems like your personal projects would lead you to do EA activities. So I'm surprised you judge EA activities to be less moral than alternatives. Which activities, and why?

I would have expected you to conclude something like "Doing EA activities isn't morally required of everyone; for some people it isn't the right thing to do; but for me it absolutely is the right thing to do".
