[Cross-posted from my website.]

Some rationalists and effective altruists have argued (1, 2, 3) that there is a non-negligible chance that artificial intelligence will attain human or super-human levels of general intelligence very soon.

In this post, I’d like to outline why I’m not convinced that this scenario has non-negligible probability. To clarify, I’m arguing against the hypothesis that “artificial general intelligence (AGI) is 10% likely to be built in the next 10 years”, where AGI is defined as the ability to successfully perform any intellectual task that a human is capable of. (My favoured definition of “AGI” is that autonomous intelligent machines contribute at least 50% of the global economy, as outlined here, but I don’t think the precise definition matters much for the purposes of this post.)

The simplest counterargument is to look at the rate of progress we’re seeing so far and extrapolate from that. Have there been any ground-breaking results over the last few years? I’m not talking about “normal” results of machine learning papers; I’m talking about milestones that constitute serious progress towards general intelligence. We are surely seeing progress in the former sense – I don’t mean to belittle the efforts of machine learning researchers. (An example of what I’d consider “ground-breaking” is advanced transfer between different domains, e.g. playing many board or video games well after training on a single game.)

Some people considered AlphaGo (and later AlphaZero) ground-breaking in this sense. But that (the match against Lee Sedol) was in March 2016, more than two years ago at the time of this writing (late 2018) – and it seems that there haven’t been comparable breakthroughs since then. (In my opinion, AlphaGo wasn’t that exceptional anyway – but that’s a topic for another post.)

Conditional on short timelines, I'd expect to observe ground-breaking progress all the time. So that seems to be evidence that this scenario is not materializing. In other words, it seems clear to me that the current rate of progress is not sufficient for AGI in 10 years. (See also Robin Hanson’s AI progress estimate.)

That said, we should distinguish between a) the belief that the current rate of progress will lead to AGI within 10 years, and b) the belief that there will be significant acceleration at some point, which will enable AGI within 10 years. One could reject a) and still expect a scenario where AGI arrives within 10 years, but for some reason we won't see impressive results until very near 'the end'. In that case, the lack of ground-breaking progress we see now isn’t (strong) evidence against short timelines.

But why expect that? There's an argument that progress will become discontinuous as soon as recursive self-improvement becomes possible. But we are talking about progress from the status quo to AGI, so that doesn't apply: it seems implausible that artificial intelligences would vastly accelerate progress before they are highly intelligent themselves. (I’m not fully sold on that argument either, but that’s another story for another time.)

Given that significant resources have been invested in AI / ML for quite a while, it seems that discontinuous progress – on the path to AGI, not during or after the transition – would be at odds with the usual patterns of technological progress. The reference class I’m thinking of is “improvement of a gradual attribute (like intelligence) of a technology over time, given significant investment of resources”. Examples that come to mind are the maximum speed of cars, which increased steadily over time, or perhaps computing power and memory capacity, which have also progressed very smoothly.

(See also AI Impacts’ discontinuous progress investigation. They actually consider new land speed records set by jet-propelled vehicles one of the few cases of (moderate) discontinuities that they’ve found so far. To me, that doesn’t feel analogous in terms of the necessary magnitude of the discontinuity, though.)

The point is even stronger if “intelligence” (in the context of machine intelligence) is actually a collection of many distinct skills and abilities rather than a meaningful, unified property. In that case, reaching AGI requires progress on many fronts, comparable to improving the “overall quality” of cars or computer hardware.

It’s possible that progress accelerates simply due to increased interest – and therefore increased funding and other resources – as more people recognise its potential. Indeed, while historical progress in AI was fairly smooth, there may have been some acceleration over the last decade, plausibly due to increased interest. So perhaps that could happen to an even larger degree in the future?

There is, however, already significant excitement (perhaps hype) around AI, so it seems unlikely to me that this could increase the rate of progress by orders of magnitude. In particular, if highly talented researchers are the main bottleneck, you can’t scale up the field simply by pouring more money into it. Plus, it has been argued that the next AI winter is well on its way, i.e. that we are actually starting to see a decline, not a further increase, in interest in AI.

--

One of the most common reasons to nevertheless assign a non-negligible probability – say, 10% – is simply that we’re so clueless about what will happen in the future that we shouldn’t be confident either way, and should thus favor a broad distribution over timelines.

But are we actually that ignorant? It is indeed extremely hard, if not impossible, to predict the specific results of complex processes over long timespans – like, which memes and hashtags will be trending on Twitter in May 2038. However, the plausibility or implausibility of short timelines is not a question of this type since the development of AGI would be the result of a broad trend, not a specific result. We have reasonably strong forms of evidence at our disposal: we can look at historical and current rates of progress in AI, we can consider general patterns of innovation and technological progress, and we can estimate how hard general intelligence is (e.g. whether it’s an aggregation of many smart heuristics vs. a single insight).

Also, what kind of probability should an ignorant prior assign to AGI in 10 years? 10%? But then wouldn’t you assign 10% to advanced nanotechnology in 10 years because of ignorance? What about nuclear risk – we’re clueless about that too, so maybe 10% chance of a major nuclear catastrophe in the next 10 years? 10% on a complete breakdown of the global financial system? But if you keep doing that with more and more things, you’ll end up with near certainty of something crazy happening in the next 10 years, which seems wrong given historical base rates. So perhaps an ignorant prior should actually place much lower probability on each individual event.
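
To make the arithmetic concrete, here’s a back-of-the-envelope sketch (the number of hypothetical “crazy” events and the independence assumption are purely illustrative):

```python
# Back-of-the-envelope: if an "ignorant prior" assigns 10% to each of N
# (assumed independent) unprecedented events over the next 10 years, the
# implied probability that at least one of them happens grows quickly with N.
for n in [1, 3, 5, 10, 20]:
    p_at_least_one = 1 - 0.9 ** n
    print(f"{n:2d} events at 10% each -> P(at least one) = {p_at_least_one:.0%}")

# Ten such events already imply a ~65% chance of some unprecedented upheaval
# within a decade, which looks too high against historical base rates.
```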

--

But perhaps one’s own opinion shouldn’t count for much anyway, and we should instead defer to some set of experts? Unfortunately, interpreting expert opinion is tricky. On the one hand, in some surveys machine learning researchers put non-negligible probability on “human-level intelligence” (whatever that means) in 10 years. On the other hand, my impression from interacting with the community is that the predominant opinion is still to confidently dismiss a short timeline scenario, to the point of not even seriously engaging with it.

Alternatively, one could look at the opinions of smart people in the effective altruism community (“EA experts”), who tend to assign a non-negligible probability to short timelines. But this (vaguely defined) set of people is subject to a self-selection bias – if you think AGI is likely to happen soon, you’re much more likely to spend years thinking and talking about that – and has little external validation of their “expert” status.

A less obvious source of “expert opinion” is the financial markets – because market participants have a strong incentive to get things right – and their implicit opinion is to confidently dismiss the possibility of short timelines.

In any case, it’s not surprising if some people have wrong beliefs about this kind of question. Lots of people are wrong about lots of things. It’s not unusual that communities (like EA or the machine learning community) have idiosyncratic biases or suffer from groupthink. The question is whether more people buy into short timelines compared to what you’d expect conditional on short timelines being wrong (in which case some people will still buy into it, comparable to past AI hypes).

Similarly, do we see fewer or more people buying into short timelines compared to what you’d expect if short timelines are right (in which case there will surely be a few stubborn professors who won’t believe it until the very end)?

I think the answer to the second question is “fewer”. Perhaps the answer to the first question is “somewhat more” but I think that’s less clear.
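
To spell out the structure of that comparison, here’s a minimal Bayesian sketch. The numbers are placeholders I made up for illustration, not estimates I’m defending; only the form of the update matters:

```python
# Illustrative-only update on H = "short timelines are right", given
# E = the observed level of community buy-in. All numbers are placeholders.
prior_h = 0.10           # placeholder prior on short timelines
p_e_given_h = 0.5        # how likely this level of buy-in is if H is true
p_e_given_not_h = 0.7    # how likely it is anyway (hype, groupthink, etc.)

posterior_h = (p_e_given_h * prior_h) / (
    p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
)
print(f"posterior on short timelines = {posterior_h:.2f}")

# With these placeholder likelihoods the observation is weak evidence against
# short timelines; with different likelihoods it could point the other way.
# The point is that "how many people buy in" only matters via this ratio.
```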

--

All things considered, I think the probability of a short timeline scenario (i.e. AGI within 10 years) is not more than 1-2%. What am I missing?

--

Comments

I think you're right about AGI being very unlikely within the next 10 years. I would note, though, that the OpenPhil piece you linked to predicted at least a 10% chance within 20 years, not 10 years (and I expect many people predicting "short timelines" would consider 20 years to be "short"). If you grant a 1-2% chance to AGI in 10 years, perhaps that translates to 5-10% within 20 years.

I'm guessing the people you have in mind would say that a 1-2% chance of AGI within 10 years is also non-negligible, so the argument for focusing on it still holds (given the enormity of the expected impact).

I think this line of reasoning may be misguided, at least if taken in a particular direction. If the AI Safety community loudly talks about there being a significant chance of AGI within 10 years, then this will hurt the AI Safety community's reputation when 10 years later we're not even close. It's important that we don't come off as alarmists. I'd also imagine that the argument "1% is still significant enough to warrant focus" won't resonate with a lot of people. If we really think the chances in the next 10 years are quite small, I think we're better off (at least for PR reasons) talking about how there's a significant chance of AGI in 20-30 years (or whatever we think), and how solving the problem of safety might take that long, so we should start today.

Makes sense – I think the optics question is pretty separate from the "what's our actual best-guess?" question.

Alternatively, one could look at the opinions of smart people in the effective altruism community (“EA experts”), who tend to assign a non-negligible probability to short timelines. But this (vaguely defined) set of people is subject to a self-selection bias – if you think AGI is likely to happen soon, you’re much more likely to spend years thinking and talking about that – and has little external validation of their “expert” status.

One way of countering this bias is to look only at the opinions of EAs who have thought about this a lot and who got into EA from some cause area other than AI safety. My impression is that that group has roughly similar timelines to EAs who were initially focused on AI safety.

Seems like there's still self-selection going on, depending on how much you think 'a lot' is, and how good you are at finding everyone who has thought about it that much. You might be missing out on people who thought about it for, say, 20 hours, decided it wasn't important, and moved on to other cause areas without writing up their thoughts.

On the other hand, it seems like people are worried about and interested in talking about AGI happening in 20 or 30 or 50 years time, so it doesn't seem likely that everyone who thinks 10-year timelines are <10% stops talking about it.

I disagree with your analysis of "are we that ignorant?".

For things like nuclear war or financial meltdown, we've got lots of relevant data, and not too much reason to expect new risks. For advanced nanotechnology, I think we are ignorant enough that a 10% chance sounds right (I'm guessing it will take something like $1 billion in focused funding).

With AGI, ML researchers' forecasts can shift by 75 years based on subtle changes in how the question is worded. That suggests unusual uncertainty.

We can see from Moore's law and from ML progress that we're on track for something at least as unusual as the industrial revolution.

The stock and bond markets do provide some evidence of predictability, but I'm unsure how good they are at evaluating events that happen much less than once per century.

Re your first point that it doesn't seem like we're making very fast progress:

People have very different views of how fast we're progressing. There's ultimately no objective measure: for any proposed measure, you could take its log or exponentiate it, which gives another measure of where we are but paints a very different picture. The evolutionary perspective suggests that if we're moving from insect to lizard abilities in a matter of years, we're going fast. From the 80k podcast with Paul Christiano:

"look at what evolution was able to do with varying amounts of compute. If you look at what each order of magnitude buys you in nature, you’re going from insects to small fish to lizards to rats to crows to primates to humans. Each of those is one order of magnitude, roughly, so you should be thinking of there are these jumps. It is the case that the different between insect and lizard feels a lot smaller to us and is less intuitive significance than the difference between primate and human or crow and primate"

Although this is usually used to argue for moderate to fast takeoff, it also favors ignorance, i.e. at best we just don't know how fast we're progressing, because this contradicts our intuitive sense.
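
A toy illustration of the scale-dependence point (entirely synthetic numbers, just to show how the choice of transform changes the picture):

```python
import math

# Entirely synthetic "capability" series that doubles every period.
capability = [2 ** t for t in range(10)]

# Linear view: the most recent step accounts for half of all progress so far.
linear_share = (capability[-1] - capability[-2]) / capability[-1]

# Log view: every step contributes equally, so progress looks steady and slow.
log_capability = [math.log2(c) for c in capability]
log_share = (log_capability[-1] - log_capability[-2]) / log_capability[-1]

print(f"linear scale: last step = {linear_share:.0%} of all progress so far")
print(f"log scale:    last step = {log_share:.0%} of all progress so far")
# Same underlying trajectory; whether we look "fast" or "slow" depends on the
# transform, which is why intuitions about the current rate can disagree.
```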

FYI I can't confirm your observation that AI researchers don't believe in short timelines.

Greg Brockman makes a case for short timelines in this presentation: Can we rule out near-term AGI? My prior is that deep neural nets are insufficient for AGI, and that the increasing amounts of compute necessary to achieve breakthroughs reveal limits of the paradigm, but Greg's outside-view argument is hard to dismiss.

A few points.

First, market predictions for the intermediate term are mostly garbage. (Intermediate being 3-5 years.) Markets get much, much less predictive after that. The market's ability to predict is constrained by the investment time-frame of most investors, by how fundamentally noisy the market is, and by other factors. But given all of that, the ridiculous valuations of tech firms (Uber, Twitter, etc.), not to mention the crazy P/E ratios for Google, Amazon, etc., seem to imply that the market thinks something important will happen there.

Second, I don't think you're defining the timelines question clearly. (Neither is anyone else.) One version is: "conditional on a moderately fast takeoff, when would we need to have solved the alignment problem in order to prevent runaway value misalignment?" Another is: "regardless of takeoff speed, when will a given AI surpass the best performance by all humans across every domain?" A third is: "when will all AI systems, taken together, be able to do more than any one single person?" A fourth is: "when will a specific AI be able to do more than one average person?" And lastly: "when will people stop finding strange edge cases to argue that AI isn't yet more capable than humans, despite it outperforming them on nearly every task?"

I could see good arguments for 10 years as 10% probable for questions one and three. I think that most AI experts are thinking of something akin to questions two and four when they say 50 years. And I see good arguments that there is epsilon probability of question five in the next century.

the crazy P/E ratios for Google, Amazon, etc., seem to imply that the market thinks something important will happen there.

Google's forward PE is 19x, vs the S&P500 on 15x. What's more, this is overstated, because it involves the expensing of R&D, which logically should be capitalised. Facebook is even cheaper at 16x, though if I recall correctly that excludes stock-based-comp expense.
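
For what it's worth, here's a sketch of the R&D adjustment with purely made-up numbers (this is not a valuation of any actual company; the price, earnings, growth rate and amortisation period are arbitrary assumptions):

```python
# Hypothetical numbers only: how expensing R&D can overstate a P/E ratio.
price_per_share = 100.0
reported_eps = 5.0          # earnings after expensing all R&D this year
rd_per_share = 2.0          # this year's R&D spend per share, currently expensed
amortisation_years = 5      # if instead capitalised and amortised over 5 years

# Crude adjustment: add back this year's R&D expense, subtract one year of
# amortisation of the (hypothetically capitalised) past R&D. With growing R&D
# spend, the add-back exceeds the amortisation, so adjusted earnings are higher.
rd_growth = 0.15
past_rd = [rd_per_share / (1 + rd_growth) ** k for k in range(1, amortisation_years + 1)]
amortisation = sum(past_rd) / amortisation_years
adjusted_eps = reported_eps + rd_per_share - amortisation

print(f"reported P/E: {price_per_share / reported_eps:.1f}x")
print(f"adjusted P/E: {price_per_share / adjusted_eps:.1f}x")
```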

I agree that many other tech firms have much more priced into their valuations, and that fundamental analysts in the stock market realistically only look 0-3 years out.

What is the actionable difference between "1-2 per cent" and "10 per cent" predictions? If we knew that an asteroid was coming towards Earth and would hit it with one of these probabilities, how would our attempts to divert it depend on the probability of impact?

Should we ignore a 1 per cent probability, but go all-in on preventing a 10 per cent probability?

If there is no difference in actions, the difference in probability estimates is rather meaningless.

You can extend your argument to even smaller probabilities: how much effort should go into this if we think the chance is 0.1%? 0.01%? Or in the other direction, 50%, 90%, etc. In the extremes it's very clear that this should affect how much focus we put into averting it, and I don't think there's anything special about 1% vs 10% in this regard.

Another way of thinking about it is that AI is not the only existential risk. If your estimate for AI is 1% in the next ten years but pandemics is 10%, vs 10% for AI and 1% for pandemics, then that should also affect where you think people should focus.
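
A toy version of that comparison (all numbers are placeholders, and real prioritisation would also weigh tractability, neglectedness, and so on):

```python
# Placeholder numbers: a naive comparison of where marginal effort should go,
# treating priority as probability times (assumed equal) impact per unit work.
scenarios = {
    "AI 1%, pandemics 10%": {"AI": 0.01, "pandemics": 0.10},
    "AI 10%, pandemics 1%": {"AI": 0.10, "pandemics": 0.01},
}
impact_per_unit_work = {"AI": 1.0, "pandemics": 1.0}  # simplifying assumption

for name, probs in scenarios.items():
    priority = {risk: p * impact_per_unit_work[risk] for risk, p in probs.items()}
    top = max(priority, key=priority.get)
    print(f"{name}: naive ranking favours focusing on {top}")
# So even if 1% and 10% both clear some "non-negligible" bar, the relative
# numbers still matter for allocation across risks.
```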

Yes, it is clear. My question was: "Do we have any specific difference in mind between AI strategies for the 1-per-cent-in-10-years case vs. the 10-per-cent-in-10-years case?" If we are going to ignore the risk in both cases, there is no difference whether it is 1 per cent or 10 per cent.

I don't know of any publicly available short-term strategy for the 10-year case, no matter what the probability is.