
Erich_Grunewald

1734 karma · Joined Dec 2020 · Working (6-15 years) · Berlin, Germany · www.erichgrunewald.com

Bio

Though I'm employed by Rethink Priorities, anything I write here is written purely on my own behalf (unless otherwise noted).

Comments (215)

I'm curious: What do you think are the rough odds that invasion of Taiwan increases the likelihood of TAGI by 2043?

Maybe 20% that it increases the likelihood? Higher if war starts by 2030 or so, and near 0% if it starts in 2041 (but maybe >0% if it starts in 2042?). What number would you put on it, and how would you update your model if that number changed?

However, we feel somewhat more comfortable with our predictions prior to scaled, cheap AGI. Like, if it takes 3e30 - 3e35 operations to train an early AGI, then I don't think we can condition on that AGI accelerating us towards construction of the resources needed to generate 3e30 - 3e35 operations. It would be putting the cart before the horse.

What we can (and try to) condition on are potential predecessors to that AGI; e.g., improved narrow AI or expensive human-level AGI. Both of those we have experience with today, which gives us more confidence that we won't get an insane productivity explosion in the physical construction of fabs and power plants.

I think what you're saying here is, "yes, we condition on such a world, but even in such a world these things won't be true for all of 2023-2043, but mainly only towards the latter years in that range". Is that right?

I agree to some extent, but as you wrote, "transformative AGI is a much higher bar than merely massive progress in AI": I think in a lot of those previous years we'll still have AI doing lots of work to speed up R&D and carry out lots of other economically useful tasks. Like, we know in this world that we're headed for AGI in 2043 or even earlier, so we should be seeing really capable and useful AI systems already in 2030 and 2035 and so on.

Maybe you think the progression from today's systems to potentially-transformative AGI will be discontinuous or something like that, with lots of progress (on algorithms, hardware, robotics, etc.) happening near the end?

Like Matthew, I think your paper is really interesting and impressive.

Some issues I have with the methodology:

  • Your framework excludes some factors that could cause the overall probability to increase.
    • For example, I can think of ways that a great power conflict (over Taiwan, say) actually increases the chances of TAI. But your framework doesn't easily account for this.
      • You could have factored it into some or all of the other stages, but I'm not sure you have. And in general this asymmetry (the "positive" effect of an event is, at best, spread across various other stages, while the "negative" effect of the same event gets its own conjunctive stage) will tend to give lower overall probabilities than it should.
  • It seems like you sometimes don't fully condition on preceding propositions.
    • You calculate a base rate of "10% chance of [depression] in the next 20 years", and write: "Conditional on being in a world on track toward transformative AGI, we estimate a ~0.5%/yr chance of depression, implying a ~10% chance in the next 20 years."
      • But this doesn't seem like fully conditioning on a world with TAI that is cheap, that can automate ~100% of human tasks, that can be deployed at scale, and that is relatively unregulated. It seems like once that happens, and even when it's nearly happening (e.g. AIs automate 20% of 2022-tasks), the probability of a severe depression should be way below historical base rates?
    • Similarly for "We quickly scale up semiconductor manufacturing and electrical generation", it seems like you don't fully condition on a world where we have TAI that is cheap, that can automate ~100% of human tasks, and that can operate cheap, high-quality robots, and that can probably be deployed to some fairly wide extent even if not (yet) to actually automate ~all human labour.
      • Like, your X100 is 100x as cost-effective as the H100, but that doesn't seem that far off what you'd get by just projecting the Epoch trend for ML GPU price-performance out two decades? (A rough version of this calculation is sketched after this list.)
    • More generally, I think these sorts of things are really hard to get right (i.e. it's hard to imagine oneself in a conditional world, and estimate probabilities there without anchoring on the present world), and will tend to bias people to smaller overall estimates when using more conjunctive steps.
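
Two bits of arithmetic above are easy to sanity-check. Here is a minimal sketch, assuming a constant annual probability of depression and a constant doubling time of roughly 2-2.5 years for ML GPU price-performance (an assumed round figure, not Epoch's exact estimate); the function names are purely illustrative.

```python
# Illustrative sanity checks for two numbers discussed above.
# All figures here are assumptions for illustration, not precise estimates.

def cumulative_probability(annual_prob: float, years: int) -> float:
    """Cumulative probability of at least one event, given a constant annual probability."""
    return 1 - (1 - annual_prob) ** years

def trend_multiplier(doubling_time_years: float, years: float) -> float:
    """Improvement factor implied by a constant doubling time over a given period."""
    return 2 ** (years / doubling_time_years)

# ~0.5%/yr compounds to roughly 10% over 20 years, matching the quoted figure.
print(f"Depression over 20 years: {cumulative_probability(0.005, 20):.1%}")  # ~9.5%

# If ML GPU price-performance doubles roughly every 2-2.5 years, two decades of
# trend imply a ~250-1000x improvement, so a 100x-more-cost-effective X100 by
# 2043 would, if anything, be below a simple trend extrapolation.
for doubling_time in (2.0, 2.5):
    print(f"Doubling every {doubling_time} yr: {trend_multiplier(doubling_time, 20):.0f}x over 20 years")
```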

I reckon there's a pretty good chance he didn't sign simply because he wasn't asked, given that he's a controversial figure.

The flaws and bugs that are most relevant to an AI's performance in its domain of focus will be weeded out, but flaws outside of its relevant domain will not be. Bobby Fischer's insane conspiracism had no effect on his chess-playing ability. The same principle applies to Stockfish. "Idiot savant" AIs are entirely plausible, even likely.

[...]

For these reasons, I expect AGI to be flawed, and especially flawed when doing things it was not originally meant to do, like conquer the entire planet.

We might actually expect an AGI to be trained to conquer the entire planet, or rather to be trained in many of the abilities needed to do so. For example, we may train it to be good at things like:

  • Strategic planning
  • Effectively getting humans to do what it wants
  • Controlling physical systems
  • Cybersecurity
  • Researching new, powerful technologies
  • Engineering
  • Running large organizations
  • Communicating with humans and other AIs

Put differently, I think "taking control over humans" and "running a multinational corporation" (which seems like the sort of thing people will want AIs to be able to do) have lots more overlap than "playing chess" and "having true beliefs about the subjects of conspiracy theories". I'd be curious to hear which specific abilities you expect an AGI would need in order to take control over humanity but would be unlikely to actually possess.

He's recently been vocal about AI X-Risk.

Yeah, but so have lots of people; it doesn't mean they're all longtermists. Same thing with Sam Altman -- I haven't seen any indication that he's longtermist, but would definitely be interested if you have any sources. This tweet seems to suggest that he does not consider himself a longtermist.

He funded Carrick Flynn's campaign which was openly longtermist, via the Future Forward PAC alongside Moskovitz & SBF.

Do you have a source on Schmidt funding Carrick Flynn's campaign? Jacobin links this Vox article, which says he contributed to Future Forward, but the implication seems to be that this was to defeat Donald Trump. I also don't think this would be a strong signal anyway, as Carrick Flynn was mostly campaigning on pandemic prevention, which seems to make sense on neartermist views too.

His philanthropic organisation Schmidt Futures has a future focused outlook and funds various EA orgs.

I know Schmidt Futures has "future" in its name, but as far as I can tell they're not especially focused on the long-term future. They seem to just want to boost innovation through scientific research and talent growth, but so does, like, nearly every government. For example, their Our Mission page does not mention the word "future".

It seems to have become almost commonplace for organisations that started from a longtermist seed to have become competitors in the AI arms race, so if many people who are influenced by longtermist philosophy end up doing stuff that seems harmful, we should update towards 'longtermism tends to be harmful in practice' much more than towards 'those people are not longtermists'.

I agree with this, but "longtermists may do harmful stuff" doesn't mean "this person doing harmful stuff is a longtermist". My understanding is that Schmidt (1) has never espoused views along the lines of "positively influencing the long-term future is a key moral priority of our time", and (2) seems to see AI/AGI kind of like the nuclear bomb -- a strategically important and potentially dangerous technology that the US should develop before its competitors.

I think there's something to this, but:

  • My impression of Eric Schmidt is that he is not a longtermist, and if anything has done lots to accelerate AI progress.
  • The October 7 controls have not "devastated critical supply chains". The linked article gives no evidence for this claim. China has something like 10% or less of the chip market share, and the export controls don't affect other countries' abilities to produce chips (though they do prevent some chips from being sold to China). Most fabs right now have utilization rates well below 100%, meaning they produce fewer chips than they could due to weak demand.
  • The October 7 controls also have not "upset markets" globally, or at least the linked article gives no evidence for this claim. Memory chip-makers like Samsung have seen profits fall, but this seems to be a normal business cycle thing -- semiconductors, and especially memory chips, are a cyclical industry, sensitive to consumer demand, and the current downturn is almost certainly related to the global financial downturn and associated reduction in consumer demand.
    • I think the October 7 controls have affected and will affect markets, but mostly by reducing profits of companies selling chips and equipment to China, and reducing the supply of some chips and equipment within China (their intended purpose). There'll probably be other, indirect effects down the line, but it's hard to say what those will be now.
  • I also note a tension between those two points -- the first blames the October 7 controls for there being a chip supply shortage, and the second blames the controls for there being a chip oversupply. Neither is true.
  • I disagree with the claims that the October 7 controls have "failed spectacularly at achieving their stated ambitions" and that despite them "China’s AI research has managed to continue apace".
    • I basically disagree with the linked article.
      • It states that Nvidia is releasing export-control-adapted versions of its chips with reduced interconnect bandwidth (to stay below the export-control thresholds) for the Chinese market. This is true, but the gap between the state of the art and what can be sold to China will grow.
      • It seems to suggest that compute will be less important in future. I think that's unlikely, at least for developing frontier models.
      • Another purpose of the October 7 controls was to limit Chinese chip-makers' access to equipment, materials, and software, and they seem tentatively to have been pretty successful at that (though time will tell).
  • I think the "increased West-China tensions" point is right though and fairly concerning.
  • I also think the "CSET was a major contributor to the October 7 controls" point is right, but whether this was ex ante good or bad probably depends on one's views on AI x-risk.

What do you mean by "resource" here?

This is a great comment and I think made me get much more of what you're driving at than the (much terser) top-level comment.

Seems maybe noteworthy that the decision cites Matthew Scully's piece in National Review. I wonder if having a respected conservative advocate for animals in a respected conservative outlet made any difference here? (Probably not given that the opinion doesn't hinge on animal welfare concerns.)
