HaydnBelfield

Comments

Working in Parliament: How to get a job & have an impact

On the other hand, this isn't as much of a constraint in opposition. Political Advisors are like very senior parliamentary researchers - everyone's part of one (tiny!) team.

Draft report on existential risk from power-seeking AI

Oh and:

4. Cotra aims to predict when it will be possible for "a single computer program [to] perform a large enough diversity of intellectual labor at a high enough level of performance that it alone can drive a transition similar to the Industrial Revolution" - that is, a "growth rate [of the world economy of] 20%-30% per year if used everywhere it would be profitable to use".

Your scenario is premise 4 "Some deployed APS systems will be exposed to inputs where they seek power in unintended and high-impact ways (say, collectively causing >$1 trillion dollars of damage), because of problems with their objectives" (italics added).

Your bar is (much?) lower, so we should expect your scenario to come (much?) earlier.

Draft report on existential risk from power-seeking AI

Hey Joe!

Great report, really fascinating stuff. Draws together lots of different writing on the subject, and I really like how you identify concerns that speak to different perspectives (eg to Drexler's CAIS and classic Bostrom superintelligence).

Three quick bits of feedback:

  1. I feel like some of Jess Whittlestone and collaborators' recent research would be helpful in your initial framing, eg 
    1. Prunkl, C. and Whittlestone, J. (2020). Beyond Near- and Long-Term: Towards a Clearer Account of Research Priorities in AI Ethics and Society. - on capability vs impact
    2. Gruetzemacher, R. and Whittlestone, J. (2019). The Transformative Potential of Artificial Intelligence. - on different scales of impact
    3. Cremer, C. Z., & Whittlestone, J. (2021). Artificial Canaries: Early Warning Signs for Anticipatory and Democratic Governance of AI. - on milestones and limitations
  2. I don't feel like you do quite enough to argue for premise 5 "Some of this power-seeking will scale (in aggregate) to the point of permanently disempowering ~all of humanity | (1)-(4)."
    This is, unfortunately, a pretty key premise and the one I have the most questions about! My impression is that section 6.3 is where that argumentation is intended to occur, but I didn't come away with a sense of how you think this would scale, disempower everyone, and be permanent. Would love for you to say more on this.
  3. On a related but distinct point, one thing I kept thinking is "does it matter that much if it's an AI system that takes over the world and disempowers most people?". Eg you set out in 6.3.1 a number of mechanisms by which an AI system could gain power - but 10 out of the 11 you give (all except Destructive capacity) seem relevant to a small group of humans in control of advanced capabilities too.
    Presumably we should be worried about a small group doing this as well? For example, consider a scenario in which a power-hungry small group, or several competing groups, use aligned AI systems with advanced capabilities (perhaps APS, perhaps not) to the point of permanently disempowering ~all of humanity.
    If I went through and find-replaced all the "PS-misaligned AI system" with "power-hungry small group", would it read that differently? To borrow Tegmark's terms, does it matter if it's the Omega Team or Prometheus?
    I'd be interested in seeing some more from you about whether you're also concerned about that scenario, whether you're more/less concerned, and how you think it's different from the AI system scenario.

Again, really loved the report, it is truly excellent work.

What do you make of the doomsday argument?

Indeed. Seems supported by a quantum suicide argument - no matter how unlikely the observer, there always has to be a feeling of what-it's-like-to-be that observer.

https://en.wikipedia.org/wiki/Quantum_suicide_and_immortality

AMA: Tom Chivers, science writer, science editor at UnHerd

It's worth adding that Stephen Bush and Jeremy Cliffe at the New Statesman both do prediction posts and review them at the end of each year. The meme is spreading! They're also two of the best journalists to follow on UK Labour politics (Bush) and EU politics (Cliffe) - if you're interested in those topics, as I am.

https://www.newstatesman.com/politics/staggers/2020/12/what-i-got-right-and-wrong-2020

https://www.newstatesman.com/international/places/2020/12/january-i-made-ten-predictions-2020-how-did-they-turn-out

Is Democracy a Fad?

I think the closest things we've got to this are:

Luke Muehlhauser's work on 'amateur macrohistory' https://lukemuehlhauser.com/industrial-revolution/ 

Peter Turchin's (more academic) Seshat database: http://seshatdatabank.info/

Is Democracy a Fad?

I would say more optimistic. I think there's a pretty big difference between emergence (a shift from authoritarianism to democracy) and democratic backsliding, that is, autocratisation (a shift from democracy to authoritarianism). Once that shift has consolidated, there are lots of changes that make it self-reinforcing/path-dependent: norms and identities shift, economic and political power shifts, political institutions shift, the role of the military shifts. Some factors are the same for emergence and persistence, like wealth/growth, but some (which I would say are pretty key) aren't, like getting authoritarian elites to accept democratisation.

Two books on emergence that I've found particularly interesting are:

  • The international dimensions of democratization: Europe and the Americas; edited by Laurence Whitehead, 2001 (on underplayed international factors)
  • Conservative parties and the birth of democracy; Daniel Ziblatt, 2017 (on buying off elites to accept this permanent change)

However, as I said, the impact of AI systems does raise uncertainty, and is super fascinating.

Something I'm very concerned about, which I don't believe you touched on, is the fate of democracies after a civilizational collapse. I've got a book chapter coming out on this later this year, which I hope I'll be able to share a preprint of.

Is Democracy a Fad?

Interesting post! If you wanted to read into the comparative political science literature a little more, you might be interested in diving into the subfield of democratic backsliding (as opposed to emergence):

  • A third wave of autocratization is here: what is new about it? Lührmann & Lindberg, 2019
  • How Democracies Die. Steven Levitsky and Daniel Ziblatt, 2018
  • On Democratic Backsliding. Nancy Bermeo, 2016
  • Two Modes of Democratic Breakdown: A Competing Risks Analysis of Democratic Durability. K. Maeda, 2010
  • Authoritarian Reversals and Democratic Consolidation. Milan Svolik, American Political Science Review, 2008
  • Institutional Design and Democratic Consolidation in the Third World. Timothy J. Power & Mark J. Gasiorowski, 1997
  • What Makes Democracies Endure? Jose Antonio Cheibub, Adam Przeworski, Fernando Papaterra Limongi Neto & Michael M. Alvarez, 1996
  • The Breakdown of Democratic Regimes: Crisis, Breakdown, and Reequilibration. Juan J. Linz, 1978

One of the common threads in this subfield is that once a democracy has 'consolidated', it seems to be fairly resilient to coups and perhaps incumbent takeover.

I certainly agree that how this interacts with new AI systems (automation, surveillance and targeting/profiling, and autonomous weapons systems) is absolutely fascinating. For one early stab, you might be interested in my colleagues':

Response to Phil Torres’ ‘The Case Against Longtermism’

That's right, I think they should be higher priorities. As you show in your very useful post, Ord has nuclear and climate change at 1/1000 and AI at 1/10. I've got a draft book chapter on this, which I hope to be able to share a preprint of soon. 
