matthew.vandermerwe

810 · Joined Nov 2018

Bio

FHI - RA to Nick Bostrom (previously RA to Toby Ord on The Precipice)

Comments: 54

Topic Contributions: 3

Hi Haydn — the paper is about eruptions of magnitude 7 or greater, which includes magnitude 8. The periodicity figure I quote for magnitude 8 is taken directly from the paper. 

Hi Eli — this was my mistake; thanks for flagging. We'll correct the post.

Crossposting Carl Shulman's comment on a recent post 'The discount rate is not zero', which is relevant here:

It's quite likely the extinction/existential catastrophe rate approaches zero within a few centuries if civilization survives, because:

  1. Riches and technology make us comprehensively immune to natural disasters.
  2. Cheap ubiquitous detection, barriers, and sterilization make civilization immune to biothreats.
  3. Advanced tech makes neutral parties immune to the effects of nuclear winter.
  4. Local cheap production makes for small supply chains that can regrow from disruption as industry becomes more like information goods.
  5. Space colonization creates robustness against local disruption.
  6. Aligned AI blocks threats from misaligned AI (and many other things).
  7. Advanced technology enables stable policies (e.g. the same AI police systems enforce treaties banning WMD war for billions of years), and the world is likely to wind up in some stable situation (bouncing around until it does).

If we're more than 50% likely to get to that kind of robust state, which I think is true, and I believe Toby does as well, then the life expectancy of civilization is very long, almost as long on a log scale as with 100%.

Your argument depends on 99%+++ credence that such safe stable states won't be attained, which is doubtful for 50% credence, and quite implausible at that level. A classic paper by the climate economist Martin Weitzman shows that the average discount rate over long periods is set by the lowest plausible rate (as the possibilities of high rates drop out after a short period and you get a constant factor penalty for the probability of low discount rates, not exponential decay).
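[Not part of Shulman's comment, but a minimal numerical sketch of the Weitzman point may be helpful. The two candidate rates (0.1%/yr and 4%/yr, each with 50% probability) and the function name are made up purely for illustration:]

```python
import math

def certainty_equivalent_rate(rates_and_probs, t):
    """Implied constant rate r* such that exp(-r* t) equals the
    probability-weighted average of exp(-r t) over candidate rates."""
    avg_discount_factor = sum(p * math.exp(-r * t) for r, p in rates_and_probs)
    return -math.log(avg_discount_factor) / t

# Hypothetical mixture: 50% chance the long-run rate is 0.1%/yr, 50% chance it is 4%/yr.
scenario = [(0.001, 0.5), (0.04, 0.5)]
for t in (10, 100, 1_000, 10_000):
    print(f"t = {t:>6} yr  ->  implied rate ~ {certainty_equivalent_rate(scenario, t):.4%}")
```

[The implied long-horizon rate falls toward the lowest rate (0.1%): the high-rate branch's discount factor becomes negligible, and the lasting cost of the low-rate branch having only 50% probability is a constant factor of 2 (i.e. ln 2 / t added to the rate), not exponential decay.]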

I doubt those coming up with the figures you cite believe per century risk is about 20% on average

Indeed! In The Precipice, Ord estimates a 50% chance that humanity never suffers an existential catastrophe (p.169).

Nuclear war similarly can be justified without longtermism, which we know because this has been the case for many decades already

Much of the mobilization against nuclear risk from the 1940s onwards was explicitly grounded in the threat of human extinction, from the Russell-Einstein manifesto to grassroots movements like Women Strike for Peace with the slogan "End the Arms Race not the Human Race".

Thanks for writing this; I like the forensic approach. I've long wished there were more discussion of the VWH paper, so it's been great to see your and Maxwell Tabarrok's posts in recent weeks.

Not an objection to your argument, but a minor quibble with your reconstructed Bostrom argument:

P4: Ubiquitous real-time worldwide surveillance is the best way to decrease the risk of global catastrophes

I think it's worth noting that the paper's conclusion is that both ubiquitous surveillance and effective global governance are required for avoiding existential catastrophe,[1] even if only one of them is being discussed here.

[Disclaimer: I work for Nick Bostrom, these are my personal views]

  1. ^

    From the conclusion: "We traced the root cause of our civilizational exposure to two structural properties of the contemporary world order: on the one hand, the lack of preventive policing capacity to block, with extremely high reliability, individuals or small groups from carrying out actions that are highly illegal; and, on the other hand, the lack of global governance capacity to reliably solve the gravest international coordination problems even when vital national interests by default incentivize states to defect. General stabilization against potential civilizational vulnerabilities [...] would require that both of these governance gaps be eliminated."

Hi Zach, thank you for your comment. I'll field this one, as I wrote both of the summaries.

This strongly suggests that Bostrom is commenting on LaMDA, but he's discussing "the ethics and political status of digital minds" in general.

I'm comfortable with this suggestion. Bostrom's comment was made (i.e. uploaded to nickbostrom.com) the day after the Lemoine story broke. (source: I manage the website). 

"[Yudkowsky] recently announced that MIRI had pretty much given up on solving AI alignment"

I chose this phrasing on the basis of the second sentence of the post: "MIRI didn't solve AGI alignment and at least knows that it didn't." Thanks for pointing me to Bensinger's comment, which I hadn't seen. I remain confused about how much of the post should be interpreted literally vs tongue-in-cheek. I will add the following note to the summary:

(Edit: Rob Bensinger clarifies in the comments that "MIRI has [not] decided to give up on reducing existential risk from AI.")

Thanks!

Cool Offices?

Good/reliable AC and ventilation are very important IMO. 

I'm trying to understand the simulation argument.

You might enjoy Joe Carlsmith's essay, Simulation Arguments (LW).

This Vox article by Dylan Matthews cites these two studies, which try to get at this question:

EDIT to add: here's a more recent analysis, looking at mortality impact up to 2018 — Kates et al. (2021)
