matthew.vandermerwe

FHI - Research Assistant to Toby Ord and Nick Bostrom

matthew.vandermerwe's Comments

Should EA Buy Distribution Rights for Foundational Books?

Hayek's Road to Serfdom, and twentieth-century neoliberalism more broadly, owes a lot of its success to this sort of promotion. The book was published in 1944 and was initially quite successful, but print runs were limited by wartime paper rationing. In 1945, the US magazine Reader's Digest created a 20-page condensed version and sold 1 million copies very cheaply (5¢ each). Anthony Fisher, who founded the IEA, came across Hayek's ideas through this edition.

Source: https://press.uchicago.edu/Misc/Chicago/320553.html

Should EA Buy Distribution Rights for Foundational Books?

Great post — this is something EA should definitely be thinking more about as the canon of EA books grows and matures. Peter Singer has done it already, buying back the rights to TLYCS and distributing a free digital version for its 10th anniversary.

I wonder whether most of the value of buying back rights could be captured by just buying books for people on request. A streamlined process for doing this could have pretty low overheads — it only takes a couple of minutes to send someone a book via Amazon — and seems scalable. This should be easy enough for a donor or EA org to try.

"I also imagine that for most publishers, profits are concentrated after release"

I looked into this recently, using Goodreads data as a proxy for sales. My takeaway was that sales of these books have been surprisingly linear over time, rather than being concentrated early on: Superintelligence; Doing Good Better; TLYCS

X-risks to all life v. to humans

Welcome to the forum!

"Further development of a mathematical model to realise how important timelines for re-evolution are."

Re-evolution timelines have another interesting effect on overall risk — all else equal, the more confident one is that intelligence will re-evolve, the more confident one should be that we will be able to build AGI,* which should increase one’s estimate of existential risk from AI.

So it seems that AI risk gets a twofold ‘boost’ from evidence for a speedy re-emergence of intelligent life:

  • Relative AI risk increases, since risk from most other sources is discounted a bit (toy illustration below).
  • Absolute AI risk increases, since it pushes towards shorter AGI timelines.

*Shulman & Bostrom (2012) discuss this type of argument, and some of the complexities in adjusting for observation selection effects.
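
To illustrate the first bullet with purely hypothetical numbers: suppose one assigns 5% existential risk to non-AI extinction events and 5% to AI, and is 80% confident that intelligence of comparable potential would re-evolve after a non-AI extinction. The non-AI contribution to existential risk is then discounted to

$$ 0.05 \times (1 - 0.8) = 0.01 $$

while the AI contribution is not (a misaligned AI would plausibly also preclude any successor intelligence), so AI's share of total existential risk rises from 1/2 to

$$ \frac{0.05}{0.05 + 0.01} \approx 0.83 $$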

How Much Leverage Should Altruists Use?

[disclosure: not an economist or investment professional]

"emerging market bonds ... aren't (to my knowledge) distorted by the Fed buying huge amounts of bonds"

This seems wrong — the spillover effects of 2008–13 QE on EM capital markets are fairly well established (cf. the 'Taper Tantrum' of 2013).

see e.g. Effects of US Quantitative Easing on Emerging Market Economies

"We find that an expansionary US QE shock has significant effects on financial variables in EMEs. It leads to an exchange rate appreciation, a reduction in long-term bond yields, a stock market boom, and an increase in capital inflows to these countries."
EA Updates for April 2020

My top picks for April media relating to The Precipice:

How hot will it get?

I wasn't thinking about any implications like that, really. My guess would be that the Kaya Identity isn't the right tool for thinking about (i) extreme growth scenarios or (ii) the fossil fuel endgame, and definitely not (iii) AI takeoff scenarios.
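
For reference, the Kaya identity decomposes CO2 emissions (F) as the product of population (P), GDP per capita (G/P), the energy intensity of GDP (E/G), and the carbon intensity of energy (F/E):

$$ F = P \times \frac{G}{P} \times \frac{E}{G} \times \frac{F}{E} $$

Since it is just an accounting identity, any projection built on it depends entirely on how each of the four factors is extrapolated.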

If I were more confident in the resource estimate, I would probably switch out the AI explosion scenario for a 'we burn all the fossil fuels' scenario. I'm not sure we can rule out the possibility that the actual limit is a few orders of magnitude more than 13.6 PtC. The IPCC cites Rogner (2014) for the figure. In personal communication, one scientist described Rogner's previous (1997) estimate as:

"a mishmash of unreliable information, including self-reported questionnaires by individual governments"

It would be great to better understand these estimates — I'm surprised there isn't more work on this. In particular, you'd think there would be geologically-based models of how much carbon there is, that aren't so strongly grounded in known-reserves + current/near-term technological capabilities.

How hot will it get?

Also note that your estimate for emissions in the AI explosion scenario exceeds the highest estimates for how much fossil fuel there is left to burn. The upper bound given in IPCC AR5 (WG3.C7.p.525) is ~13.6 PtC (or ~5*10^16 tons CO2).
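
For anyone checking the unit conversion: carbon mass converts to CO2 mass via the molar-mass ratio 44/12 ≈ 3.67, which reconciles the two figures above (taking 1 Pt = 10^15 t):

$$ 13.6\ \mathrm{Pt\,C} \times \frac{44}{12} \approx 50\ \mathrm{Pt\,CO_2} = 5 \times 10^{16}\ \mathrm{t\,CO_2} $$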

Awesome post!

Toby Ord’s ‘The Precipice’ is published!

The audiobook will not include the endnotes. We really couldn't see any good way of doing this, unfortunately.

Toby is right that there's a huge amount of great stuff in there, particularly for those already more familiar with existential risk, so I would highly recommend getting your hands on a physical or ebook version (IMO ebook is the best format for endnotes, since they'll be hyperlinked).
