There's a whole chapter in superintelligence on human intelligence enhancement via selective breeding
This is false and should be corrected. There is a section (not a whole chapter) on biological enhancement, within which there is a single paragraph on selective breeding:
...A third path to greater-than-current-human intelligence is to enhance the functioning of biological brains. In principle, this could be achieved without technology, through selective breeding. Any attempt to initiate a classical large-scale eugenics program, however, would confront major po
[Update from Pablo & Matthew]
As we reached the one-year mark of Future Matters, we thought it a good moment to pause and reflect on the project. While the newsletter has been a rewarding undertaking, we’ve decided to stop publication in order to dedicate our time to new projects. Overall, we feel that launching Future Matters was a worthwhile experiment, which met (but did not surpass) our expectations. Below we provide some statistics and reflections.
Aggregated across platforms, we had between 1,000–1,800 impressions pe...
Listeners are likely to interpret, from your focus on character, and given your position as a leading EA speaking on the most prominent platform in EA - the opening talk at EAG - that this is all effective altruists should think about.
Really? I don't think I've ever encountered someone interpreting the topic of an EAG opening talk as being "all EAs should think about".
At EAG London 2022, they distributed hundreds of flyers and stickers depicting Sam on a bean bag with the text "what would SBF do?".
These were not an official EAG thing — they were printed by an individual attendee.
To my knowledge, flyers depicting individual EAs had never before been distributed at EAG. (Also, such behavior seems generally unusual to me: imagine going to a conference and seeing hundreds of flyers and stickers all depicting one guy. Doesn't that seem a tad culty?)
Yeah it was super weird.
This break-even analysis would be more appropriate if the £15m had been ~burned, rather than invested in an asset which can be sold.
If I buy a house for £100k cash and it saves me £10k/year in rent (net costs), then after 10 years I've broken even in the sense of [cash out]=[cash in], but I also now have an asset worth £100k (+10y price change), so I'm doing much better than 'even'.
Agreed. And from the perspective of the EA portfolio, the investment has some diversification benefits. YTD Oxford property prices are up +8%, whereas the rest of the EA portfolio (Meta/Asana/crypto) has dropped >50%.
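To spell out the arithmetic in the house example above (same illustrative figures, ignoring transaction costs, maintenance, and any change in the asset's price):

```latex
\begin{align*}
\text{Cumulative cash flow after 10 years} &= -\pounds100\text{k} + 10 \times \pounds10\text{k} = \pounds0 \quad \text{(``break even'')}\\
\text{Net position} &= \underbrace{\pounds0}_{\text{cash}} + \underbrace{\pounds100\text{k}}_{\text{asset}} \approx +\pounds100\text{k}
\end{align*}
```

The same logic applies to the £15m purchase: a cash-only break-even calculation understates the position by roughly the resale value of the asset.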
Crossposting Carl Shulman's comment on a recent post 'The discount rate is not zero', which is relevant here:
...It's quite likely the extinction/existential catastrophe rate approaches zero within a few centuries if civilization survives, because:
- Riches and technology make us comprehensively immune to natural disasters.
- Cheap ubiquitous detection, barriers, and sterilization make civilization immune to biothreats
- Advanced tech makes neutral parties immune to the effects of nuclear winter.
- Local cheap production makes for small supply chains that can regrow
Nuclear war similarly can be justified without longtermism, which we know because this has been the case for many decades already
Much of the mobilization against nuclear risk from the 1940s onwards was explicitly grounded in the threat of human extinction — from the Russell-Einstein manifesto to grassroots movements like Women Strike for Peace, with the slogan "End the Arms Race not the Human Race".
Thanks for writing this, I like the forensic approach. I've long wished there was more discussion of the VWH paper, so it's been great to see yours and Maxwell Tabarrok's posts in recent weeks.
Not an objection to your argument, but a minor quibble with your reconstructed Bostrom argument:
P4: Ubiquitous real-time worldwide surveillance is the best way to decrease the risk of global catastrophes
I think it's worth noting that the paper's conclusion is that both ubiquitous surveillance and effective global governance are required for avoiding existent...
Hi Zach, thank you for your comment. I'll field this one, as I wrote both of the summaries.
This strongly suggests that Bostrom is commenting on LaMDA, but he's discussing "the ethics and political status of digital minds" in general.
I'm comfortable with this suggestion. Bostrom's comment was made (i.e. uploaded to nickbostrom.com) the day after the Lemoine story broke. (source: I manage the website).
"[Yudkowsky] recently announced that MIRI had pretty much given up on solving AI alignment"
I chose this phrasing on the basis of the second sentenc...
I'm trying to understand the simulation argument.
You might enjoy Joe Carlsmith's essay, Simulation Arguments (LW).
This Vox article by Dylan Matthews cites these two studies, which try to get at this question:
EDIT to add: here's a more recent analysis, looking at mortality impact up to 2018 — Kates et al. (2021)
btw — there's a short section on this in my old Existential Risk Wikipedia draft; maybe some useful stuff to incorporate into this.
Weak disagree. FWIW there are lots of good cites in the endnotes to chapter 2 of The Precipice (pp. 305–12), and in Moynihan's X-Risk.
I considered writing a forum post about the same biography you mentioned.
I would love to read such a post!
It's very humbling to see how much of what we now call EA he had already thought of.
Agreed — I think the Ramsey/Keynes-era Apostles would make an interesting case study of a 'proto-EA' community.
Another historical precedent
In 1820, James Mill sought permission for a plan to print and circulate 1,000 copies of his Essay on Government, originally published as a Supplement to Napier's Encyclopaedia Britannica:
...I have yet to speak to you about an application which has been made to me as to the article on Government, from certain persons, who think it calculated to disseminate very useful notions, and wish to give a stimulus to the circulation of them. Their proposal is, to print (not for sale, but gratis distribution) a thousand copies. I have refu
FWIW, and setting aside stylistic considerations for the Wiki, I dislike 'x-risk' as a term and avoid using it myself even in informal discussions.
I also kind of think everyone should read at least one biography, in particular of people who have become scientifically, intellectually, culturally, or politically influential.
Some biographies I've enjoyed in this vein:
With regard to the AGI timeline, it's important to note that Metaculus' resolution criteria are quite different from a 'standard' interpretation of what would constitute AGI[1] (or human-level AI[2], superintelligence[3], transformative AI, etc.). It's also unclear what proportion of forecasters have read this fine print (I'd be interested to hear others' views on this), which further complicates interpretation.
...For these purposes we will thus define "an artificial general intelligence" as a single unified software system that can satisfy the following crite
I work at FHI, as RA and project manager for Toby Ord/The Precipice (2018–20), and more recently as RA to Nick Bostrom (2020–). Prior to this, I spent 2 years in finance, where my role was effectively that of an RA (researching cement companies, rather than existential risk). All of the below is in reference to my time working with Toby.
Let me know if a longer post on being an RA would be useful, as this might motivate me to write it.
Impact
I think a lot of the impact can be captured in terms of being a multiplier[1] on their time, as discussed by Caroline ...
If there were more orgs doing this, there’d be the risk of abuse working with minors if in-person.
I think this deserves more than a brief mention. One of the two high school programs mentioned (ESPR) failed to safeguard students from someone later credibly accused of serious abuse, as detailed in CFAR's write-up:
...Of the interactions CFAR had with Brent, we consider the decision to let him assist at ESPR—a program we helped run for high school students—to have been particularly unwise ... We do not believe any students were harmed. However, Brent did in
Nice post. I’m reminded of this Bertrand Russell passage:
“all the labours of the ages, all the devotion, all the inspiration, all the noonday brightness of human genius, are destined to extinction in the vast death of the solar system, and that the whole temple of Man's achievement must inevitably be buried beneath the debris of a universe in ruins ... Only within the scaffolding of these truths, only on the firm foundation of unyielding despair, can the soul's habitation henceforth be safely built.” —A Free Man’s Worship, 1903
I take Russell as arguing
Great post!
But based on Rowe & Beard's survey (as well as Michael Aird's database of existential risk estimates), no other sources appear to have addressed the likelihood of unknown x-risks, which implies that most others do not give unknown risks serious consideration.
I don't think this is true. The Doomsday Argument literature (Carter, Leslie, Gott etc.) mostly considers the probability of extinction independently of any specific risks, so these authors' estimates implicitly involve an assessment of unknown risks. Lots of this writing was before
I suppose they're roughly in line with my previous best guess. On the basis of the Annan and Hargreaves paper, on the median BAU scenario the chance of >6K of warming was about 1%. I think this is probably a bit too low, because the estimates it draws on were not meant to systematically sample uncertainty about ECS. On the WCRS estimate, the chance of >6K is about 5%. (Annan and Hargreaves are co-authors on WCRS, so they have also updated.)
One has to take account of uncertainty about emissions scenarios as well.
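For concreteness, here's a minimal sketch of the kind of calculation I have in mind: a Monte Carlo that folds emissions-scenario uncertainty in alongside ECS uncertainty. The scenario weights, forcing levels, and ECS distribution below are placeholders chosen purely for illustration (they are not taken from Annan & Hargreaves or the WCRS assessment), and the setup ignores transient vs. equilibrium warming, carbon-cycle feedbacks, etc.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Placeholder emissions scenarios: (label, CO2 forcing as a number of doublings, probability).
# These weights and forcing levels are illustrative assumptions, not figures from any paper.
scenarios = [("low", 1.0, 0.3), ("medium", 1.5, 0.5), ("high", 2.2, 0.2)]
labels, doublings, probs = zip(*scenarios)

# Placeholder ECS distribution: lognormal with a median of ~3K and a fat right tail.
# A real analysis would use assessed likelihood ranges from the literature.
ecs = rng.lognormal(mean=np.log(3.0), sigma=0.4, size=N)

# Sample a scenario for each draw; equilibrium warming = ECS x number of doublings.
idx = rng.choice(len(scenarios), size=N, p=probs)
warming = ecs * np.array(doublings)[idx]

print(f"P(warming > 6K) ~= {np.mean(warming > 6.0):.1%}")
```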
[ii] Some queries to MacAskill’s Q&A show reverence here (“I'm a longtime fan of all of your work, and of you personally. I just got your book and can't wait to read it.”, “You seem to have accomplished quite a lot for a young person (I think I read 28?). Were you always interested in doing the most good? At what age did you fully commit to that idea?”).
I share your concerns about fandom culture / guru worship in EA, and am glad to see it raised as a troubling feature of the community. I don’t think these examples are convincing, though. They stri
Hayek's Road to Serfdom, and twentieth century neoliberalism more broadly, owes a lot of its success to this sort of promotion. The book was published in 1944 and initially quite successful, but print runs were limited by wartime paper rationing. In 1945, the US magazine Reader's Digest created a 20-page condensed version, and sold 1 million of these very cheaply (5¢ per copy). Anthony Fisher, who founded the IEA, came across Hayek's ideas through this edition.
Great post — this is something EA should definitely be thinking more about as the canon of EA books grows and matures. Peter Singer has done this already, buying back the rights for TLYCS and distributing a free digital version for its 10th anniversary.
I wonder whether most of the value of buying back rights could be captured by just buying books for people on request. A streamlined process for doing this could have pretty low overheads — it only takes a couple minutes to send someone a book via Amazon — and seems scalable. This should be eas
The key question here is whether (and if so, to what degree) free download is a more effective means of distribution than regular book sales. So we should ask Peter Singer how consumption of TLYCS changed once the book was available online. Or, if any other books have been distributed simultaneously through conventional and unconventional channels, how many people did each distribution method reach?
Welcome to the forum!
Further development of a mathematical model to realise how important timelines for re-evolution are.
Re-evolution timelines have another interesting effect on overall risk — all else equal, the more confident one is that intelligence will re-evolve, the more confident one should be that we will be able to build AGI,* which should increase one’s estimate of existential risk from AI.
So it seems that AI risk gets a twofold ‘boost’ from evidence for a speedy re-emergence of intelligent life:
[disclosure: not an economist or investment professional]
emerging market bonds ... aren't (to my knowledge) distorted by the Fed buying huge amounts of bonds
This seems wrong — the spillover effects of 2008–13 QE on EM capital markets are fairly well-established (cf the 'Taper Tantrum' of 2013).
see e.g. Effects of US Quantitative Easing on Emerging Market Economies
"We find that an expansionary US QE shock has significant effects on financial variables in EMEs. It leads to an exchange rate appreciation, a reduction in l...
My top picks for April media relating to The Precipice:
I wasn't thinking about any implications like that really. My guess would be that the Kaya Identity isn't the right tool for thinking about (i) extreme growth scenarios or (ii) the fossil fuel endgame, and definitely not (iii) AI takeoff scenarios.
If I were more confident in the resource estimate, I would probably switch out the AI explosion scenario for a 'we burn all the fossil fuels' scenario. I'm not sure we can rule out the possibility that the actual limit is a few orders of magnitude more than 13.6 PtC. IPCC cites Rog...
Also note that your estimate for emissions in the AI explosion scenario exceeds the highest estimates for how much fossil fuel there is left to burn. The upper bound given in IPCC AR5 (WG3, Ch. 7, p. 525) is ~13.6 PtC (or ~5×10^16 tons of CO2).
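For anyone checking the bracketed conversion: carbon mass converts to CO2 mass by the ratio of molar masses (44/12 ≈ 3.67), so the two figures quoted are consistent:

```latex
13.6\ \mathrm{PtC} \;=\; 1.36 \times 10^{16}\ \mathrm{tC}
\quad\Longrightarrow\quad
1.36 \times 10^{16}\ \mathrm{t} \times \tfrac{44}{12} \;\approx\; 5 \times 10^{16}\ \mathrm{t\,CO_2}
```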
Awesome post!
The audiobook will not include the endnotes. We really couldn't see any good way of doing this, unfortunately.
Toby is right that there's a huge amount of great stuff in there, particularly for those already more familiar with existential risk, so I would highly recommend getting your hands on a physical or ebook version (IMO ebook is the best format for endnotes, since they'll be hyperlinked).
Thanks for writing this!
In the early stages, it will be doubling every week approximately
I’d be interested in pointers on how to interpret all the evidence on this:
In fact, x-risks that eliminate human life, but leave animal life unaffected would generally be almost negligible in value to prevent compared to preventing x-risks to animals and improving their welfare.
Eliminating human life would lock in a very narrow set of futures for animals: something similar to the status quo (minus factory farming) until the Earth becomes uninhabitable. What reason is there to think the difference between these futures, and those we could expect if humanity continues to exist, would be negligible?
As far as we know, humans are th...
Kudos! I see the blog is still hosted at ineffectivealtruismblog.com, though. Fortunately both reflectivealtruism.com and reflectivealtruismblog.com are currently available.