All of matthew.vandermerwe's Comments + Replies

Drawing on user feedback, I changed the name of my blog to Reflective Altruism.

Kudos! I see the blog is still hosted at ineffectivealtruismblog.com, though. Fortunately both reflectivealtruism.com and reflectivealtruismblog.com are currently available.

3
David Thorstad
Yep! Honestly, I'm not good at technology -- how do I change without making all of my backlinks go dead? [Edit: Sorry, that's probably the wrong term. I mean: all of my blog posts link to other blog posts. Is there a way to transfer domains that preserves all of those links?]
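Not an expert either, but the standard mechanism here is a permanent (301) redirect: keep the old domain registered and have it forward every URL to the same path on the new domain, so existing backlinks and internal links keep resolving. A minimal sketch of the idea in Python (hypothetical; reflectivealtruism.com is assumed as the new home, and in practice your host's redirect settings or WordPress's "Site Address" option do this without any code):

```python
# Sketch: a tiny redirect server for the old domain. Every request gets a
# permanent (301) redirect to the same path on the new domain, so deep links
# like /2023/01/some-post/ land on the same post at its new address.
from http.server import BaseHTTPRequestHandler, HTTPServer

NEW_DOMAIN = "https://reflectivealtruism.com"  # assumed new home

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # self.path carries the full path and query string of the old URL.
        self.send_response(301)  # "moved permanently", so search engines update
        self.send_header("Location", NEW_DOMAIN + self.path)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), RedirectHandler).serve_forever()
```

Because the redirect is permanent, search engines eventually transfer the old pages' standing to the new domain as well.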

There's a whole chapter in superintelligence on human intelligence enhancement via selective breeding

This is false and should be corrected. There is a section (not a whole chapter) on biological enhancement, within which there is a single paragraph on selective breeding:

A third path to greater-than-current-human intelligence is to enhance the functioning of biological brains. In principle, this could be achieved without technology, through selective breeding. Any attempt to initiate a classical large-scale eugenics program, however, would confront major po

... (read more)
-11
titotal

[Update from Pablo & Matthew]

As we reached the one-year mark of Future Matters, we thought it a good moment to pause and reflect on the project.  While the newsletter has been a rewarding undertaking, we’ve decided to stop publication in order to dedicate our time to new projects. Overall, we feel that launching Future Matters was a worthwhile experiment, which met (but did not surpass) our expectations. Below we provide some statistics and reflections. 

Statistics

Aggregated across platforms, we had between 1,000 and 1,800 impressions pe... (read more)

Listeners are likely to interpret, from your focus on character, and given your position as a leading EA speaking on the most prominent platform in EA - the opening talk at EAG - that this is all effective altruists should think about.

Really? I don't think I've ever encountered someone interpreting the topic of an EAG opening talk as being "all EAs should think about".

3
MichaelPlant
Maybe I should have phrased what I'd said somewhat differently, but I expect EAs to very heavily take their cues from what established community leaders say, particularly when they speak in the 'prime time' slots.

At EAG London 2022, they distributed hundreds of flyers and stickers depicting Sam on a bean bag with the text "what would SBF do?". 

These were not an official EAG thing — they were printed by an individual attendee.

To my knowledge, never before were flyers depicting individual EAs at EAG distributed. (Also, such behavior seems generally unusual to me, like, imagine going to a conference and seeing hundreds of flyers and stickers all depicting one guy. Doesn't that seem a tad culty?)

Yeah it was super weird. 

2
David_Althaus
Ah thanks, I didn't know that! Sorry, I could have noticed my confusion here. I edited the above comment.

This break-even analysis would be more appropriate if the £15m had been ~burned, rather than invested in an asset which can be sold.

If I buy a house for £100k cash and it saves me £10k/year in rent (net costs), then after 10 years I've broken even in the sense of [cash out]=[cash in], but I also now have an asset worth £100k (+10y price change), so I'm doing much better than 'even'.

Agreed. A good way to think about this is that since you get ~5% annual returns on stocks, the annual rent equivalent is ~5% of the property value, and so the opportunity cost is spending ~£750k/y, or ~£62.5k per month, on conference accommodation.
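Spelling that arithmetic out (a rough sketch, using the £15m purchase price from the parent comment and an assumed ~5% annual return):

$$
£15\text{m} \times 0.05 \approx £750\text{k/year}, \qquad £750\text{k} \div 12 \approx £62.5\text{k/month}
$$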

7
HaydnBelfield
I agree

Agreed. And from the perspective of the EA portfolio, the investment has some diversification benefits. YTD Oxford property prices are up +8%, whereas the rest of the EA portfolio (Meta/Asana/crypto) has dropped >50%.

21
[anonymous]

Side point on a pet peeve: raw house price increases don't account for the cost of improvements and renovations and the effect these might have on the value of the property. E.g. some houses might have gained in value because the owners added a bedroom.

More and more media outlets are reporting [...]

I think the use of present tense here is a bit misleading, since almost all of these articles are from 5 or 6 weeks ago. 

6
dyj34650
Thank you, point taken; I've corrected it. My mental map was that this is an ongoing process, reaching from a couple of weeks ago until now, but it can be seen another way. However, there is a question of why this wasn't discussed on the forums earlier, when the first reports came in. Not a good sign if such stories about Will are in the media and nobody in EA noticed (myself included).

I'd love to see the Guesstimate model linked in the report, but the link doesn't work for me.

2
Pablo
In case this is useful to others, here is a working link. (Thanks to David Roodman for fixing it.)

Hi Haydn — the paper is about eruptions of magnitude 7 or greater, which includes magnitude 8. The periodicity figure I quote for magnitude 8 is taken directly from the paper. 

-2
HaydnBelfield
Hmm, I strongly read it as focussed on magnitude 7. E.g. in the paper they focus on magnitude-7 eruptions, and the 1/6 this century probability: "The last magnitude-7 event was in Tambora, Indonesia, in 1815." / "Given the estimated recurrence rate for a magnitude-7 event, this equates to more than US$1 billion per year." This would be corroborated by their thread, Forum post, and previous work, which emphasise 7 & 1/6. Sorry to be annoying/pedantic about this. I'm being pernickety as I view a key thrust of their research as distinguishing 7 from 8. We can't just group magnitude 7 (1/6 chance) along with magnitude 8 and write them off as a teeny 1/14,000 chance. We need to distinguish 7 from 8, consider their severity/probability separately, and prioritise them differently.

Hi Eli — this was my mistake; thanks for flagging. We'll correct the post.

Crossposting Carl Shulman's comment on a recent post 'The discount rate is not zero', which is relevant here:

It's quite likely the extinction/existential catastrophe rate approaches zero within a few centuries if civilization survives, because:

  1. Riches and technology make us comprehensively immune to natural disasters.
  2. Cheap ubiquitous detection, barriers, and sterilization make civilization immune to biothreats.
  3. Advanced tech makes neutral parties immune to the effects of nuclear winter.
  4. Local cheap production makes for small supply chains that can regrow
... (read more)

I doubt those coming up with the figures you cite believe per century risk is about 20% on average

Indeed! In The Precipice, Ord estimates a 50% chance that humanity never suffers an existential catastrophe (p.169).

Nuclear war similarly can be justified without longtermism, which we know because this has been the case for many decades already

Much of the mobilization against nuclear risk from the 1940s onwards was explicitly grounded in the threat of human extinction — from the Russell-Einstein manifesto to grassroots movements like Women Strike for Peace, with the slogan "End the Arms Race not the Human Race".

1
Brian Lui
Yes, exactly - it's grounded in concern about human extinction, not longtermism. The section "We can achieve longtermism without longtermism" in my post talks about the difference.
2
[anonymous]
Concern about the threat of human extinction is not longtermism (see Scott Alexander's well known forum post about this), which I think is the point that the OP is making.

Thanks for writing this, I like the forensic approach. I've long wished there was more discussion of the VWH paper, so it's been great to see your and Maxwell Tabarrok's posts in recent weeks.

Not an objection to your argument, but a minor quibble with your reconstructed Bostrom argument:

P4: Ubiquitous real-time worldwide surveillance is the best way to decrease the risk of global catastrophes

I think it's worth noting that the paper's conclusion is that both ubiquitous surveillance and  effective global governance are required for avoiding existent... (read more)

Hi Zach,  thank you for your comment. I'll field this one, as I wrote both of the summaries.

This strongly suggests that Bostrom is commenting on LaMDA, but he's discussing "the ethics and political status of digital minds" in general.

I'm comfortable with this suggestion. Bostrom's comment was made (i.e. uploaded to nickbostrom.com) the day after the Lemoine story broke. (Source: I manage the website.)

"[Yudkowsky] recently announced that MIRI had pretty much given up on solving AI alignment"

I chose this phrasing on the basis of the second sentenc... (read more)

Cool Offices?

Good/reliable AC and ventilation are very important IMO. 

I'm trying to understand the simulation argument.

You might enjoy Joe Carlsmith's essay, Simulation Arguments (LW).

This Vox article by Dylan Matthews cites these two studies, which try to get at this question:

EDIT to add: here's a more recent analysis, looking at mortality impact up to 2018 — Kates et al. (2021)

2
Pablo
Thanks! Coincidentally, I also found Dylan's article (as well as another study from 2015) and added an answer based on it, before seeing yours. EDIT: Oh, I now see that you were linking to an earlier piece by Dylan from mid-2015, also published in Vox. The article in my answer is from late 2018.

btw — there's a short section on this in my old Existential Risk wikipedia draft. maybe some useful stuff to incorporate into this. 

2
Pablo
I tried to incorporate parts of that section, and in the process reorganized and expanded the article. Feel free to edit anything that seems inadequate.

Weak disagree. FWIW there are lots of good cites in the endnotes to chapter 2 of The Precipice (pp. 305–12), and in Moynihan's X-Risk.


I considered writing a post about the same biography you mentioned for the forum.

I would love to read such a post! 

It's very humbling to see how much he already thought of, which we now call EA. 

Agreed — I think the Ramsey/Keynes-era Apostles would make an interesting case study of a 'proto-EA' community. 

Another historical precedent

In 1820, James Mill seeks permission for a plan to print and circulate 1,000 copies of his Essay on Government, originally published as a Supplement to Napier's Encyclopaedia Britannica:

I have yet to speak to you about an application which has been made to me as to the article on Government, from certain persons, who think it calculated to disseminate very useful notions, and wish to give a stimulus to the circulation of them. Their proposal is, to print (not for sale, but gratis distribution) a thousand copies. I have refu

... (read more)

FWIW, and setting aside stylistic considerations for the Wiki, I dislike 'x-risk' as a term and avoid using it myself even in informal discussions. 

  • it's ambiguous between 'extinction' and 'existential', which is already a common confusion
  • it seems unserious and somewhat flippant (vaguely comic book/sci-fi vibes)
  • the 'x' prefix can denote the edgy, or sexual (e.g. X Games; x-rated; Generation X?)
  • 'x' also often denotes an unknown value (e.g. in 'Cause X' — another abbreviation I dislike; or indeed Stefan's comment earlier in this thread)
2
MichaelA🔸
Thanks for this comment. I was already aware of the first two downsides, and often lean away from the term for those reasons. But I hadn't considered the other two downsides, and they make sense to me, so this updates me towards more consistently avoiding the term. Out of interest, do you use "x-risk" in e.g. Slack threads, google doc comments, and conversations at lunch? I.e., in contexts that are not just informal but also private and two-way (so it's easier to notice if something has been misunderstood or left a bad impression)? I think by default I'd continue to do that myself.

I prefer this option to all others mentioned here. 

I also kind of think everyone should read at least one biography, in particular of people who have become scientifically, intellectually, culturally, or politically influential.

Some biographies I've enjoyed in this vein:

  • Frank Ramsey: A Sheer Excess of Powers
  • The Price of Peace: Money, Democracy, and the Life of John Maynard Keynes
  • Karl Marx: A Nineteenth-Century Life
8
annaleptikon
Came here by searching for Frank Ramsey on the forum. I considered writing a post about the same biography you mentioned for the forum. It's very humbling to see how much he already thought of, which we now call EA.  A related work I can recommend is "Exact Thinking in Demented Times: The Vienna Circle and the Epic Quest for the Foundations of Science"

With regards to the AGI timeline, it's important to note that Metaculus' resolution criteria are quite different from a 'standard' interpretation of what would constitute AGI[1], (or human-level AI[2], superintelligence[3], transformative AI, etc.). It's also unclear what proportion of forecasters have read this fine print (interested to hear others' views on this), which further complicates interpretation.

For these purposes we will thus define "an artificial general intelligence" as a single unified software system that can satisfy the following crite

... (read more)
4
jacobpfau
Agreed, I've been trying to help out a bit with Matt Barnett's new question here. The feedback period is still open, so chime in if you have ideas! FWIW, I suspect most Metaculites are accustomed to paying attention to how a question's operationalization deviates from its intent. Personally, I find the Montezuma's Revenge criterion quite important; without it, the question would be far from AGI. My intent in bringing up this question was more to ask how Linch thinks about the reliability of long-term predictions with no obvious frequentist-friendly track record to look at.

I work at FHI, as RA and project manager for Toby Ord/The Precipice (2018–20), and more recently as RA to Nick Bostrom (2020–). Prior to this, I spent 2 years in finance, where my role was effectively that of an RA (researching cement companies, rather than existential risk). All of the below is in reference to my time working with Toby.

Let me know if a longer post on being an RA would be useful, as this might motivate me to write it.

Impact

I think a lot of the impact can be captured in terms of being a multiplier[1] on their time, as discussed by Caroline ... (read more)

2
MichaelA🔸
Thanks for this answer - I've shared links to it with several people, and will also link to it in my sequence on Improving the EA-Aligned Research Pipeline. I've also now posted notes from a call with someone else who's an RA to a great researcher, who likewise thought this role was great for his learning.
3
[anonymous]
Sorry about the late answer. I just wanted to say that I also upvoted your comment because I would be very interested in a longer piece on being an RA.

Matthew, thanks for your response! Very handy to have some names I might get in contact with, and this is turning out to be higher-impact than I thought. Can you say any more on how EA-specific your career capital might be?

I'd be very interested in a longer post on the subject!

If there were more orgs doing this, there’d be the risk of abuse working with minors if in-person.

I think this deserves more than a brief mention. One of the two high school programs mentioned (ESPR) failed to safeguard students from someone later credibly accused of serious abuse, as detailed in CFAR's write-up:

Of the interactions CFAR had with Brent, we consider the decision to let him assist at ESPR—a program we helped run for high school students—to have been particularly unwise ... We do not believe any students were harmed. However, Brent did in

... (read more)

Nice post. I’m reminded of this Bertrand Russell passage:

“all the labours of the ages, all the devotion, all the inspiration, all the noonday brightness of human genius, are destined to extinction in the vast death of the solar system, and that the whole temple of Man's achievement must inevitably be buried beneath the debris of a universe in ruins ... Only within the scaffolding of these truths, only on the firm foundation of unyielding despair, can the soul's habitation henceforth be safely built.” —A Free Man’s Worship, 1903

I take Russell as arguing

... (read more)

Thanks — I agree with this, and should have made clearer that I didn't see my comment as undermining the thrust of Michael's argument, which I find quite convincing.

Great post!

But based on Rowe & Beard's survey (as well as Michael Aird's database of existential risk estimates), no other sources appear to have addressed the likelihood of unknown x-risks, which implies that most others do not give unknown risks serious consideration.

I don't think this is true. The Doomsday Argument literature (Carter, Leslie, Gott etc.) mostly considers the probability of extinction independently of any specific risks, so these authors' estimates implicitly involve an assessment of unknown risks. Lots of this writing was before

... (read more)
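To give a flavour of the genre, here is a sketch of Gott's 'delta t' argument (my gloss, not necessarily the formulation any of these authors would endorse): if our birth rank is uniformly distributed among all humans who will ever live, then taking the central 95% of that distribution gives

$$
P\!\left(\frac{N_{\text{past}}}{39} \;\le\; N_{\text{future}} \;\le\; 39\,N_{\text{past}}\right) = 0.95
$$

Nothing in the calculation refers to any specific hazard, which is exactly the sense in which such estimates implicitly cover unknown risks.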
4
MichaelDickens
Thanks for this perspective! I've heard of the Doomsday Argument but I haven't read the literature. My understanding was that the majority belief is that the Doomsday Argument is wrong, we just haven't figured out why it's wrong. I didn't realize there was substantial literature on the problem, so I will need to do some reading! I think it is still accurate to claim that very few sources have considered the probability of unknown risks relative to known risks. I'm mainly basing this off the Rowe & Beard literature review, which is pretty comprehensive AFAIK. Leslie and Bostrom discuss unknown risks, but without addressing their relative probabilities (at least Bostrom doesn't, I don't have access to Leslie's book right now). If you know of any sources that address this that Rowe & Beard didn't cover, I'd be happy to hear about them.
9
Pablo
I agree that the literature on the Doomsday Argument involves an implicit assessment of unknown risks, in the sense that any residual probability mass assigned to existential risk after deducting the known x-risks must fall under the unknown risks category. (Note that our object-level assessment of specific risks may cause us to update our prior general risk estimates derived from the Doomsday Argument.) Still, Michael's argument is not based on anthropic considerations, but on extrapolation from the rate of x-risk discovery. These are two very different reasons for revising our estimates of unknown x-risks, so it's important to keep them separate. (I don't think we disagree; I just thought this was worth highlighting.)

Very useful comment — thanks.

Overall, I don't view this as especially good news ...

How do these tail values compare with your previous best guess?

12
[anonymous]

I suppose they're roughly in line with my previous best guess. On the basis of the Annan and Hargreaves paper, on the median BAU scenario the chance of >6K was about 1%. I think this is probably a bit too low, because the estimates that ground that were not meant to systematically sample uncertainty about ECS. On the WCRS estimate, the chance of >6K is about 5%. (Annan and Hargreaves are co-authors on WCRS, so they have also updated.)

One has to take account of uncertainty about emissions scenarios as well.

[ii] Some queries to MacAskill’s Q&A show reverence here (“I'm a longtime fan of all of your work, and of you personally. I just got your book and can't wait to read it.”, “You seem to have accomplished quite a lot for a young person (I think I read 28?). Were you always interested in doing the most good? At what age did you fully commit to that idea?”).

I share your concerns about fandom culture / guru worship in EA, and am glad to see it raised as a troubling feature of the community. I don’t think these examples are convincing, though. They stri

... (read more)

Hayek's Road to Serfdom, and twentieth century neoliberalism more broadly, owes a lot of its success to this sort of promotion. The book was published in 1944 and initially quite successful, but print runs were limited by wartime paper rationing. In 1945, the US magazine Reader's Digest created a 20-page condensed version, and sold 1 million of these very cheaply (5¢ per copy). Anthony Fisher, who founded the IEA, came across Hayek's ideas through this edition.

Source: https://press.uchicago.edu/Misc/Chicago/320553.html

Great post — this is something EA should definitely be thinking more about as the canon of EA books grows and matures. Peter Singer has done it already, buying back the rights for TLYCS and distributing a free digital version for its 10th anniversary.

I wonder whether most of the value of buying back rights could be captured by just buying books for people on request. A streamlined process for doing this could have pretty low overheads — it only takes a couple minutes to send someone a book via Amazon — and seems scalable. This should be eas

... (read more)

I've set up a system for buying books for people on request. If people are interested in using it you can read more and express interest here: eabooksdirect.super.site 

2
Bastian_Stern
Presumably, the trends in Goodreads ratings/reviews need to be interpreted in the context of the (considerable) growth in Goodreads' active users over time, and for that reason, linear-ish trends in the Goodreads data actually point towards more frontloaded growth profiles for sales/# of people who have read the books?
2
Cullen 🔸
This is a good idea as well, though it could have the downside of preventing some of the more creative uses of community-owned digital distribution such as aiding translation and making excerpting easier. I think something closer to a Creative Commons license for digital versions would be best (though the publisher might not agree to that).
4
Cullen 🔸
Ah yes, I forgot that we already did this for TLYCS. Would be good to see a retrospective on this :-) The EA Meta Fund gave $10,000 for this, which seems very worthwhile. Of course, this may not be the full cost, and this also covered some other things. I like that they included free audiobooks; we should probably do that too if we pursue this.

The key question here is whether (and if so, to what degree) free download is a more effective means of distribution than regular book sales. So we should ask Peter Singer how consumption of TLYCS changed once the book was put online. Or, if any other books have been distributed simultaneously through conventional and unconventional channels, how many people did each distribution method reach?

Welcome to the forum!

Further development of a mathematical model to realise how important timelines for re-evolution are.

Re-evolution timelines have another interesting effect on overall risk — all else equal, the more confident one is that intelligence will re-evolve, the more confident one should be that we will be able to build AGI,* which should increase one’s estimate of existential risk from AI.

So it seems that AI risk gets a twofold ‘boost’ from evidence for a speedy re-emergence of intelligent life:

  • Relative AI ris
... (read more)
1
RobertHarling
Thanks for your comment Matthew. This is definitely an interesting effect which I had not considered. I wonder whether though the absolute AI risk may increase, it would not affect our actions as we would have no way to affect the development of AI by future intelligent life as we would be extinct. The only way I could think of to affect the risk of AI from future life would be to create an aligned AGI ourselves before humanity goes extinct!

[disclosure: not an economist or investment professional]

emerging market bonds ... aren't (to my knowledge) distorted by the Fed buying huge amounts of bonds

This seems wrong — the spillover effects of 2008–13 QE on EM capital markets are fairly well-established (cf. the 'Taper Tantrum' of 2013).

see e.g. Effects of US Quantitative Easing on Emerging Market Economies

"We find that an expansionary US QE shock has significant effects on financial variables in EMEs. It leads to an exchange rate appreciation, a reduction in l
... (read more)

My top picks for April media relating to The Precipice:

I wasn't thinking about any implications like that really. My guess would be that the Kaya Identity isn't the right tool for thinking about either (i) extreme growth scenarios; or (ii) the fossil fuel endgame; and definitely not (iii) AI takeoff scenarios.

If I were more confident in the resource estimate, I would probably switch out the AI explosion scenario for a 'we burn all the fossil fuels' scenario. I'm not sure we can rule out the possibility that the actual limit is a few orders of magnitude more than 13.6 PtC. IPCC cites Rog... (read more)

Also note that your estimate for emissions in the AI explosion scenario exceeds the highest estimates for how much fossil fuel there is left to burn. The upper bound given in IPCC AR5 (WG3.C7.p.525) is ~13.6 PtC (or ~5*10^16 tons CO2).
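For anyone checking the unit conversion (CO2 is heavier than carbon by the molar-mass ratio 44/12, and 1 Pt = 10^15 t):

$$
13.6\ \text{PtC} \times \tfrac{44}{12} \approx 50\ \text{PtCO}_2 = 5 \times 10^{16}\ \text{t CO}_2
$$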

Awesome post!

2
[anonymous]
Haha yes, thanks Matthew, that's a good spot! So, the thought is that we would have some non-trivial probability mass on burning all the fossil fuels if there is an AI explosion. My best guess would be that this makes working on AI better than working on marginal climate stuff, but I'm not sure how to think about this yet.

The audiobook will not include the endnotes. We really couldn't see any good way of doing this, unfortunately.

Toby is right that there's a huge amount of great stuff in there, particularly for those already more familiar with existential risk, so I would highly recommend getting your hands on a physical or ebook version (IMO ebook is the best format for endnotes, since they'll be hyperlinked).

2
Jonas_
For those looking for the ebook, it's only available on the Canadian, German, and Australian (cheapest) Amazon pages (but not US/UK ones). (EDIT: Actually available on the UK store.)
1
Jest(comment)er
Infinite Jest has them at the end of the audiobook in chaptered (clickable) segments, iirc
1
MichaelA🔸
Thanks. Yes, I'll get the ebook then.

I will investigate this and get back to you!

Thanks for writing this!

In the early stages, it will be doubling every week approximately

I’d be interested in pointers on how to interpret all the evidence on this:

  • until Jan 4: (Li et al) find 7.4 days
  • Jan 16–Jan 30: (Cheng & Shan) find ~1.8 days in China, before quarantine measures start kicking in.
  • Jan 20–Feb 6: (Muniz-Rodriguez et al) find 2.5 for Hubei [95%: 2.4–2.7], and other provinces ranging from 1.5 to 3.0 (with much wider error bars).
  • Eyeballing the most recent charts:
... (read more)
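To make these figures easier to compare, here is a quick sketch (using the point estimates quoted above) that converts each doubling time into an implied daily growth rate:

```python
# Convert epidemic doubling times into implied daily growth rates:
# cases(t) = cases(0) * 2**(t / T_d), so the daily growth factor is 2**(1 / T_d).
doubling_days = {
    "Li et al (to Jan 4)": 7.4,
    "Cheng & Shan (China, Jan 16-30)": 1.8,
    "Muniz-Rodriguez et al (Hubei, Jan 20-Feb 6)": 2.5,
}

for source, t_d in doubling_days.items():
    daily_growth = 2 ** (1 / t_d) - 1
    print(f"{source}: doubling every {t_d} days ≈ {daily_growth:.0%}/day")
```

The spread is stark: a 7.4-day doubling time implies ~10%/day growth, while 1.8 days implies ~47%/day.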
7
eca
Yeah, 7 days was intended to be a reasonably conservative guess. My actual guess is closer to 5.5. As you point out, there are testing artifacts that point in both directions: within China, test shortages; outside of China, slower testing roll-out. I'm not an epi expert, but I think the gold standard here would be to do something like time-series immune surveillance, where you randomly sample a large group of people from a population and test them for an antibody reaction and/or viral RNA, then do the same at intervals later. My guess is this is challenging because of the number of samples required to detect in most places, but maybe if you did this somewhere like Italy you could pull it off (you get the population abundance as well). It's also the case that this isn't a fixed number, and you'd expect it to vary from population to population based on the fraction of asymptomatic cases, social distancing, pop density, etc. So I'm not sure we'll get a better number than 2–8 days in the short term, which is disconcerting given how big of a difference it makes to risk forecasts. I'd love to hear from anyone with more epi expertise!

In fact, x-risks that eliminate human life, but leave animal life unaffected would generally be almost negligible in value to prevent compared to preventing x-risks to animals and improving their welfare.

Eliminating human life would lock in a very narrow set of futures for animals - something similar to the status quo (minus factory farming) until the Earth becomes uninhabitable. What reason is there to think the difference between these futures, and those we could expect if humanity continues to exist, would be negligible?

As far as we know, humans are th... (read more)

2
abrahamrowe
Hey, yes, this was noted in the sentence following the one you quote, and in the paragraphs after this one. Note that if humans implemented extremely resilient interventions, human-focused x-risks might be of less value to prevent, but I broadly agree that humanity's moral personhood is a good reason to think that x-risks impacting humans are valuable to work on. Reading through my conclusions again, I could have been a bit clearer on this.

Yeah - plus the opportunity cost of having it in cash. Looks like a non-starter.

Yeah, and how much you value the flexibility depends on what you expect to donate to.

EA Funds already allows you to donate to small/speculative projects, non-UK charities, etc., via a UK-registered charity, so 'only ever donating to UK charities' is less restrictive than it sounds.

Yes that's what I meant - will edit for clarity

6
Henry Stanley 🔸
Gotcha. Thanks for the answer - I guess UK DAFs will only ever allow you to donate to UK charities, so maybe the lack of flexibility isn't worth it.