All of matthew.vandermerwe's Comments + Replies

On the Vulnerable World Hypothesis

Thanks for writing this, I like the forensic approach. I've long wished there was more discussion of the VWH paper, so it's been great to see your post and Maxwell Tabarrok's in recent weeks.

Not an objection to your argument, but minor quibble with your reconstructed Bostrom argument:

P4: Ubiquitous real-time worldwide surveillance is the best way to decrease the risk of global catastrophes

I think it's worth noting that the paper's conclusion is that both ubiquitous surveillance and effective global governance are required for avoiding existent... (read more)

Future Matters #3: digital sentience, AGI ruin, and forecasting track records

Hi Zach, thank you for your comment. I'll field this one, as I wrote both of the summaries.

This strongly suggests that Bostrom is commenting on LaMDA, but he's discussing "the ethics and political status of digital minds" in general.

I'm comfortable with this suggestion. Bostrom's comment was made (i.e. uploaded to his website) the day after the Lemoine story broke (source: I manage the website).

"[Yudkowsky] recently announced that MIRI had pretty much given up on solving AI alignment"

I chose this phrasing on the basis of the second sentenc... (read more)

Cool Offices ?

Good/reliable AC and ventilation are very important IMO. 

Simulation argument?

I'm trying to understand the simulation argument.

You might enjoy Joe Carlsmith's essay, Simulation Arguments (LW).

How many lives has the U.S. President's Emergency Plan for AIDS Relief (PEPFAR) saved?

This Vox article by Dylan Matthews cites these two studies, which try to get at this question:

EDIT to add: here's a more recent analysis, looking at mortality impact up to 2018 — Kates et al. (2021)

Thanks! Coincidentally, I also found Dylan's article (as well as another study from 2015) and added an answer [] based on it, before seeing yours. EDIT: Oh, I now see that you were linking to an earlier piece by Dylan from mid-2015, also published in Vox. The article in my answer is from late 2018.

Ethics of existential risk

btw — there's a short section on this in my old Existential Risk wikipedia draft. maybe some useful stuff to incorporate into this. 

I tried to incorporate parts of that section, and in the process reorganized and expanded the article. Feel free to edit anything that seems inadequate.

Ethics of existential risk

weak disagree. FWIW lots of good cites in endnotes to chapter 2 of The Precipice pp.305–12; and Moynihan's X-Risk.

btw — there's a short section [] on this in my old Existential Risk wikipedia draft []. maybe some useful stuff to incorporate into this.

What are things everyone here should (maybe) read?

I considered writing a post about the same biography you mentioned for the forum.

I would love to read such a post! 

It's very humbling to see how much he already thought of, which we now call EA. 

Agreed — I think the Ramsey/Keynes-era Apostles would make an interesting case study of a 'proto-EA' community. 

Should EA Buy Distribution Rights for Foundational Books?

Another historical precedent

In 1820, James Mill was asked for permission to print and circulate 1,000 copies of his Essay on Government, originally published as a Supplement to Napier's Encyclopaedia Britannica:

I have yet to speak to you about an application which has been made to me as to the article on Government, from certain persons, who think it calculated to disseminate very useful notions, and wish to give a stimulus to the circulation of them. Their proposal is, to print (not for sale, but gratis distribution) a thousand copies. I have refu

... (read more)

Ethics of existential risk

FWIW, and setting aside stylistic considerations for the Wiki, I dislike 'x-risk' as a term and avoid using it myself even in informal discussions. 

  • it's ambiguous between 'extinction' and 'existential', which is already a common confusion
  • it seems unserious and somewhat flippant (vaguely comic book/sci-fi vibes)
  • the 'x' prefix can denote the edgy or sexual (e.g. X Games; x-rated; Generation X?)
  • 'x' also often denotes an unknown value (e.g. in 'Cause X' — another abbreviation I dislike; or indeed Stefan's comment earlier in this thread)
Thanks for this comment. I was already aware of the first two downsides, and often lean away from the term for those reasons. But I hadn't considered the other two downsides, and they make sense to me, so this updates me towards more consistently avoiding the term. Out of interest, do you use "x-risk" in e.g. Slack threads, google doc comments, and conversations at lunch? I.e., in contexts that are not just informal but also private and two-way (so it's easier to notice if something has been misunderstood or left a bad impression)? I think by default I'd continue to do that myself.

Ethics of existential risk

I prefer this option to all others mentioned here. 

What are things everyone here should (maybe) read?

I also kind of think everyone should read at least one biography, in particular of people who have become scientifically, intellectually, culturally, or politically influential.

Some biographies I've enjoyed in this vein:

  • Frank Ramsey: A Sheer Excess of Powers
  • The Price of Peace: Money, Democracy, and the Life of John Maynard Keynes
  • Karl Marx: a Nineteenth-Century Life
Came here by searching for Frank Ramsey on the forum. I considered writing a post about the same biography you mentioned for the forum. It's very humbling to see how much he already thought of, which we now call EA. A related work I can recommend is "Exact Thinking in Demented Times: The Vienna Circle and the Epic Quest for the Foundations of Science"

Some learnings I had from forecasting in 2020

With regards to the AGI timeline, it's important to note that Metaculus' resolution criteria are quite different from a 'standard' interpretation of what would constitute AGI[1] (or human-level AI[2], superintelligence[3], transformative AI, etc.). It's also unclear what proportion of forecasters have read this fine print (interested to hear others' views on this), which further complicates interpretation.

For these purposes we will thus define "an artificial general intelligence" as a single unified software system that can satisfy the following crite

... (read more)
Agreed, I've been trying to help out a bit with Matt Barnett's new question here []. Feedback period is still open, so chime in if you have ideas! FWIW, I suspect most Metaculites are accustomed to paying attention to how a question's operationalization deviates from its intent. Personally, I find the Montezuma's Revenge criterion quite important; without it, the question would be far from AGI. My intent in bringing up this question was more to ask how Linch thinks about the reliability of long-term predictions with no obvious frequentist-friendly track record to look at.

Has anyone gone into the 'High-Impact PA' path?

I work at FHI, as RA and project manager for Toby Ord/The Precipice (2018–20), and more recently as RA to Nick Bostrom (2020–). Prior to this, I spent 2 years in finance, where my role was effectively that of an RA (researching cement companies, rather than existential risk). All of the below is in reference to my time working with Toby.

Let me know if a longer post on being an RA would be useful, as this might motivate me to write it.


I think a lot of the impact can be captured in terms of being a multiplier[1] on their time, as discussed by Caroline ... (read more)

Thanks for this answer - I've shared links to it with several people, and will also link to it in my sequence on Improving the EA-Aligned Research Pipeline []. I've also now posted notes from a call with someone else who's an RA to a great researcher [] , who likewise thought this role was great for his learning.
Sorry about the late answer. I just wanted to say that I also upvoted your comment because I would be very interested in a longer piece on being an RA.

Matthew, thanks for your response! Very handy to have some names I might get in contact with, and this is turning out to be higher-impact than I thought. Can you say any more on how EA-specific your career capital might be?

I'd be very interested in a longer post on the subject!

Some thoughts on EA outreach to high schoolers

If there were more orgs doing this, there’d be the risk of abuse working with minors if in-person.

I think this deserves more than a brief mention. One of the two high school programs mentioned (ESPR) failed to safeguard students from someone later credibly accused of serious abuse, as detailed in CFAR's write-up:

Of the interactions CFAR had with Brent, we consider the decision to let him assist at ESPR—a program we helped run for high school students—to have been particularly unwise ... We do not believe any students were harmed. However, Brent did in

... (read more)

Max_Daniel's Shortform

Nice post. I’m reminded of this Bertrand Russell passage:

“all the labours of the ages, all the devotion, all the inspiration, all the noonday brightness of human genius, are destined to extinction in the vast death of the solar system, and that the whole temple of Man's achievement must inevitably be buried beneath the debris of a universe in ruins ... Only within the scaffolding of these truths, only on the firm foundation of unyielding despair, can the soul's habitation henceforth be safely built.” —A Free Man’s Worship, 1903

I take Russell as arguing

... (read more)

The Importance of Unknown Existential Risks

Thanks — I agree with this, and should have made clearer that I didn't see my comment as undermining the thrust of Michael's argument, which I find quite convincing.

The Importance of Unknown Existential Risks

Great post!

But based on Rowe & Beard's survey (as well as Michael Aird's database of existential risk estimates), no other sources appear to have addressed the likelihood of unknown x-risks, which implies that most others do not give unknown risks serious consideration.

I don't think this is true. The Doomsday Argument literature (Carter, Leslie, Gott etc.) mostly considers the probability of extinction independently of any specific risks, so these authors' estimates implicitly involve an assessment of unknown risks. Lots of this writing was before

... (read more)
Thanks for this perspective! I've heard of the Doomsday Argument but I haven't read the literature. My understanding was that the majority belief is that the Doomsday Argument is wrong, we just haven't figured out why it's wrong. I didn't realize there was substantial literature on the problem, so I will need to do some reading! I think it is still accurate to claim that very few sources have considered the probability of unknown risks relative to known risks. I'm mainly basing this off the Rowe & Beard literature review, which is pretty comprehensive AFAIK. Leslie and Bostrom discuss unknown risks, but without addressing their relative probabilities (at least Bostrom doesn't, I don't have access to Leslie's book right now). If you know of any sources that address this that Rowe & Beard didn't cover, I'd be happy to hear about them.
I agree that the literature on the Doomsday Argument involves an implicit assessment of unknown risks, in the sense that any residual probability mass assigned to existential risk after deducting the known x-risks must fall under the unknown risks category. (Note that our object-level assessment of specific risks may cause us to update our prior general risk estimates derived from the Doomsday Argument.) Still, Michael's argument is not based on anthropic considerations, but on extrapolation from the rate of x-risk discovery. These are two very different reasons for revising our estimates of unknown x-risks, so it's important to keep them separate. (I don't think we disagree; I just thought this was worth highlighting.)

How Much Does New Research Inform Us About Existential Climate Risk?

Very useful comment — thanks.

Overall, I don't view this as especially good news ...

How do these tail values compare with your previous best guess?

I suppose they're roughly in line with my previous best guess. On the basis of the Annan and Hargreaves paper, on the median BAU scenario the chance of >6K was about 1%. I think this is probably a bit too low, because the estimates that ground it were not meant to systematically sample uncertainty about ECS. On the WCRS estimate, the chance of >6K is about 5%. (Annan and Hargreaves are co-authors on WCRS, so they have also updated.)

One has to take account of uncertainty about emissions scenarios as well

Objections to Value-Alignment between Effective Altruists

[ii] Some queries to MacAskill’s Q&A show reverence here, (“I'm a longtime fan of all of your work, and of you personally. I just got your book and can't wait to read it.”, “You seem to have accomplished quite a lot for a young person (I think I read 28?). Were you always interested in doing the most good? At what age did you fully commit to that idea?”).

I share your concerns about fandom culture / guru worship in EA, and am glad to see it raised as a troubling feature of the community. I don’t think these examples are convincing, though. They stri

... (read more)

Should EA Buy Distribution Rights for Foundational Books?

Hayek's Road to Serfdom, and twentieth century neoliberalism more broadly, owes a lot of its success to this sort of promotion. The book was published in 1944 and initially quite successful, but print runs were limited by wartime paper rationing. In 1945, the US magazine Reader's Digest created a 20-page condensed version, and sold 1 million of these very cheaply (5¢ per copy). Anthony Fisher, who founded the IEA, came across Hayek's ideas through this edition.


Should EA Buy Distribution Rights for Foundational Books?

Great post — this is something EA should definitely be thinking more about as the canon of EA books grows and matures. Peter Singer has done it already, buying back the rights for TLYCS and distributing a free digital version for its 10th anniversary.

I wonder whether most of the value of buying back rights could be captured by just buying books for people on request. A streamlined process for doing this could have pretty low overheads — it only takes a couple of minutes to send someone a book via Amazon — and seems scalable. This should be eas

... (read more)

I've set up a system for buying books for people on request. If people are interested in using it you can read more and express interest here: 

Presumably, the trends in Goodreads ratings/reviews need to be interpreted in the context of the (considerable) growth in Goodreads' active users over time, and for that reason, linear-ish trends in the Goodreads data actually point towards more frontloaded growth profiles for sales/# of people who have read the books?
This is a good idea as well, though it could have the downside of preventing some of the more creative uses of community-owned digital distribution such as aiding translation and making excerpting easier. I think something closer to a Creative Commons license for digital versions would be best (though the publisher might not agree to that).
Ah yes, I forgot that we already did this for TLYCS. Would be good to see a retrospective on this :-) The EA Meta Fund gave $10,000 for this [] , which seems very worthwhile. Of course, this may not be the full cost, and this also covered some other things. I like that they included free audiobooks; we should probably do that too if we pursue this.

The key question here is whether (and if so, to what degree) free download is a more effective means of distribution than regular book sales. So we should ask Peter Singer how consumption of TLYCS changed when the book went online. Or, if any other books were distributed simultaneously through conventional and unconventional means, how many people did each distribution method reach?

X-risks to all life v. to humans

Welcome to the forum!

Further development of a mathematical model to realise how important timelines for re-evolution are.

Re-evolution timelines have another interesting effect on overall risk — all else equal, the more confident one is that intelligence will re-evolve, the more confident one should be that we will be able to build AGI,* which should increase one’s estimate of existential risk from AI.

So it seems that AI risk gets a twofold ‘boost’ from evidence for a speedy re-emergence of intelligent life:

  • Relative AI ris
... (read more)
Thanks for your comment Matthew. This is definitely an interesting effect which I had not considered. I wonder, though, whether even if the absolute AI risk increases, it would not affect our actions, as we would have no way to affect the development of AI by future intelligent life: we would be extinct. The only way I could think of to affect the risk of AI from future life would be to create an aligned AGI ourselves before humanity goes extinct!

How Much Leverage Should Altruists Use?

[disclosure: not an economist or investment professional]

emerging market bonds ... aren't (to my knowledge) distorted by the Fed buying huge amounts of bonds

This seems wrong — the spillover effects of 2008–13 QE on EM capital markets are fairly well-established (cf the 'Taper Tantrum' of 2013).

see e.g. Effects of US Quantitative Easing on Emerging Market Economies

"We find that an expansionary US QE shock has significant effects on financial variables in EMEs. It leads to an exchange rate appreciation, a reduction in l
... (read more)

EA Updates for April 2020

My top picks for April media relating to The Precipice:

How hot will it get?

I wasn't thinking about any implications like that really. My guess would be that the Kaya Identity isn't the right tool for thinking about either (i) extreme growth scenarios; or (ii) the fossil fuel endgame; and definitely not (iii) AI takeoff scenarios.

If I were more confident in the resource estimate, I would probably switch out the AI explosion scenario for a 'we burn all the fossil fuels' scenario. I'm not sure we can rule out the possibility that the actual limit is a few orders of magnitude more than 13.6 PtC. IPCC cites Rog... (read more)

How hot will it get?

Also note that your estimate for emissions in the AI explosion scenario exceeds the highest estimates for how much fossil fuel there is left to burn. The upper bound given in IPCC AR5 (WG3.C7.p.525) is ~13.6 PtC (or ~5*10^16 tons CO2).
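As a sanity check on the unit conversion in parentheses above (carbon mass to CO2 mass via the molar-mass ratio), here is a minimal sketch; the figures are the ones quoted from IPCC AR5:

```python
# Sanity check of the figure above: converting 13.6 PtC (petatonnes of
# elemental carbon) into tonnes of CO2 via the CO2/C molar-mass ratio.

PT_C = 13.6                # quoted IPCC AR5 upper bound, petatonnes of carbon
TONNES_PER_PT = 1e15       # 1 petatonne = 10^15 tonnes
CO2_PER_C = 44.01 / 12.01  # mass of CO2 produced per unit mass of carbon burned

tonnes_co2 = PT_C * TONNES_PER_PT * CO2_PER_C
print(f"{tonnes_co2:.2e} tonnes CO2")  # ~4.98e+16, i.e. the ~5*10^16 quoted
```

So the two numbers in the comment are consistent with each other.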

Awesome post!

John G. Halstead (2y)
haha yes thanks Matthew, that's a good spot! So, the thought is that we would have some non-trivial probability mass on burning all the fossil fuels if there is an AI explosion. My best guess would be that this makes working on AI better than working on marginal climate stuff, but I'm not sure how to think about this yet.

Toby Ord’s ‘The Precipice’ is published!

The audiobook will not include the endnotes. We really couldn't see any good way of doing this, unfortunately.

Toby is right that there's a huge amount of great stuff in there, particularly for those already more familiar with existential risk, so I would highly recommend getting your hands on a physical or ebook version (IMO ebook is the best format for endnotes, since they'll be hyperlinked).

Jonas Vollmer (2y)
For those looking for the ebook, it's only available on the Canadian [], German [], and Australian [] (cheapest) Amazon pages (but not US / UK ones). (EDIT: Actually available on the UK store.)
Infinite Jest has them at the end of the audiobook in chaptered (clickable) segments, iirc
Thanks. Yes, I'll get the ebook then.

COVID-19 brief for friends and family

Thanks for writing this!

In the early stages, it will be doubling every week approximately

I’d be interested in pointers on how to interpret all the evidence on this:

  • until Jan 4: (Li et al) find 7.4 days
  • Jan 16–Jan 30: (Cheng & Shan) find ~1.8 days in China, before quarantine measures start kicking in.
  • Jan 20–Feb 6: (Muniz-Rodriguez et al) find 2.5 for Hubei [95%: 2.4–2.7], and other provinces ranging from 1.5 to 3.0 (with much wider error bars).
  • Eyeballing the most recent charts:
... (read more)
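For comparing the doubling-time estimates listed above, it helps to translate them into daily growth rates (and back); a minimal sketch, using the quoted figures as examples:

```python
import math

def daily_growth_factor(doubling_days: float) -> float:
    """Multiplicative daily growth factor implied by a doubling time in days."""
    return 2 ** (1 / doubling_days)

def doubling_days(factor: float) -> float:
    """Doubling time in days implied by a multiplicative daily growth factor."""
    return math.log(2) / math.log(factor)

# 7.4 days (Li et al), 2.5 days (Muniz-Rodriguez et al, Hubei), 1.8 days (Cheng & Shan)
for d in (7.4, 2.5, 1.8):
    f = daily_growth_factor(d)
    print(f"doubling every {d} days -> {100 * (f - 1):.0f}% growth per day")
```

A doubling time of 7.4 days corresponds to roughly 10% growth per day, while 1.8 days implies nearly 50% per day — which is why the spread across these studies matters so much for risk forecasts.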
Yeah, 7 days was intended to be a reasonably conservative guess. My actual guess is closer to 5.5. As you point out, there are testing artifacts that point in both directions: within China, test shortages; outside of China, slower testing roll-out.

I'm not an epi expert, but I think the gold standard here would be something like time-series immune surveillance, where you randomly sample a large group of people from a population and test them for an antibody reaction and/or viral RNA, then do the same at intervals later. My guess is this is challenging because of the number of samples required to detect in most places, but maybe if you did this somewhere like Italy you could pull it off (you get the population abundance as well).

It's also the case that this isn't a fixed number; you'd expect it to vary from population to population based on the fraction of asymptomatic cases, social distancing, population density, etc. So I'm not sure we'll get a better number than 2–8 days in the short term, which is disconcerting given how big a difference it makes to risk forecasts. I'd love to hear from anyone with more epi expertise!

Should Longtermists Mostly Think About Animals?

In fact, x-risks that eliminate human life, but leave animal life unaffected would generally be almost negligible in value to prevent compared to preventing x-risks to animals and improving their welfare.

Eliminating human life would lock in a very narrow set of futures for animals - something similar to the status quo (minus factory farming) until the Earth becomes uninhabitable. What reason is there to think the difference between these futures, and those we could expect if humanity continues to exist, would be negligible?

As far as we know, humans are th... (read more)

Hey, yes, this was noted in the sentence following your quote and in the paragraphs after this one. Note that if humans implemented extremely resilient interventions, human-focused x-risks might be of less value, but I broadly agree humanity's moral personhood is a good reason to think that x-risks impacting humans are valuable to work on. Reading through my conclusions again, I could have been a bit clearer on this.

UK donor-advised funds

Yeah - plus the opportunity cost of having it in cash. Looks like a non-starter.

UK donor-advised funds

Yeah, and how much you value the flexibility depends on what you expect to donate to.

EA Funds already allows you to donate to small/speculative projects, non-UK charities, etc., via a UK registered charity, so 'only ever donating to UK charities' is less restrictive than it sounds.

UK donor-advised funds

Yes that's what I meant - will edit for clarity

Gotcha. Thanks for the answer - I guess UK DAFs will only ever allow you to donate to UK charities, so maybe the lack of flexibility isn't worth it.

UK donor-advised funds

I spent an hour or so looking into this recently, and couldn't find any DAFs that were suitable for small donors. It's possible I missed one, though.

CAF offers a 'giving account', which is effectively a low-interest savings account. You can get immediate tax relief on deposits, but forgo returns, and can only donate to UK registered charities, so it seems like a weak option:

FWIW my tentative conclusion was that the best option for savings... (read more)

Hm, the fees on the CAF account look pretty steep - seems like it eats 4% of everything you put in there up to £22k, and 1% thereafter.
If you don't donate in a given tax year you won't get the Gift Aid for that tax year at all, if I understand right - the tax relief is lost. The appeal of a DAF is you can claim the Gift Aid/tax deduction immediately but defer donating. I think you can also put securities in the DAF, so growth on it would be tax-free presumably.

The Labour leadership election: a high leverage, time-limited opportunity for impact (*1 week left to register for a vote*)

Sorry I should have disclaimed that I don't think this is a sensible strategy, and that people should approach party membership in good faith (for roughly the reasons Greg outlines above). Thanks for prompting me to clarify this.

My comment was just to point out that timing is an important factor in leverage-per-member.

Response to recent criticisms of EA "longtermist" thinking

The philosophers who developed the long-termist astronomical waste argument openly use it to promote a range of abhorrent hawkish geopolitical responses (eg pre-emptive nuclear strikes).

I find this surprising. Can you point to examples?

Section 9.3 here: [] (Disclaimer: Not my own views/criticism. I am just trying to steelman a Facebook post I read. I have not looked into the wider context of these views or people's current positions on these views.)

The Labour leadership election: a high leverage, time-limited opportunity for impact (*1 week left to register for a vote*)

This seems unlikely to be a useful tie-break in most cases, provided one can switch membership. UK party leadership elections are rarely contemporaneous [1] (unlike in the US), so the likelihood of a given party member being able to realise their leverage will generally differ by more than a factor of 4.5x at any given time.

[1] Conservatives: 1975, 1990, 1997, 2001, 2005, 2019

Labour: 1980, 1983, 1992, 1994, 2010, 2015, 2016, 2020

Hmm, but is it good or sustainable to repeatedly switch parties?

8 things I believe about climate change

[I broadly agree with above comment and OP]

Something I find missing from the discussion of CC as an indirect existential risk is what this means for prioritisation. It's often used implicitly to support CC-mitigation as a high-priority intervention. But in the case of geoengineering, funding for governance/safety is probably on the order of millions (at most), making it many orders of magnitude more neglected than CC-mitigation, and this is similar for targeted nuclear risk mitigation, reducing risk of great power war, etc.

This suggests that donors w... (read more)

Notes on 'Atomic Obsession' (2009)

+1 to all of this, and thanks for the other excellent comments.

There were, however, several accidents where the conventional explosives (that would trigger a nuclear detonation in intended use cases) in a nuclear weapon detonated (but where safety features prevented a nuclear detonation)

It's probably worse than that - there is at least one incident where critical safety features failed, and it was luck that prevented a nuclear explosion

From a declassified report on a 1961 incident, in which a bomber carrying two 4MT warheads broke up over North Caroli... (read more)

Thanks for mentioning this. I had meant to refer to this accident, but after spending 2 more minutes looking into it, I got the impression that there is less consensus on what happened than I thought. Specifically, the Wikipedia article [] says: One of the Wikipedia references is a blog post by one of the authors mentioned above, with the title Goldsboro- 19 Steps Away from Detonation []. Some quotes: I didn't attempt to understand the specific technical claims (not even whether there is a dispute about technical facts, or just a different interpretation of how to describe the same facts in terms of how far away the bomb was from detonating), and so can't form my own view. Do you have any sense of which source to trust here? In any case, my understanding is that nuclear weapons usually had many safety features, and that it's definitely true that one or a few of them failed in several instances.

List of EA-related email newsletters

I would add Future Perfect, and Policy.AI (CSET's new AI policy newsletter)

Are we living at the most influential time in history?

there are an expected 1 million centuries to come, and the natural prior on the claim that we’re in the most influential century ever is 1 in 1 million. This would be too low in one important way, namely that the number of future people is decreasing every century, so it’s much less likely that the final century will be more influential than the first century. But even if we restricted ourselves to a uniform prior over the first 10% of civilisation’s history, the prior would still be as low as 1 in 100,000. 

Half-baked thought: you might think that the very... (read more)
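The two priors quoted in the excerpt are simple uniform-prior calculations; reproducing them for concreteness:

```python
# Reproducing the two priors quoted from the post: a uniform prior over all
# expected future centuries, and over only the first 10% of civilisation's history.

n_centuries = 1_000_000                   # expected centuries to come
p_uniform = 1 / n_centuries               # 1 in 1 million
p_first_decile = 1 / (n_centuries // 10)  # 1 in 100,000

print(p_uniform, p_first_decile)
```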

Which nuclear wars should worry us most?

I'm excited to read this series!

It would take a lot of nuclear weapons to produce nuclear winter climate effects, so if we’re particularly worried about nuclear winter, we should focus on nuclear exchange scenarios that would involve large nuclear arsenals.

I don't think this is quite right. Robock 2007 finds a severe nuclear winter effect from an exchange with just 100x 15kt bombs. AFAIK, the only country with an arsenal below that threshold today is North Korea, which would suggest that — on Robock's modelling at least—any bilateral exchang... (read more)

Thanks for raising this! You’re right — Robock 2007 [] does find that even a relatively small nuclear exchange would have devastating climate effects that would probably cause a famine. But my understanding (from both Robock’s paper and this report on the impact of a regional nuclear exchange on the global food supply []) is that a regional nuclear war, while horrible, would not cause a severe enough nuclear winter to risk human extinction. I’ll clarify in the post that I’m most worried about nuclear exchange scenarios that would lead to a nuclear winter severe enough as to pose an extinction risk.

Alignment Newsletter One Year Retrospective

General comment: Huge fan of the newsletter, and think it's awesome you're doing this sort of review. I should also caveat that I'm not an AIS researcher, so not exactly target audience.

My first guess is that there's significant value in someone maintaining an open, exhaustive database of AIS research. My main uncertainty is whether you are the best positioned to do this as things ramp up. It is plausible to me that an org with a safety team (e.g. DeepMind/OpenAI) is already doing this in-house, or planning to do so. It's less clea... (read more)

Rohin Shah (3y)
Yeah, I agree. But there's also significant value in doing more AIS research, and I suspect that on the current margin for a full-time researcher (such as myself) it's better to do more AIS research compared to writing summaries of everything. Note that I do intend to keep adding all of the links to the database; it's the summaries that won't keep up. I'm 95% confident that no one is already doing this, and if they were seriously planning to do so I'd expect they would check in with me first. (I do know multiple people at all of these orgs.) You know, that would make sense as a thing to exist, but I suspect it does not. Regardless, that's a good idea; I should make sure to check.

Long-Term Future Fund: April 2019 grant recommendations

Thanks for clarifying, that seems reasonable.

FWIW I share the view that sending all 4 volumes might not be optimal. I think I'd find it a nuisance to receive such a large/heavy item (~3 litres/~2kg by my estimate) unsolicited.

Long-Term Future Fund: April 2019 grant recommendations

$43/unit is still quite high - could you elaborate a bit more?

Hi Matthew,

1. $43/unit is an upper bound. While submitting the application, I was uncertain about the price of on-demand printing. My current best guess is that EGMO book sets will cost $34–40. I expect printing costs for IMO to be lower (economies of scale).

2. HPMOR is quite long (~2007 pages according to Goodreads). Each EGMO book set consists of 4 hardcover books.

3. There is an opportunity to trade off money for prestige by printing only the first few chapters.