
This story is cross-posted from my blog, jacksonw.xyz.

While investigating a box of old family letters over Thanksgiving, I unexpectedly came across a bizarre Christian-longtermist argument about the future duration of human civilization. The brief letter describes a calculation made by my great-great-grandfather William Quinn in 1888, although it reads more like something out of a Ted Chiang science-fiction anthology. It's hard to tell if the estimate is serious — it was probably made with at least some of the same joking spirit as the modern-day thermodynamics story about why "heaven is hotter than hell". Either way, I bring you this letter now, on Christmas day, for your enjoyment and consideration.

The Calculation

My great-great-grandfather starts from a line in Revelation that purports to give the volume of heaven (12,000 furlongs in every direction, which comes out to about fourteen quintillion cubic meters), then walks through a familiar Fermi-estimate style of reasoning: if 25% of the volume is devoted to habitable rooms (as opposed to roads, palaces, gardens, etc.), and if rooms are a plausible size, then there will be about  total rooms in heaven.

My ancestor then employs an extremely dubious approximation of human population growth — “we will suppose the world did and always will contain nine hundred million inhabitants”. (Really he ought to have known better, considering that he and his wife had a preposterous eleven children of their own!) Since generations are about 30 years apart, this gets you about three billion lives per century.

Finally, he compares the size of heaven to the population of earth. Heaven seems quite capacious — even if there were a hundred other worlds like ours (other planets, perhaps) all feeding into the same Heaven, and even if all these civilizations lasted one million years, there would still be enough space for each inhabitant of Heaven to enjoy a personal mansion of over 100 rooms.
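For the curious, here is a minimal sketch of the arithmetic in Python. The room dimensions are my own assumption (the letter's exact room size and total room count aren't reproduced above), so the final rooms-per-inhabitant figure is only illustrative: with the room size assumed here it comes out to a few dozen rooms per person rather than the letter's hundred-plus, but the qualitative conclusion that heaven has space to spare survives either way.

```python
# A rough, illustrative reconstruction of the letter's Fermi estimate.
# The room size is my own assumption, not the letter's figure.

FURLONG_M = 201.168                      # meters per furlong

side = 12_000 * FURLONG_M                # 12,000 furlongs along each edge
heaven_volume = side ** 3                # ~1.4e19 m^3 ("fourteen quintillion")

habitable_volume = 0.25 * heaven_volume  # 25% set aside for rooms
room_volume = 4 * 4 * 3                  # assumed room: 4 m x 4 m x 3 m = 48 m^3
total_rooms = habitable_volume / room_volume

population = 900e6                       # assumed constant world population
generation_years = 30                    # one generation roughly every 30 years
lives_per_century = population * (100 / generation_years)  # ~3 billion

worlds = 100                             # a hundred worlds like ours
civilization_years = 1_000_000           # each civilization lasting a million years
total_inhabitants = worlds * (civilization_years / 100) * lives_per_century

print(f"Volume of heaven:     {heaven_volume:.2e} m^3")
print(f"Total rooms:          {total_rooms:.2e}")
print(f"Total inhabitants:    {total_inhabitants:.2e}")
print(f"Rooms per inhabitant: {total_rooms / total_inhabitants:.0f}")
```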

Reflections

Besides my surprise at seeing such a wild metaphysical estimate coming out of rural 1880s Louisiana, this gave me a moment of interesting perspective about similar efforts at long-term predictions made today. I find it inspiring and heartwarming to think that humans have always asked big philosophical questions of the sort that concern longtermist and utilitarian philosophies. But of course, it also gives me pause that the proposed answers to such questions have often been wildly wrong and hopelessly confused. With the progress of science and civilization, we certainly see much farther now than most of our great-great-grandfathers did, but our own ideas about the far future might still end up looking just as silly when compared to however reality actually turns out.

Joking as it may be, the letter also strikes me as perhaps reflecting the Christian attitude, described in this 80,000 Hours episode, that human extinction was impossible for various reasons — because God had promised not to destroy the world, or because mere mankind could never possess for itself the godlike power to destroy the world, or because specific future events had been promised to occur (how could Christ fulfill his promise of return if there was only a dead wasteland to return to?), or as here because Heaven was so capacious that it implied a million-year lifetime for civilization. Their confidence in Christian civilization’s “existential security” makes for a grim contrast with our own century, and the idea that we are standing at the precipice of a risky “time of perils”.

Below is a picture of the letter as I found it last month. I am sure that my great-great-grandfather would have been very pleased to know that its ideas would be shared, one hundred and thirty-three years later, among a community of like-minded philosophers and thinkers.

Merry Christmas, and happy Solstice!

Comments (4)



What a fun read!

This is great, thanks for sharing!

I found the "let's assume humanity remains at a constant population of 900 million" notion particularly interesting. On some level I still have this (obviously wrong) intuition that humanity's knowledge of its own history just grows continuously based on what happens at any given time. E.g. I would have implicitly assumed that a person living in 1888 must have known how population numbers had developed over the preceding centuries. This is of course not necessarily the case for a whole bunch of reasons, but seeing that he wasn't even aware that population growth is a thing was a serious surprise (unless he was aware, but thought it was close enough to the maximum to be negligible in the long term?).

It's funny how he assumes a generation would span 31.125 years without giving any explanation for that really specific number. Maybe he had 8 children at this point in time, and took, e.g., his average age at the births of all of them?

And lastly, he as well as any readers of this letter would have greatly benefited from scientific notation. Which makes me wonder what terrible inefficiencies in communication & encoding / expressing ideas we're suffering from today, without having any inkling that things could be better... :)

I think the constant-population assumption is honestly pulled out of thin air and is just there to simplify calculations – not because he thinks it actually makes sense. What's much more relevant to his calculation is how long the world will last. Why assume that it will last one million years in total and not ten thousand?

It's also interesting that he assumes that everyone is going to heaven and doesn't even call out that assumption. Whether he was a universalist (believing everyone would go to heaven) or not, the fact that he fails to mention this assumption makes me question the seriousness of this letter. I wouldn't read too much into this letter as evidence of how naive we could be.

Pointing out more weirdnesses may by now be unnecessary to make the point, but I can't resist: the estimate also seems to equivocate between "number of people alive at any moment" and "number of people in each generation", as if the 900 million population consisted of a single generation that fully replaced itself every 31.125 years. Numerically this only impacts the result by a factor of 3 or so, but it's perhaps another reason not to take it as a serious attempt :)
