In the previous post, I introduced multiple methods for probabilistically modeling the evolution of civilization, which I’ve gradually been working on implementing in code. 

In the process, I’ve decided to tweak the simplest (‘cyclical’) model. I’ve removed the ‘survival’ state, based on Luisa’s overall conclusion that we would almost certainly get through such a state (and people in the comments seem, if anything, to view her as too pessimistic). 

I’ve also divided the ‘time of perils’ into our current state, which we can never return to once we leave it, and all future times of perils. The thought is that one might prefer to eschew the complexity of the other models while still thinking that future ‘times of perils’ might substantially differ from ours in somewhat consistent ways:

  • They would all need to start over from the beginning of their equivalent of the modern age, and would therefore need to navigate their own versions of the Cuban missile crisis and other nuclear near misses just to get back to today’s technological level.
  • They would all have at least one cataclysm, and at least one predecessor civilisation’s technology, to learn from - perhaps each civilisation will effectively consume the insights of its predecessor, or perhaps the technology of each civilisation will be similar enough that having more to look back on doesn't meaningfully improve insight.
  • They would all be missing almost or entirely all of the fossil fuels our civilisation bootstrapped itself on. Other resources may not be used up in the same way (for example, while we might 'use up' phosphorus, this effectively just moves the atoms to less accessible places - a process which future civilisations might slow, or even reverse).

The revised model allows the user to express some of this nuance while still being computationally simple enough to use interactively. It now looks like this:

[Diagram: state-transition graph of the revised cyclical model]

The states are now:

Extinction: Extinction of whatever type of life you value any time between now and our sun’s death (i.e. any case where we've failed to develop interplanetary technology that lets us escape the event).

Preindustrial: Civilisation has regressed to pre-first-industrial-revolution-equivalent technology. 

Industrial: Civilisation has technology comparable to the first industrial revolution’s, but does not yet have the technological capacity to do enough civilisational damage to regress to a previous state (e.g. via nuclear weapons, biopandemics, etc.). A formal definition of industrial revolution technology is tricky, but the choice of definition seems unlikely to dramatically affect probability estimates. In principle it could be something like 'kcals captured per capita go up more than 5x as much in a 100-year period as they had in any of the previous five 100-year periods.'
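To make that sketched criterion concrete, here is one possible reading of it in code - purely illustrative, with a hypothetical `kcals_per_capita` series of per-century values, and reading 'any' as 'every one of':

```python
def crossed_industrial_threshold(kcals_per_capita):
    """Sketch of the suggested definition: kcals captured per capita
    rise more than 5x as much in the latest 100-year period as they
    did in any (here: every one) of the previous five 100-year periods.

    `kcals_per_capita` is a hypothetical list of per-century values,
    oldest first, needing at least seven entries (six periods of
    change: five prior plus the latest).
    """
    if len(kcals_per_capita) < 7:
        return False
    # Absolute increase over each consecutive 100-year period.
    increases = [b - a for a, b in zip(kcals_per_capita, kcals_per_capita[1:])]
    latest = increases[-1]
    previous_five = increases[-6:-1]
    return all(latest > 5 * prior for prior in previous_five)
```

For example, a flat series ending in a sudden jump (`[1, 1, 1, 1, 1, 1, 10]`) would count as crossing the threshold, while steady linear growth would not.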

Current perils: Our current state, as of 1945, when we developed nuclear weaponry - what Carl Sagan called the ‘time of perils’.

Future perils: Human development has had a serious setback, but civilisation still has technology capable of threatening another serious contraction (such as nuclear weaponry, misaligned AI, etc.), and does not yet have multiple spatially isolated self-sustaining settlements. Arguably we could transition to this directly from our current state if there were a global shock sufficient to destroy much modern technology, but small enough to leave our nuclear arsenals and a decent fraction of industry intact or very quickly recoverable. 

Multiplanetary: Civilisation has progressed to having at least two spatially isolated self-sustaining settlements, each capable of continuing in an advanced enough technological state to produce further such settlements even if all the others disappeared. Each settlement must be physically isolated enough to be unaffected by at least one type of technological milestone catastrophe impacting the others (e.g. on another planet, in a hollowed-out asteroid, or in an extremely well-maintained bunker system). Although each settlement may face local threats, we might assume the risks to humanity as a whole - of either extinction or regression to reduced-technology states - decline as the number of settlements increases.
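The intuition that risk declines with the number of settlements can be sketched under a deliberately strong assumption - that each settlement independently suffers a civilisation-ending local catastrophe with the same per-period probability, which is exactly what spatial isolation is meant to make approximately plausible:

```python
def p_simultaneous_collapse(p_local, n_settlements):
    """Probability that every settlement fails in the same period,
    under the (illustrative) assumption of independent, equal
    per-settlement risk. With any correlation between settlements
    the decline would be slower, but still monotonic.
    """
    return p_local ** n_settlements
```

With `p_local = 0.1`, one settlement gives 0.1, three give 0.001 - the qualitative point being only that the curve falls as settlements are added.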

Interstellar: Civilisation has progressed to having at least two self-sustaining colonies in different star systems, or gains existential security in some other way. 

For a more comprehensive explanation of these states, see the previous post. In the next post I'll introduce the implementations of these models that I've been working on.

Comments



Hi Arepo,

Extinction: Extinction of sentient life any time inclusively between now and our sun’s death.

Do you mean human extinction? In the last post, you had (emphasis mine):

Extinction: Extinction of sentient human descendants any time between now and our sun’s death.

Good catch, thanks. I'm not sure why or how I changed that - I've set it to match the previous post now.

[ETA] changed again for greater clarity
