I'm a quantitative biologist with a PhD in evolutionary theory, currently working in microbiome science and metagenomics. https://mikemc.cc


AMA: Ajeya Cotra, researcher at Open Phil

Hi Ajeya, thanks for doing this and for your recent 80K interview! I'm trying to understand what assumptions are needed for the argument you raise in the podcast discussion of fairness agreements: that a longtermist worldview should have been willing to trade away all its influence for influence over ever-larger potential universes. There are two points I was wondering if you could comment on, regarding if/how they align with your argument.

  1. My intuition says that the argument requires a prior probability distribution on universe size that has an infinite expectation, rather than merely a prior that puts non-zero probability on all possible universe sizes but has a finite expectation (like a power-law distribution with tail exponent k > 2).

  2. But then I figured that even in a universe that was literally infinite but had a non-zero density of value-maximizing civilizations, the amount of influence over that infinite value that any one civilization or organization has might still be finite. So I'm wondering whether what is needed to be willing to trade up for influence over ever-larger universes is actually something like the expectation E[V/n] being infinite, where V = total potential value in the universe and n = number of value-maximizing civilizations.
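To spell out the intuition in point 1: for a power-law prior with density proportional to v^(-k) on universe size v, the expectation is finite exactly when k > 2. A sketch of the calculation (the cutoff v_0 and normalizing constant C are just notation for this example):

```latex
% Power-law prior on universe size V, with density
%   f(v) = C v^{-k}  for  v \ge v_0 > 0.
\mathbb{E}[V] = \int_{v_0}^{\infty} v \, C v^{-k} \, dv
             = C \int_{v_0}^{\infty} v^{1-k} \, dv
% The integral converges iff 1 - k < -1, i.e. k > 2:
             = \frac{C \, v_0^{\,2-k}}{k - 2} \quad \text{for } k > 2,
% and diverges (infinite expectation) for k \le 2.
```

So a prior can assign positive probability to every finite universe size and still have a finite expected value (any k > 2), which is the distinction point 1 is drawing.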

CHOICE - Creating a memorable acronym for EA principles

I have very little skin in the game here, as I don't personally have a strong desire for an acronym... but my 2 cents are that "Reasoning carefully" can be shortened to "Reasoning" (or "Reason") for this purpose with no loss - the "careful" part is implied. And I think I identify more with the idea of using careful reasoning than with rationality. "Reason(ing)" also matches an existing short definition of EA as "Using reason and evidence to do the most good" (currently the page title for effectivealtruism.org).

Theory Of Change As A Hypothesis: Choosing A High-Impact Path When You’re Uncertain

Thanks for the post! This is just the type of thinking I wanted to do this morning, and I'm finding it and the spreadsheet template a useful motivator.

Crucial questions for longtermists

Thanks for your response and the link to your newer post and the Ord and Hanson refs. I'll just add a thought I had while reading:

> This is why I explicitly noted that here I was using MVP in a sense focused only on genetic diversity. To touch on the other "aspects" of MVP, I also have "What population size is required for economic specialisation, technological development, etc.?"

> It seems fine to me for people to also use MVP in a sense referring to all-things-considered ability to survive, or in a sense focused only on e.g. economic specialisation...

This all makes sense, but it sounds to me like it risks leaving out the population/conservation biology perspective (beyond genetic considerations). A large part of what motivated me to write my original post is that I do think it is valuable to use frameworks from population and conservation biology to study human extinction risk - but it is important to include all of the factors those fields identify as important: namely, environmental and demographic stochasticity, as well as habitat fragmentation and degradation, which could pose much greater risks than inbreeding and genetic drift.

Crucial questions for longtermists

Thanks for writing this post! I enjoyed looking over these, many of which I have also been puzzling about.

What’s the minimum viable human population (from the perspective of genetic diversity)?

After seeing this question picked up here I thought I would share some quick thoughts from the perspective of someone with a population biology/evolution background. I think this is a reasonable question to ask, but I suspect it is not as important as the other factors that go into the broader question of what is the minimum population size from which humanity is likely to recover, period. Genetics is just one factor, and probably not the most important one, when we consider the probability of recovery after a severe drop in global population.

Suppose that after some catastrophic event the population of humanity has suddenly dropped to a much smaller and more fragmented global population, e.g. 10,000 individuals scattered in ~100 groups of 100 each across the globe. While the population size is small, it will be particularly susceptible to going extinct due to random fluctuations in population size. The population could remain stationary or gradually decline until eventually a random event causes extinction, or it could start increasing until eventually it is large enough to be robust to extinction from a random event.
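This sensitivity of small populations to demographic stochasticity can be illustrated with a toy branching-process simulation (a minimal sketch; the offspring distribution, growth rate, and thresholds are made-up parameters for illustration, not from any model discussed here):

```python
import random

random.seed(1)

# Made-up offspring distribution: 0, 1, or 2 children with mean 1.05,
# i.e. slightly supercritical, so large populations tend to persist.
OFFSPRING = [(0, 0.25), (1, 0.45), (2, 0.30)]

def n_offspring():
    """Draw one individual's offspring count from OFFSPRING."""
    r = random.random()
    cum = 0.0
    for k, p in OFFSPRING:
        cum += p
        if r < cum:
            return k
    return OFFSPRING[-1][0]

def goes_extinct(n0, max_generations=200, escape_size=1000):
    """Simulate a branching process; report whether the lineage dies out."""
    n = n0
    for _ in range(max_generations):
        if n == 0:
            return True
        if n >= escape_size:
            # Large populations are effectively safe from demographic noise.
            return False
        n = sum(n_offspring() for _ in range(n))
    return n == 0

def extinction_freq(n0, trials=100):
    """Fraction of simulated populations that go extinct."""
    return sum(goes_extinct(n0) for _ in range(trials)) / trials

p_small = extinction_freq(10)
p_large = extinction_freq(200)
print(f"extinction frequency, N0=10:  {p_small:.2f}")
print(f"extinction frequency, N0=200: {p_large:.2f}")
```

Even though the expected per-capita growth rate is identical in both cases, runs starting from 10 individuals go extinct far more often than runs starting from 200, which is the "susceptible to random fluctuations" point above.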

The idea of a minimum viable population size (MVP) from a purely genetic perspective is that there is, in theory, a population size small enough that the population would decline and go extinct due to low genetic fitness. This is because small populations are predicted to have lower average genetic fitness due to increased expression of recessive deleterious mutations ("inbreeding depression"), increased fixation of deleterious mutations in the population, and a lack of the genetic variation that would allow adaptation to the environment.

But in reality, the population seems more likely to go extinct because of poor environmental conditions, random environmental fluctuations, loss of cultural knowledge (which, like genetic variation, goes down in small populations), or lack of physical goods and technology, none of which have much to do with genetic variation.

Another way in which the concept of an MVP is too simplistic is that it is defined with respect to a genetic "equilibrium" - it assumes that conditions have been stable long enough that there is a constant level of genetic variation in the population. However, after a sudden population decline we would be far from equilibrium - we would still have lots of genetic variation from the time the population was large. This variation would start to decay, but as different local populations become fixed for different variants, much of this variation would be maintained at the global level and could be converted back into local variation by small amounts of migration. Such considerations are not usually included in MVP calculations. (Some collaborators and I have written about this last point as it relates to conserving endangered species here.)
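The fixation-of-different-variants point can be illustrated with a toy Wright-Fisher simulation (a minimal sketch with made-up deme sizes and run lengths, and no migration, selection, or mutation):

```python
import random

random.seed(42)

def wright_fisher_deme(freq, size, generations):
    """Drift of one allele's frequency in an isolated deme of constant size.

    Each generation, 2*size gene copies are resampled from the current
    frequency; drift stops once the allele is lost (0.0) or fixed (1.0).
    """
    for _ in range(generations):
        count = sum(random.random() < freq for _ in range(2 * size))
        freq = count / (2 * size)
        if freq in (0.0, 1.0):
            break
    return freq

# Ten isolated demes, each starting with the allele at frequency 0.5.
demes = [wright_fisher_deme(0.5, size=25, generations=5000) for _ in range(10)]
fixed = [f for f in demes if f in (0.0, 1.0)]
global_freq = sum(demes) / len(demes)

print(f"demes fixed locally: {len(fixed)}/10")
print(f"global allele frequency: {global_freq:.2f}")
```

Each isolated deme drifts to local fixation (frequency 0 or 1), but because different demes typically fix different alleles, variation is retained at the global level, where small amounts of migration could convert it back into local variation.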

Perhaps we should keep the term "minimum viable population size" but use a broader definition based on likelihood of survival, period. I see that Wikipedia uses a broad definition that includes extinction due to demographic and environmental stochasticity, but MVP is often used, as in the OP, to refer only to extinction for genetic reasons, so it is important to clarify terms.

What posts do you want someone to write?

I'd be really interested in reading an updated post that makes the case for there being an especially high (e.g. >10%) probability that AI alignment problems will lead to existentially bad outcomes.

My understanding is that Toby Ord does just this in his new book The Precipice (his new AI x-risk estimate is also discussed in his recent 80K podcast interview about the book), though it would still be good to have others weigh in.

Quantifying lives saved by individual actions against COVID-19

This version, which has been making the rounds on Twitter, makes the point even plainer: "Flattening the pandemic curve" (source)

The syntax for embedding images is `![alt text](url)`. For this and other forum formatting issues, try googling something like "markdown insert image" or "markdown cheatsheet" (still what I do, despite using Markdown regularly).
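For instance, filled in with a hypothetical image URL:

```markdown
![Flattening the pandemic curve](https://example.com/flatten-the-curve.png)
```

The text in square brackets is the alt text shown if the image fails to load; the URL in parentheses must point directly at the image file.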
