Note: I'm crossposting this from the United States of Exception Substack with the author's permission. The author may not see or respond to comments on this post. I'm posting this because I thought it was interesting and relevant, and don't necessarily agree with any specific points made.


A good and wholesome K-strategist.

I am a climate change catastrophist, but I’m not like all the others. I don’t think climate change is going to wipe out all life on Earth (as 35% of Americans say they believe) or end the human race (as 31% believe). Nor do I think it’s going to end human life on Earth but that human beings will continue to exist somewhere else in the universe (which at least 4% of Americans would logically have to believe). Nevertheless, I think global warming is among the worst things in the world — if not #1 — and addressing it should be among our top priorities.

Friend of the blog Bentham's Bulldog argues that this is silly, because even though climate change is very bad, it’s not the worst thing ever. The worst thing ever is factory farming, and climate change “rounds down to zero” when compared to factory farming.

I disagree. I think there is a plausible case that climate change is orders of magnitude worse than factory farming. In fact, I think I can convince Bentham of this (that it’s plausible, not that it’s definitely true) by the end of the following sentence: Climate change creates conditions that favor r-selected over K-selected traits and species in most environments, and these effects can be expected to last for several million years.

I don’t know if I’ve already convinced him. For most people, that sentence is probably nonsense. But if you’re familiar at all with the concept of wild animal suffering, it should start to raise some alarm bells.

Biologists describe species’ reproductive strategies along a continuum of r-selection to K-selection, based on how a species trades off between quantity and quality of offspring. r-strategists reproduce in great numbers and invest little in each individual, hence each offspring has a very low chance of surviving to sexual maturity. K-strategists produce fewer offspring and invest a great deal of resources into raising each of them. Think “r” for rabbit and “K” for kangaroo. (Rabbits fuck like rabbits, and a mother kangaroo carries a joey in her pouch for months.)

r-strategists and more r-selected traits are usually advantaged when the environment is more precarious. Producing a large number of offspring improves the chance that at least some of them will survive to sexual maturity and pass their genes on to the next generation. More r-selected species also typically have shorter lifespans and reproduce faster after birth, which means they can more easily adapt to new conditions. K-strategists, on the other hand, are better suited for more stable environments where their offspring are more likely to survive and they don’t need to adapt quickly to changing conditions.

The reason that r-selection is so bad if you have values like Bentham’s and mine and you care about the hedonic well-being of individuals is that the average life of an r-strategist is extremely bad, and significantly worse than the average life of a K-strategist. For a species’ population to be stable in the long term, an average of only one offspring per parent needs to survive to adulthood and go on to reproduce. Hence, while species that produce few offspring should tend to have low premature mortality rates, most members of species that produce many offspring at a time die extremely painful deaths shortly after they’re brought into existence. Oscar Horta estimates that among a single population of one million Atlantic cod in the Gulf of Maine — who each spawn about 2 million eggs per reproductive cycle — the expected number of seconds of suffering among codlings who fail to survive is about 200 billion per year, or 6,338 suffering-years per year. Invertebrate populations are typically even worse off, since they also produce thousands or millions of offspring per cycle but their reproductive cycles are much shorter.
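Horta's figure can be sanity-checked with a quick back-of-the-envelope calculation, using only the numbers quoted above (a sketch for illustration, not part of Horta's own analysis):

```python
# Sanity check on the Atlantic cod estimate quoted above.
SECONDS_PER_YEAR = 365.25 * 24 * 60 * 60  # ~31.56 million seconds

suffering_seconds_per_year = 200e9  # 200 billion seconds of suffering per year
suffering_years_per_year = suffering_seconds_per_year / SECONDS_PER_YEAR

print(round(suffering_years_per_year))  # ~6,338 suffering-years per year
```

Dividing 200 billion seconds by the number of seconds in a year recovers the ~6,338 suffering-years figure in the text.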

What makes climate change so bad is not just that it might wipe out a lot of wild animals (in itself, that would very plausibly, but not definitely, be a good thing), but that by making the natural environment more precarious, it would both wipe out a lot of wild animals and advantage the remaining r-strategists and r-selected traits. In fact, this is what the Intergovernmental Panel on Climate Change says climate change is already doing. Namely:

Biodiversity loss and degradation, damages to and transformation of ecosystems are already key risks for every region due to past global warming and will continue to escalate with every increment of global warming (very high confidence).

And specifically, among land animals:

3 to 14% of species assessed will likely face very high risk of extinction at global warming levels of 1.5°C, increasing up to 3 to 18% at 2°C, 3 to 29% at 3°C, 3 to 39% at 4°C, and 3 to 48% at 5°C. [The United Nations estimates that without further action, global temperatures will rise by as much as 3.1°C by 2100.]

Just the ~1.2°C in global warming we’ve observed so far has already contributed to a 73% decline in the population of wild vertebrates since 1970 and the extinction of up to 2.5% of vertebrate species. The invertebrate population has also likely declined, but the magnitude of the decline is unclear. In the near-to-medium term, it’s likely that both vertebrate and invertebrate populations will continue to fall as the environment becomes even more inhospitable. Throughout this period, more K-selected species will probably be more likely to go extinct than more r-selected species, as r-strategists can adapt better to a more variable environment. We should therefore expect both r- and K-strategists to become more r-selected, and r-strategists to make up an increasing share of the population of wild animals.

If global temperatures stabilize again in a few centuries, and the total biomass of wild animals returns to normal, nature will likely be populated disproportionately with r-strategists compared to what it would be if anthropogenic climate change had not happened. This will also likely persist for a very long time, as it has historically taken millions of years after a mass extinction for full species diversity to return.

There are an estimated 10 trillion (10¹³) vertebrate individuals on Earth, as well as 10 sextillion (10²²) invertebrates. If we assume conservatively (and just for illustrative purposes) that the biomass of 10% of the vertebrate population is converted to smaller-bodied animals — say, half as large — each of whom produces an extra 10 offspring per year who experience one day of suffering and then die, the number of extra suffering-years caused per year would be 55 billion, or more than the entire number of suffering-years caused by all land-based factory farms per year. If you accounted at all for how the reproductive strategies of invertebrates might change, the total would be mind-bogglingly bigger. But even if you just stick with vertebrates and assume the effect lasts one million years, the effect of climate change on wild animal suffering would be at least 55 quadrillion suffering-years, which is orders of magnitude greater than the amount of suffering that factory farming ever has and likely ever will produce.
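The arithmetic behind the illustration above can be written out explicitly. All the inputs are the article's own assumptions, chosen purely for illustrative purposes:

```python
# Reproducing the illustrative estimate above. Every figure here is an
# assumption from the text, not an empirical claim.
vertebrates = 10e12             # ~10 trillion vertebrate individuals on Earth
converted_fraction = 0.10       # 10% of vertebrate biomass shifts r-ward
size_ratio = 0.5                # new animals are half as large...
new_animals = vertebrates * converted_fraction / size_ratio  # ...so twice as many

extra_offspring_per_year = new_animals * 10       # 10 extra offspring each per year
days_of_suffering_each = 1
suffering_years_per_year = extra_offspring_per_year * days_of_suffering_each / 365.25

print(f"{suffering_years_per_year:.2e}")          # ~5.5e10: ~55 billion per year

# Assuming the effect persists for one million years:
total_suffering_years = suffering_years_per_year * 1_000_000  # ~5.5e16: ~55 quadrillion
```

The per-year total (~55 billion suffering-years) and the million-year total (~55 quadrillion) match the figures in the paragraph above.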

This is simply an unfathomable amount of suffering; there’s basically nothing that comes close. Even if you think it’s a good thing that climate change is reducing wild animal populations in the near-to-medium term because wild animals live net-negative lives, the effect of reduced population is only likely to last a few hundred years until temperatures again stabilize. However, the likely mass extinction of K-strategists and the concomitant increase in r-selection might last for millions of years.

Now, you should be skeptical of everything I say here, since I’m not a biologist. Climate change might permanently reduce the biomass of wild animals, which would likely reduce total suffering even if it increases the share of r-strategists. AI might become godlike and help us re-engineer the biosphere to maximize utils. There might be some environments where a warmer climate actually favors K-selected traits. But I think the case I lay out above is at least plausible, and if you give it any non-negligible credence, you should probably agree that climate change deserves to be treated as a top social priority, either alongside or ahead of factory farming.

Comments

Brian Tomasik considers selection toward animals with faster life histories in his piece on the effects of climate change on wild animals. He seems to think it's not decisive (he ends up roughly 50–50 on the sign of climate change's effect on overall animal suffering) for ~three reasons (paraphrasing Tomasik):

  • Some of the animals with slower life histories that get replaced are carnivorous/omnivorous, which might mean climate change increases invertebrate populations (fewer predators means more surviving prey).
  • Instability might also affect plants, which could lower net primary productivity and hence invertebrate populations.
  • Many of the "ultimate" life forms with fast life histories will be microorganisms that we don't put much moral weight on.

I'd be curious how you think the arguments in the above post should change Tomasik's view, in light of these considerations.

"However, the likely mass extinction of K-strategists and the concomitant increase in r-selection might last for millions of years."

I like learning about ecology and evolution, so personally I enjoy these kinds of thought experiments.  But in the real world, isn't it pretty unlikely that natural ecosystems will just keep humming along for another million years?  I would guess that within just the next few hundred years, human civilization will have grown in power to the point where it can do what it likes with natural ecosystems:
 

  • perhaps we bulldoze the earth's surface in order to cover it with solar panels, fusion power plants, and computronium?
  • perhaps we rip apart the entire earth for raw material to be used for the construction of a Dyson swarm?
  • more prosaically, maybe human civilization doesn't expand to the stars, but still expands enough (and in a chaotic, unsustainable way) such that most natural habitats are destroyed
  • perhaps there will have been a nuclear war (or some other similarly devastating event, like the creation of mirror life that devastates the biosphere)
  • perhaps we create unaligned superintelligent AI which turns the universe into paperclips
  • perhaps humanity grows in power but also becomes more responsible and sustainable, and we reverse global warming using abundant clean energy powering technologies like carbon air capture, assorted geoengineering techniques, etc
  • perhaps humanity attains a semi-utopian civilization, and we decide to extensively intervene in the natural world for the benefit of nonhuman animals
  • etc

Some of those scenarios might be dismissable as the kind of "silly sci-fi speculation" mentioned by the longtermist-style meme below.  But others seem pretty mundane, indeed "to be expected" even by the most conservative visions of the future.  To me, the million-year impact of things like climate change only seems relevant in scenarios where human civilization collapses pretty soon, but in a way that leaves Earth's biosphere largely intact (maybe if humans all died to a pandemic?).
 

Thanks for sharing! I would not be surprised if the effects of global warming on wild animals were larger than the suffering of farmed animals. However, it is super unclear whether wild animals have positive or negative lives, including r-selected ones. So I think it makes sense to prioritise learning more about the effects on wild animals, such as by donating to the Wild Animal Initiative (WAI), instead of betting that a cooler world results in fewer animals with negative lives. More broadly, if climate change is super bad due to a specific problem (wild animal welfare, water scarcity, conflict, soil erosion, or other), I believe it is better to target that problem directly/explicitly and without constraints, instead of via decreasing greenhouse gas (GHG) emissions, which narrows the number of available interventions a lot.

I think solutions that solve all of climate change may be more tractable than wide-reaching solutions for factory farming / animal suffering.


Climate change solutions that cost $1 trillion a year or less and don't require widespread political change:

  • SO2 injection
  • Olivine rock weathering
  • Continued steep cost declines in renewables & batteries
  • Abundant carbon-neutral synthetic gas

How to Solve Climate Change https://unchartedterritories.tomaspueyo.com/p/we-can-already-stop-climate-change 

Current SO2 Credits https://unchartedterritories.tomaspueyo.com/p/so2-injection

Donate
https://makesunsets.com/products/join-the-next-balloon-launch-and-cool-the-planet 

Olivine Rock Weathering https://worksinprogress.co/issue/olivine-weathering/ 

Steep Renewables & Batteries Cost Declines https://caseyhandmer.wordpress.com/2024/11/09/solar-and-batteries-for-generic-use-cases/ 

Abundant Carbon Neutral Synthetic Gas https://caseyhandmer.wordpress.com/2024/06/24/how-terraform-navigated-the-idea-maze/ 

I crossposted this because it was an interesting read, and it makes an argument that I've never heard before. I'd be curious if anyone with more expertise has takes on this! :)

👋 Looks interesting! What do you think about having the title reflect its origins, e.g. "linkpost: Climate Change Is Worse Than Factory Farming", or "suggested reading: [X]" or something like that?

At a glance right now, the UX here looks like the EA Forum Team is itself endorsing this pretty radical position. (FWIW I appreciate the drive to cross-post interesting material/the broader drive to improve the forum experience, I have been thinking about your other post a bit lately and hope to respond soon)

Gosh yeah that's reasonable. I was hoping to avoid making another team account to do this kind of crossposting but that's probably the best solution. I guess it's time for SummaryBot to get a sibling, maybe LinkpostBot?

I think it's best to post this under your own account.

Interesting, why's that? :)

For context: I want the Forum team to be able to do more active crossposting in the future, so it seems reasonable to have a sort of "placeholder" account for when the author of the original piece doesn't have a Forum account. Personally when I see a linkpost, I generally assume that the author here is also the original author (outside of obvious cases like a NYT article link), and it's kinda confusing when that's not the case (I'm more used to this now but it was extra confusing when I was a new user). I also personally feel kinda weird getting karma for just linking to someone else's work, and probably don't want my Forum profile to mostly be links to other articles.

On the other hand, I do want users to feel free to linkpost external articles that they didn't write, especially if they add their own commentary, or even just make some editorial decisions on which parts to quote. (That's why I was fine with crossposting this using my own account, for example.)

Caveat: I consider these minor issues, I hope I don't come across as too accusatory.

Interesting, why's that? :)

It seems that the reason for cross-posting was that you personally found it interesting. If you use the EA forum team account, it sounds a bit like an "official" endorsement, and makes the Forum Team less neutral.

Even if you use another account name (eg "selected linkposts") that is run by the Forum Team, I think there should be some explanation how those linkposts are selected, otherwise it seems like arbitrarily privileging some stuff over other stuff.

A "LinkpostBot" account would be good if the cross-posting is automated (e.g. every ACX article that mentions Effective Altruism).

I also personally feel kinda weird getting karma for just linking to someone else's work

I think it's fine to gain karma by virtue of linkposting and being an active forum member. I won't be bothered by it and I think you shouldn't worry about it (although I can understand that it might feel uncomfortable to you). Other people are also allowed to linkpost.

Personally when I see a linkpost, I generally assume that the author here is also the original author

I think starting the title with [linkpost] fixes that issue.

Thanks! I basically landed on using my personal account since most people seem to prefer that. I suppose I'll accept the karma if that's what everyone else wants! :P

Honestly I think it's somewhat misleading for me to post with my account because I am posting this in my capacity as part of the Forum Team, even though I'm still an individual making a judgement. It's like when I get a marketing email signed by "Liz" — probably this is a real person writing the email, but it's still more the voice of the company than of an individual, so it feels a bit misleading to say it's from "Liz". On the other hand, I guess all my Forum content has been in my capacity as part of the Forum Team so no reason to change that now! :)

(I also agree with your points about "LinkpostBot" feeling like it should be an automation, and that having a team account for linkposting runs the risk of making those seem privileged.)

I think that's a good idea -- or just post as yourself (?)

(Ofc I think I and others understand that things are in flux and this is all NBD)
