Pronouns: she/her or they/them.
I got interested in effective altruism back before it was called effective altruism, back before Giving What We Can had a website. Later on, I got involved in my university EA group and helped run it for a few years. Now I’m trying to figure out where effective altruism can fit into my life these days and what it means to me.
I write on Substack, and used to write on Medium.
I hope that moral progress on animal rights/animal welfare will take much less than 1,000 years to achieve a transformative change, but I empathize with your disheartened feeling about how slow progress has been. Something taking centuries to happen is slow by human (or animal) standards but relatively fast within the timescales that longtermism often thinks about.
The only intervention discussed in relation to the far future at that first link is existential risk mitigation, which indeed has been a topic of discussion within the EA community for a long time. My point is that if such discussions were happening as early as 2013, and indeed even earlier, before effective altruism existed, then that part of longtermism is not a new idea. (And none of the proposed longtermist interventions, other than those relating to existential risk, are simultaneously novel, realistic, important, and genuinely motivated by longtermism.) Whether people care if longtermism is a new idea or not is, I guess, another matter.
I agree with your first paragraph (and I think we probably agree on a lot!), but in your second paragraph, you link to a Nick Bostrom paper from 2003, which is 14 years before the term "longtermism" was coined.
I think, independently from anything to do with the term "longtermism", there is plenty you could criticize in Bostrom's work, such as being overly complicated or outlandish, despite there being a core of truth in there somewhere.
But that's a point about Bostrom's work that long predates the term "longtermism", not a point about whether coining and promoting that term was a good idea or not.
My biggest takeaway from the comments so far is that many or most of the commenters don't care whether longtermism is a novel idea, or at least care about that much less than I do. I hadn't really considered that before; I didn't expect that to be the response.
I guess it's fine to not care about that. The novelty (or lack thereof) of longtermism matters to me because it sure seems like a lot of people in EA have been talking and acting like it's a novel idea. I care about "truth in advertising" even as I also care about whether something is a good idea or not.
I think the existential risk/global catastrophic risk work that longtermism-under-the-name-"longtermism" builds on is overall good and important, and most likely quite actionable (e.g. detect those asteroids, NASA!), even though there may be major errors in it, as well as other flaws and problems, such as a lot of quite weird and farfetched stuff in Nick Bostrom's work in particular. (I think the idea in Bostrom's original 2002 paper that the universe is a simulation that might get shut down is roughly on par with supernatural ideas about the apocalypse like the Rapture or Ragnarök. I think it's strange to find it in what's trying to be serious scholarship, and it makes that scholarship less serious.)
The fundamental point about risk is quite simple and intuitive: 1) humans are biased toward ignoring low-probability events that could have huge consequences, and 2) when thinking about such events, including those that could end the world, we should consider not just the people alive today and the world today, but the consequences for the world for the rest of time and for all future generations.
That's a nearly perfect argument! It's also something you can explain in under a minute to anyone, and they'll intuitively get it, and probably agree or at least be sympathetic.
As I recall, when NASA surveyed or consulted the American public, the public's desire for NASA to work on asteroid defense was overwhelmingly higher than NASA expected. I think this is good evidence that the general public finds arguments of this form persuasive and intuitive. And I believe that when NASA learned the public cared so much, it prioritized asteroid defense much more than it had previously.
I don't have any data on this right now (I could look it up), but to the extent that people, especially outside the U.S., haven't turned covid-19 into a politically polarized, partisan issue and haven't bought into conspiracy theories or pseudoscience/non-credible science, I imagine there would be strong support for pandemic preparedness among people who thought the virus was real, the threat was real, and the alarmed response was appropriate. This isn't rocket science. Or, with asteroid defense, it literally is, but understanding why we want to launch the rockets isn't rocket science.
One of the best comments I've read on the EA Forum. Perceptive and nimbly expressed. Thank you.
people 100 years ago that did boring things focused on the current world did more for us than people dreaming of post-work utopias.
Very well said!
To that extent, the focus on x-risk seems quite reasonable: still existing is something we actually can reasonably believe will be valued by humans in a million years time
I totally agree. To be clear, I support mitigation of existential risks, global catastrophic risks, and all sorts of low-probability, high-impact risks, including those on the scale of another pandemic like covid-19 or large volcanic eruptions. I love NASA's NEO Surveyor.
I think, though, we may need to draw a line between acts of nature (asteroids, volcanoes, and natural pandemics) and acts of humankind (nuclear war, bioterror, etc.).
The difference between nature and humankind is that nature does not respond to what we do. Asteroids don't try to foil our defenses. In a sense, viruses "try" to beat our vaccines and so on, but that has long been baked into our idea of what viruses are, and it isn't the same thing as what humans do when we're in an adversarial relationship with them.
I certainly think we should still try our absolute best to protect humanity against acts of humankind like nuclear war and bioterror. But it's much harder, if not outright impossible, to get good statistical evidence for the probability of events that depend on what humans decide to do, using all their intelligence and creativity, as opposed to a natural phenomenon like an asteroid, a virus, or a volcano. We might need to draw a line between nature and humankind and accept that rigorous cost-effectiveness estimates on the humankind side of the line may not be possible, and at the very least are much more speculative and uncertain.
I don't think that's an argument against doing a lot about them, but it's an important point nonetheless.
With AI, the uncertainty that exists with nuclear war and bioterror is cranked up to 11. We're talking about fundamentally new technologies based on, most likely, new science yet to be discovered, and even new theoretical concepts in the science yet to be developed, if not an outright new theoretical paradigm. This is quite different from bombs that already exist and have been exploded before. With bioterror, we already know natural viruses are quite dangerous (e.g. just releasing natural smallpox could be bad), and I believe there have been proofs-of-concept of techniques bioterrorists could use. So, it's more speculative, but not all that speculative.
Imagine this idea: someday in the hazy future, we invent the first AGI. Little do we know, this AGI is perfectly "friendly", aligned, safe, benevolent, wise, nonviolent, and so on. It will be a wonderful ally to humanity, like Data from Star Trek or Samantha from Her. Until... we decide to apply the alignment techniques we've been developing to it. Oh no, what a mistake! Our alignment techniques actually do the opposite of what we wanted, and turn a friendly, aligned, safe, benevolent AI into an unfriendly, misaligned, unsafe, rogue, dangerous AI. We caused the very disaster we were trying to prevent!
How likely is such a scenario? There's no way to know. We simply have no idea, and we have no way of finding out.
This helps illustrate, I hope, one of the problems (out of multiple distinct problems) with precautionary arguments about AGI, particularly if back-of-the-envelope cost-effectiveness calculations are used to justify spending on precautionary research. There is no completely agnostic way to reduce risk. You have to make certain technical and scientific assumptions to justify funding AI alignment research. And how well-thought-out, or well-studied, or well-scrutinized are those assumptions?
Wow, this makes me feel old, haha! (Feeling old feels much better than I thought it would. It's good to be alive.)
There was a lot of scholarship on existential risks and global catastrophic risks going back to the 2000s: Nick Bostrom and the Future of Humanity Institute at Oxford, the Global Catastrophic Risks Conference (e.g. I love this talk from the 2008 conference), the Global Catastrophic Risks anthology published in 2008, and so on. So, existential risk/global catastrophic risk had already been studied extensively for about a decade before "longtermism" was coined. Imagine my disappointment when I heard about this hot new idea called longtermism (I love hot new ideas!) and it turned out to be rewarmed existential risk.
I agree that it might be perfectly fine to re-brand old, good ideas, and give them a fresh coat of paint. Sure, go for it. But I'm just asking for a little truth in advertising here.
I mentioned that you often see journalists or other people not intimately acquainted with effective altruism conflate ideas like longtermism and transhumanism (or related ideas about futuristic technologies). This is a forgivable mistake because people in effective altruism often conflate them too.
If you think superhuman AGI is 90% likely within 30 years, or whatever, then obviously that will affect everyone alive on Earth today who is lucky (or unlucky) enough to live until it arrives, plus all the children born between now and then. Longtermists might think the moral value of the far future makes this even more important. But, in practice, people who aren't longtermists but who also think superhuman AGI is 90% likely within 30 years seem equally concerned about AI. So, is that concern really longtermist?