
[A chapter in the new general-audience memoir Losing My Religions. Now available for purchase or free download. Thanks to my many friends and enemies for their feedback, wanted and not.]
 

And all the answers that I started with

Turned out questions in the end

–Alison Krauss, “Gravity”


{REDACTED} 

Longtermists are driven by the belief that summing up all the future joy from (hopefully sentient, but how could we ever know for sure?) hedonistic robots vastly and absolutely swamps any concerns of the moment.

Our biggest bias?

As discussed in [previous chapter], I don’t believe that someone’s happiness always offsets someone else’s suffering. Furthermore, not only do I think existing suffering trumps possible future pleasure, I also don’t believe that humanity’s continued existence is a self-evident good, nor should it be a postulate for any ethical calculation. I know that Effective Altruists (EAs) tend to be well-off humans who like and thus value existence, but that value is not inherent. It is a bias, an intuition that leads EAs to assume continued existence is unquestionably a good thing. Watch The Killing Fields and read Night if you doubt that this is actually an open question.

Pascal’s Murder

I understand expected values, but think about what these longtermist calculations say: a tiny chance of lowering existential risk (a vanishingly small probability of improving the likelihood that quadzillions of happy robots will take over the universe) is more important than, say, stopping something like the Holocaust. Seriously. If a longtermist were alive in 1938 and knew what was going on in Nazi Germany, they would turn down the opportunity to influence public opinion and policy: “An asteroid might hit Earth someday. The numbers prove we must focus on that.” The expected values are clear: any chance that studying asteroids might allow for quadzillions of robots vastly swamps any impact on concentration camps.

By the way, longtermists would have to look fondly on World War II. It was never an extinction threat, and it vastly accelerated technological progress, especially rocketry and computation.

Over at Vox, Dylan Matthews’ article, “I spent a weekend at Google talking with nerds about charity. I came away … worried,” is a pretty good starting point for understanding this discussion. Excerpts:

The common response I got to this was, “Yes, sure, but even if there’s a very, very, very small likelihood of us decreasing AI [artificial intelligence] risk, that still trumps global poverty, because infinitesimally increasing the odds that 10^52 people in the future exist saves way more lives than poverty reduction ever could.”

The problem is that you could use this logic to defend just about anything. Imagine that a wizard showed up and said, “Humans are about to go extinct unless you give me $10 to cast a magical spell.” Even if you only think there’s a, say, 0.00000000000000001 percent chance that he's right, you should still, under this reasoning, give him the $10, because the expected value is that you're saving 10^32 lives. Bostrom calls this scenario “Pascal’s Mugging,” and it’s a huge problem for anyone trying to defend efforts to reduce human risk of extinction to the exclusion of anything else. 

[U]ltimately you have to stop being meta ... if you take meta-charity too far, you get a movement that’s really good at expanding itself but not necessarily good at actually helping people [or other animals -ed].
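For what it’s worth, the arithmetic behind the wizard scenario is easy to write out. Here is a toy sketch in Python, using the probability and population figures from Matthews’ example; the cost-per-life figure for a concrete charity is my own illustrative assumption, not his:

```python
# Toy sketch of the "Pascal's Mugging" arithmetic from Matthews' wizard example.
# The probability and population come from his quote; the charity figure is
# only an illustrative assumption.

chance_wizard_is_right = 1e-19    # "0.00000000000000001 percent"
future_people_at_stake = 1e52     # the 10^52 future people quoted above
cost_of_spell = 10                # dollars

expected_lives_saved = chance_wizard_is_right * future_people_at_stake
wizard_lives_per_dollar = expected_lives_saved / cost_of_spell

# A very effective global health charity might save a life for a few
# thousand dollars (illustrative assumption).
charity_lives_per_dollar = 1 / 5000

print(f"Expected lives per dollar, wizard:  {wizard_lives_per_dollar:.1e}")
print(f"Lives per dollar, concrete charity: {charity_lives_per_dollar:.1e}")
# The wizard "wins" by dozens of orders of magnitude, which is exactly the
# problem: this logic can be used to defend just about anything.
```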

Clueless yet Certain

Furthermore, no one can know whether the impact of their longtermist efforts will even be positive. This is called sign uncertainty (aka cluelessness): we don’t and can’t know whether actions aimed at the long-term future will have positive or negative repercussions.

There are plenty of examples, but one involves work on AI. It is entirely possible that by trying to rein in or slow down the development of AI in the United States (e.g., forcing researchers to stop and try to address the alignment problem), an unfettered totalitarian AI from China could be first and pre-empt every other attempt. Oops.

Another example: EAs talking about the threat of an engineered virus (à la Margaret Atwood’s fantastic Oryx and Crake) might be what gives a real-world Crake his idea! Defense always has to be perfect; it only takes one lucky person who thinks humanity needs a reset for everything to (allegedly) go south.

Outsourcing to Berger

Don’t take my word for it. Alexander Berger of the Open Philanthropy Project covered this well in his interview on the 80,000 Hours podcast:

I think it makes you want to just say wow, this is all really complicated and I should bring a lot of uncertainty and modesty to it. ... I think the more you keep considering these deeper levels of philosophy, these deeper levels of uncertainty about the nature of the world, the more you just feel like you’re on extremely unstable ground about everything. ... my life could totally turn out to cause great harm to others due to the complicated, chaotic nature of the universe in spite of my best intentions. ... I think it is true that we cannot in any way predict the impacts of our actions. And if you’re a utilitarian, that’s a very odd, scary, complicated thought. 

But I think that in some sense, basically ignoring it and living your life like you are able to preserve your everyday normal moral concerns and intuitions to me seems actually basically correct.

I think the EA community probably comes across as wildly overconfident about this stuff a lot of the time, because it’s like we’ve discovered these deep moral truths, then it’s like, “Wow, we have no idea.” I think we are all really very much – including me – naïve and ignorant about what impact we will have in the future.

I’m going to rely on my everyday moral intuition that saving lives is good ... I think it’s maximizable, I think if everybody followed it, it would be good.

And from his interview with The Browser:

I’m not prepared to wait. The ethos of the Global Health and Wellbeing team is a bias to improving the world in concrete actionable ways as opposed to overthinking it or trying so hard to optimize that it becomes an obstacle to action.  We feel deep, profound uncertainty about a lot of things, but we have a commitment to not let that prevent us from acting.

I think there are a lot of ways in which the world is more chaotic than [we think]. [S]ometimes trying to be clever by one extra step can be worse than just using common sense.

Awesome.

It’s funny ’cuz it’s not funny

When people think they have the answer, and it just happens to be their math, sometimes sarcasm works best. 

I have no qualms with looking at an issue’s scale, neglectedness, and tractability. (A calculation like this is what led to One Step for Animals.) Taking this literally and as the only consideration, however, leads to D0TheMath’s “Every moment of an electron’s existence is suffering.” Excerpts:

Scale: If we think there is only a 1% chance of panpsychism being true (the lowest possible estimate on prediction websites such as Metaculus, so highly conservative), then this still amounts to at least 10^78 electrons impacted in expectation.

Neglectedness: Basically nobody thinks about electrons, except chemists, physicists, and computer engineers. And they only think about what electrons can do for them, not what they can do for the electrons. This amounts to a moral travesty far larger than factory farms.

Tractability: It is tremendously easy to affect electrons, as shown by recent advances in computer technology, based solely on the manipulation of electrons inside wires.

This means every moment of an electron’s existence is pain, and multiplying out this pain by an expected 10^78 produces astronomical levels of expected suffering.
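The same expected-value move is easy to write out with the post’s own (joke) numbers; a toy sketch, with the total electron count being a rough order-of-magnitude assumption on my part:

```python
# Toy sketch of the "Scale" step above, using the joke post's numbers.
# Nothing here is a real estimate of anything.

p_panpsychism = 0.01            # the post's "highly conservative" 1% chance
electrons_in_universe = 1e80    # rough order-of-magnitude assumption

expected_electrons_impacted = p_panpsychism * electrons_in_universe
print(f"Electrons impacted in expectation: {expected_electrons_impacted:.0e}")
# ~1e78, matching the post. Once any nonzero probability gets multiplied by
# an astronomical number, the product swamps every concrete moral concern.
```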

The post is funny (April Fools!), and can, of course, be picked apart. (As can everything!) But it is very close to how some EAs think! (And some people really do believe in panpsychism. Not funny.) I knew one EA who stopped donating to animal issues and instead donated to Christian missionaries. There may be only a small chance the missionaries are right about god, but if they are, the payoff for every saved soul is literally infinite! He actually put his money down on Pascal’s Wager!

You may be right. I may be crazy. But it just may be a lunatic you're looking for.

I don’t know that I’m right; as I mentioned [in previous chapters], I’ve changed my mind many times before. I understand smart people think I’m entirely mistaken. But in addition to cluelessness, I would at least like longtermists to regularly and overtly admit the opportunity costs; e.g., choosing to write 80,000 essays about 80,000 years in the future means actively choosing not to help individuals who are suffering terribly right now.

You might wonder why I continue to flog this issue, blogging about it regularly (1, 2, 3, 4). It is because I am continually saddened that, in a world filled with so much acute and unnecessary suffering, so many very smart people dedicate their 80,000-hour careers to trying to one-up each other’s expected values.

I welcome our robot overlords, and you should, too.

PS: The day after I finished this chapter, an essay by OPP’s Holden Karnofsky landed in my inbox: “AI Could Defeat All Of Us Combined.”

My first reaction was: “Good.” 

He is worried about “the alignment problem,” that the artificial intelligences we create might not share our values.

Holden writes:

By “defeat,” I don’t mean “subtly manipulate us” or “make us less informed” or something like that – I mean a literal “defeat” in the sense that we could all be killed, enslaved or forcibly contained.

Um, hello? Humans enslave, forcibly contain, and kill billions of fellow sentient (but “inferior”) beings. So if a “superior” AI actually did share human values, it seems it would kill, enslave, and forcibly contain us.

Holden, like almost every other EA and longtermist, simply assumes that humanity shouldn’t be “defeated.” Rarely does anyone note that it is possible, even likely, that on net, things would be much better if AIs did replace us.  

The closest Holden comes is when he addresses objections:

Isn’t it fine or maybe good if AIs defeat us? They have rights too.

  • Maybe AIs should have rights; if so, it would be nice if we could reach some “compromise” way of coexisting that respects those rights.
  • But if they’re able to defeat us entirely, that isn’t what I’d plan on getting – instead I’d expect (by default) a world run entirely according to whatever goals AIs happen to have.
  • These goals might have essentially nothing to do with anything humans value, and could be actively counter to it – e.g., placing zero value on beauty and having zero attempts to prevent or avoid suffering.

Zero attempts to prevent suffering? Aren’t you mistaking AIs for humanity? Humanity is the cause of most of the world’s unnecessary suffering, both human and not. (And if you start tossing around the numbers of wild animals, you are proving my point by missing the point.)

Setting aside our inherent tribal loyalties to humanity and our bias for continued existence, it is entirely likely that AIs defeating humanity would be an improvement. Probably a huge improvement.


Comments (6)

I understand expected values, but think about what these longtermist calculations say: a tiny chance of lowering existential risk (a vanishingly small probability of improving the likelihood that quadzillions of happy robots will take over the universe) is more important than, say, stopping something like the Holocaust. Seriously.  If a longtermist was alive in 1938 and knew what was going on in Nazi Germany, they would turn down the opportunity to influence public opinion and policy: “An asteroid might hit Earth someday. The numbers prove we must focus on that.”


I think a longtermist in 1938 may well have come to the conclusion that failing to oppose the Holocaust (and Nazism more broadly) would also be bad from a longtermist perspective. This is because it would increase the likelihood of a long-term totalitarian state that isn't interested in improving the overall welfare of sentient beings. 

I find it hard to believe this is a position in good faith. Do you think we should kill all humans e.g. by engineering deadly viruses and releasing them into every population center in the world?

Honestly, this is why I won't be engaging with comments. 

How is this a question based on anything I've written? I'm arguing that we should reduce unnecessary suffering that exists right now. So instead of addressing that, you accuse me of wanting to kill all humans?

Good faith, indeed. Yikes. 

Anyone with legit questions and insights (as I said, I could be wrong!) knows where to find me.

Over and out.

This is how I understand your argument.

P1: Humans are really bad for all other sentient beings.
P2: AI can defeat humans.
P3: AI would be better for other sentient beings than humans.
C: It would be good for sentient beings if AI defeated humans.

I'm asking why "AI" is unique to this argument and why you couldn't replace "AI" with any other method that kills all humans but leaves other sentient beings alive, e.g. "engineered virus". I could be crazy but I genuinely don't see what part of your argument precludes that.

Edit: I should have made it clear in my original comment that this is a question about the titular claim and the final section, and not about the parts in between which I read as arguments for near termism that I completely agree with.

Hey Matt, I thought this was really interesting. I think the mistake people make is seeing saving lives as a very isolated (subjective) good that is at odds with collective goods (zero-sum thinking), when actually saving lives has a load of other cumulative benefits as well. For instance, lowering infant mortality lowers the birth rate, mitigating overpopulation; not having family members die is better for people's mental health and productivity; and not starving means people can reach their potential, contributing more to the world as a whole.

Those people who don't die and aren't starving might very well be the ones who solve the asteroid crisis. I've always been a Karl Popper fan for this very reason: quantifiable goods in the here and now don't necessarily contradict distant unquantifiable goods. In fact, as you lock in piecemeal rights and well-being factors for people, that layer of security can be built upon, making future benefits more likely.

Thank you for bringing up some of these important issues that are hard to talk about, Matt.
