Cody_Fenwick

Yeah that all seems plausible to me! I think your argument here should successfully deflate a lot of the motivation deontologists have for advocating against consequentialism, especially if they concede (which many seem to) that consequentialists don't tend to act like naive consequentialists.

Of course, some philosophers may just like talking about which theory they think is true, regardless of whether their theory would imply that they should do that. :)

Thanks for sharing! This seems like a really interesting and strong argument, and I think this perspective on deontology has been under-appreciated.

But I think maybe you push the practical implications further than the arguments justify. For example, you say:

>Given how publicly hostile to consequentialism many deontologists are, this result is big news that should change their attitudes and behavior. Even if they are personally constrained against lying for the greater good, they should at least be happy to see sincere consequentialists winning out in the marketplace of ideas. Depending on the details of their view, it may even be wrong for them to interfere by discouraging consequentialist thought (and action) in others. 

But I don't think this implication really follows from your argument (as I understand it), because your argument depends on heavily stylized examples where all the crucial factors are stipulated.

As you say in a footnote:

>on utilitarian grounds, we should generally want people to be disposed to respect rights (and not easily override this disposition since their naive “calculations” are unreliable). But since this reason is merely instrumental, we should of course prefer the better outcome in any situation where it is stipulated that overriding this disposition would actually turn out for the best.

But the quiet deontologist could believe:

1. They have strong reasons to avoid committing rights violations, while finding it preferable that others violate rights when that would lead to better outcomes overall.
2. It makes sense to publicly advocate against consequentialism, because the cases in which violating rights actually leads to better outcomes overall are quite rare and unlikely to be decision-relevant — something the utilitarians often admit!

So this helps make the quiet deontologist's public advocacy for their view more explicable and sensible. You might think this then puts the quiet deontologist in a bizarre position, where the reason they advocate for a view and the reason they hold it sharply diverge. But I think they'd say that advocating for the true view about what reasons each person has will actually make things go better overall, which is both consistent and a sufficient justification for advocating deontology.

Though it's possible I'm missing something here — curious what you think!

At 80,000 Hours, we published an article on this topic in 2023 by Benjamin Todd. It's a follow-up to Toby Ord's original work, and looks at other datasets and cause areas.

Benjamin concluded:

>Overall, I think it’s defensible to say that the best of all interventions in an area are about 10 times more effective than the mean, and perhaps as much as 100 times.

And also:

>People in effective altruism sometimes say things like “the best charities achieve 10,000 times more than the worst” — suggesting it might be possible to have 10,000 times as much impact if we only focus on the best interventions — often citing the DCP2 data as evidence for that.
>
>This is true in the sense that the differences across all cause areas can be that large. But it would be misleading if someone was talking about a specific cause area in two important ways.

There's a ton more detail in the article.

I don't think this is an accurate summary of Dario's stated views. Here's what he said in 2023 on the Dwarkesh podcast:

>Dwarkesh Patel (00:27:49 - 00:27:56):
>
>When you add all this together, what does your estimate of when we get something kind of human level look like?
>
>Dario Amodei (00:27:56 - 00:29:32):
>
>It depends on the thresholds. In terms of someone looks at the model and even if you talk to it for an hour or so, it's basically like a generally well educated human, that could be not very far away at all. I think that could happen in two or three years.

Here's what he said in a statement in February:

>Possibly by 2026 or 2027 (and almost certainly no later than 2030), the capabilities of AI systems will be best thought of as akin to an entirely new state populated by highly intelligent people appearing on the global stage—a “country of geniuses in a datacenter”—with the profound economic, societal, and security implications that would bring.

These are different ideas, so I think it would be reasonable to have different timelines for "if you talk to it for an hour or so, it's basically like a generally well educated human" and "a country of geniuses in a datacenter." Nevertheless, there's substantial overlap between these two timelines, which were given 18 months apart, and in both cases he uses language that signals some uncertainty. I don't think this is particularly suspicious — it seems pretty consistent to me.

Thanks for sharing this fun paper!

I think I disagree with several key parts of the argument.

>4. Professional philosophers are among the most educated and skeptical people on the planet. Yet, according to the 2020 PhilPapers survey, 18.83% of them accept or lean toward theism (too low due to selection effects?). 7.21% were agnostic. If we play it safe and suppose that only a third of the theist philosophers believe in hell, that’s about 6%. Thus, (on a very conservative estimate) about 6% of the most skeptical people on the planet believe in hell.

I think this makes a pretty important error in reasoning. Grant that philosophers in general are among the most skeptical people on the planet. Then you select a 6% segment of them. The generalization that these are still among the most skeptical people on the planet is erroneous. This 6% could have (e.g.) average levels of skepticism, with the rest of the group bringing up the group's average level of skepticism.

Here’s Jesus:

>“When the Son of Man comes into his glory, and all the angels with him, then he will sit on his glorious throne. Before him will be gathered all the nations and he will separate people one from another as a shepherd separates the sheep from the goats. And he will place the sheep on his right, but the goats on the left. Then the King will say to those on his right, “Come you who are blessed by my Father, inherit the kingdom prepared for you from the foundation of the world. For I was hungry and you gave me food…
>
>Then he will say to those on his left [the goats], Depart from me you cursed, into the eternal fire prepared for the devil and his angels. For I was hungry and you gave me no food…Truly, I say to you, as you did not do it to one of the least of these you did not do it to me. And these will go away into eternal punishment, but the righteous into eternal life.” (Matt 25:31-46)

This is among the passages commonly interpreted as Jesus discussing hell. However, note that it doesn't actually show Jesus discussing hell as we've been taught to think of it. First, he's clearly speaking in metaphor — he's not talking about literal sheep and goats. It's not clear what the "eternal punishment" he's referring to is. Some people interpret this as more of a "final" punishment, e.g. death, rather than eternal suffering. And indeed, if Jesus were referring to hell as traditionally conceived, I'd expect him to be clearer about this.

Many scholars on the topic have written extensively about this. My understanding is that there's little solid basis for getting the traditionally understood concept of hell out of the core ancient sources. And I'd expect, if it were true, and Jesus really were communicating about something as important as hell with divine knowledge, there would be no ambiguity about it. (Since the Quran comes after and is influenced by Christian sources, I don't think we should read it as a separate source of evidence.) 

I think this is a very strong reason to doubt the plausibility of hell. And there are many other such reasons:

  1. Generally there's little reason to think ancient texts are strong sources of truth on questions of cosmological significance.
  2. These kinds of extravagant claims are completely discordant with our ordinary experience of the world.
  3. These kinds of claims pattern match to the kinds of stories people might make up in order to control others.
  4. There are very plausible error theories about why people believe religious claims like these.
  5. There are many religious believers who reject these particular claims about hell even while being sympathetic to other religious claims.

The weight of these considerations drives the plausibility of hell extremely low, much lower in my view than the possibility of x-risk from risks like nuclear weapons, pandemics, AI, or even natural sources like asteroids (which, unlike hell, we know exist and have previously impacted the lives of species).

I think this does make the odds of a religious catastrophe pascalian, and worth rejecting on that basis.

Even if the risk weren't pascalian, I think there's another problem with this argument, with reference to this part of the argument:

>Each religion has infinite stakes, so the expected (dis)value of each is equal.
>
>  • Suppose I offer you one of two lottery tickets with the same payoff:
>
>Ticket 1: Provides a 1/10,000 probability of infinite bliss, or
>
>Ticket 2: Provides a 1/3 probability of infinite bliss.
>
>  • The expected value of selecting each ticket is infinite (therefore, equal). Are you indifferent? No.
>    • Lesson: When payoffs are equal, choose the most probable option.
>  • EAs already do this with catastrophic risks. They prioritize based on probabilities.
>  • Practical Upshot: Devote resources to religions in proportion to probabilities. Most resources to the most probable religion, second-most resources to the second-most probable religion, etc.

The problem here is that if you advocate for the wrong religion, you might increase the chance people go to hell, because some religions hold that believing in another religion would send you to hell. So actions on this basis have to grapple with the possibilities of both infinite bliss and infinite suffering, and we often might have just as much reason to think we're increasing one as decreasing the other. And since there's no reliable method for coming to consensus on these kinds of religious questions, we should think a problem like "reduce the probability people will go to hell" — even if the risk level weren't pascalian — is entirely intractable.

What a belief implies about what someone does depends on many other things, like their other beliefs and their options in the world. If, e.g., there are more opportunities to work on x-risk reduction than s-risk reduction, then it might be true that optimistic longtermists are less likely than pessimistic longtermists to form families (because they're more focused on work).

>Having clarified that, do you really not find optimistic longtermism more evolutionarily adaptive than pessimistic longtermism?

As my answer made clear, the point I really want to emphasise is that this feels like an absurd exercise — there's no reason to believe that longtermist beliefs are heritable or selected for in our ancestral environment. 

Yes, I do think this: "Not optimistic longtermism is at least just as evolutionarily debunkable as optimistic longtermism."

That's what I think our prior should be, and generally we shouldn't accept evolutionary debunking arguments for moral beliefs unless there are actual findings in evolutionary psychology that suggest evolutionary pressure is the best explanation for them. I think it's indeed trivially easy to come up with some story for why any given belief is subject to evolutionary debunking, but these stories are so easy to come up with that they provide essentially no meaningful evidence that the debunking is warranted, unless further substantiated.

E.g., I think the claim that pessimistic longtermism is evolutionarily selected for, because it would cause people to care more about their own families and kin than about far-off generations, is at least as plausible as your claim about optimistic longtermism. Or we might think agnostic longtermism is selected for, because we're cognitive misers and thinking about the long-term future is too intensive and not decision relevant to be selected for. In fact, I think none of these claims is very plausible at all, because I don't think it's likely evolution is selecting for these kinds of beliefs at this level of detail.

My argument about neutrality toward creating lives also counts against your claim, because if it were true that there was evolutionary pressure toward pro-natalist, optimistic longtermism, I would predict that intuitions of neutrality about creating future lives wouldn't be so prevalent. But they are prevalent, so this is another reason I don't think your claim is plausible.

>I mean... it's quite easy. There were people who, for some reason, were optimistic regarding the long-term future of humanity and they had more children than others (and maybe a stronger survival drive), all else equal. The claim that there exists such a selection effect seems trivially true.

I agree that you can construct hypothetical scenarios in which a given trait is selected for (though even then you have to postulate that it's heritable, which you didn't specify here). But your claim is not trivially true, and it does not establish that optimism regarding the long-term future of humanity has in fact been selected for in human evolutionary history. Other beliefs that are more plausibly susceptible to evolutionary debunking include the idea that we have special obligations to our family members, since these are likely connected to kinship ties that have been widely studied across many species.

So I think a key crux between us is on the question: what does it take for a belief to be vulnerable to evolutionary debunking? My view is that it should actually be established in the field of evolutionary psychology that the belief is best explained as the direct[1] product of our evolutionary history. (Even then, as I think you agree, that doesn't falsify the belief, but it gives us reason to be suspicious of it.)

I asked ChatGPT how evolutionary psychologists typically try to show that a psychological trait was selected for. Here was its answer:

>Evolutionary psychologists aim to show that a psychological trait is a product of selection by demonstrating that it likely solved adaptive problems in our ancestral environment. They look for traits that are universal across cultures, appear reliably during development, and show efficiency and specificity in addressing evolutionary challenges. Evidence from comparative studies with other species, heritability data, and cost-benefit analyses related to reproductive success also support such claims. Altogether, these approaches help build a case that the trait was shaped by natural or sexual selection rather than by learning or cultural influence alone.

I think you might say that you don't have to show that a belief is best explained by evolutionary pressure, just that there's some selection for it. In fact, I don't think you've even done that (because, e.g., you'd have to show that it's heritable). But I also think that's not nearly enough, because "some evolutionary pressure toward belief X" is a claim we can likely make about any belief at all. (E.g., pessimism about the future can be very valuable, because it can make you aware of potential dangers that optimists would miss.)

Also, in response to this:

>On person-affecting beliefs: The vast majority of people holding these are not longtermists to begin with. What we should be wondering is "to the extent that we have intuitions about what is best for the long-term (and care about this), where do these intuitions come from?". Non-longtermist beliefs are irrelevant, here. Hopefully, this also addresses your last bullet point.

I'm not sure why you think non-longtermist beliefs are irrelevant. Your claim is that optimistic longtermist beliefs are vulnerable to evolutionary debunking. But that would only be true if they were plausibly a product of evolutionary pressures, and such pressures should show up across the populations that have been subject to selection; otherwise the beliefs are not a product of our evolutionary history. And so evidence of what humans generally are prone to believe seems highly relevant. The fact that many people, perhaps most, are pre-theoretically disposed toward views that push away from optimistic longtermism and pro-natalism casts further doubt on the claim that the intuitions pushing people toward optimistic longtermism and pro-natalism have been selected for.

  1. ^

    I used "direct" here because, in some sense, all of our beliefs are the product of our evolutionary history.

I don't think it's plausible that optimistic longtermism is vulnerable to evolutionary debunking, because:

  • I've seen people reason themselves into and out of pro-natalist and anti-natalist stances, often using mathematical reasoning. I haven't seen any reason to believe that the pro-natalists' reasoning in particular is succumbing to evolutionary pressure.
  • You can tell a story about pro-natalist beliefs having evolutionary advantages, of course, but that's not actually establishing a fact of evolutionary psychology. There are many such stories that sound plausible, and they can often be contradictory.
  • Person-affecting beliefs, and neutrality about creating positive lives, often reflect deeply held intuitions shared by many people that are hard to square with the idea that there's strong evolutionary pressure toward intuitive pro-natalism. Indeed, my experience in philosophy is that these views are treated as the intuitive positions that need to be defended from the un-intuitive arguments for longtermism.
  • I think it's in fact more plausible that evolution selected for people who tend to have sex (that happens to be procreative) and want to care for children than that it selected for the intuitions people rely on when they reason impartially about population ethics.

I think if you were to turn this into an academic paper, I'd be interested to see if you could defend the claim that pro-natalist beliefs have been selected for in human evolutionary history.

Hi Rebecca,

Thanks for the question!

We did consider this as an option, and it's possible there are some versions of this we could do in the future, but it's not part of our next steps at the moment. The basic reason is that this new strategic approach is the continuation of the direction 80k has been going for many years, so there’s not a segment of 80k with a separate focus to “spin off” into a new entity.
