Jim Buhler

Phil PhD candidate @ University of Santiago de Compostela
599 karma · Working (0-5 years) · Pursuing a doctoral degree (e.g. PhD) · Paris, France
www.jimbuhler.site

Sequences

On Cluelessness
What values will control the Future?

Comments

Oh ok, so our disagreement is on whether concern for the long-term future needs to be selected for in order for evolution to "directly" (in the same sense you used it earlier) influence longtermists' beliefs about the value of X-risk reduction and making the future bigger, right?

> I think the claim that pessimistic longtermism is evolutionarily selected for, because it would cause people to care more about their own families and kin than about far-off generations

Wait sorry, what? No, it would cause people to work on making the future smaller or to reduce s-risks or something. Pessimistic longtermists are still longtermists. They do care about far-off generations. They just think it's ideally better if those generations don't exist.[1]

Having clarified that, do you really not find optimistic longtermism more evolutionarily adaptive than pessimistic longtermism? (Let's forget about agnosticism, here, for simplicity). I mean, the former says "save humanity and increase population size" and the latter says the exact opposite. I find it hard not to think the former favors survival and reproduction more than the latter, all else equal, such that it is more likely to be selected for.

Is it just that we had different definitions of pessimistic longtermism in mind? (I should have been clearer, sorry.)

  1. ^

    And btw, this is not necessarily due to them making different moral assumptions than optimistic longtermists. The disagreement might be purely empirical.

> I'm not sure why you think non-longtermist beliefs are irrelevant.

Nice. I think this is what's making us misunderstand each other. (It's crucial to my point.)

Many people have no beliefs about what actions are good or bad for the long-term future (they are clueless or just don't care anyway). But some people do have beliefs about this, most of whom believe X-risk reduction is good in the very long run. The most fundamental question I raise is: where do the beliefs of the latter type of people come from? Why do they hold them instead of holding that X-risk reduction is bad in the very long run, or being agnostic on this particular question?[1] Is it because X-risk reduction is in fact good in the long term (i.e., these people have the capacity to make judgment calls that track the truth on this question) or because of something else?

And then my post considers the potential evolutionary pressure towards optimism vis-a-vis the long-term future of humanity as a candidate for "something else". 

So I'm not saying optimistic longtermism is more evolutionarily debunkable than, e.g., partial altruism towards your loved ones. I'm saying it is more evolutionarily debunkable than not optimistic longtermism (i.e., pessimistic longtermism OR agnosticism on how to feel about the long-term future of humanity). Actually, I'm not even really saying that; but I do think it, and this is why I chose to discuss an EDA against optimistic longtermism, specifically.

So if you want to disagree with me, you have to argue that:
A) Not optimistic longtermism is at least just as evolutionarily debunkable as optimistic longtermism, and/or
B) Optimistic longtermism is better explained by the possibility that our judgment calls vis-a-vis the long-term value of X-risk reduction track the truth than by something else.

Does that make sense?

  1. ^

    So I'm interested in optimistic longtermism vs not optimistic longtermism (i.e., pessimistic longtermism OR agnosticism on the long-term value of X-risk reduction). Beliefs that the long-term future doesn't matter or something are irrelevant here.

Oh interesting.

> I don't think there's any neutral way to establish whose starting points are more intrinsically credible.

So do I have any good reason to favor my starting points (/judgment calls) over yours, then? Whether to keep mine or to adopt yours becomes an arbitrary choice, no?

Imagine you and I have laid out all the possible considerations for and against reducing X-risks and still disagree (because we make different opaque judgment calls when weighing these considerations against one another). Then, do you agree that we have nothing left to discuss other than whether any of our judgment calls correlate with the truth? 

(This, on its own, doesn't prove anything about whether EDAs can ever help us; I'm just trying to pin down which assumption I'm making that you don't or vice versa).

Re (1): I mean, say we know that the reason why Alice is a pro-natalist is 100% due to the mere fact that this belief was evolutionarily advantageous for her ancestors (and 0% due to good philosophical reasoning). This would discredit her belief, right? This wouldn't mean pro-natalism is incorrect. It would just mean that if it is correct, it is for reasons that have nothing to do with what led Alice to endorse it. She just happened to luckily be "right for the wrong reasons". Do you at least agree with this in this particular contrived example, or do you think that evolutionary pressures can never be a reason to question our beliefs?

(Waiting for your answer on this before potentially responding to the rest as I think this will help us pin down the crux.)

Ah nice, thanks for these points, Cody.

> I'd be interested to see if you could defend the claim that pro-natalist beliefs have been selected for in human evolutionary history.

I mean... it's quite easy. There were people who, for some reason, were optimistic regarding the long-term future of humanity, and they had more children than others (and maybe a stronger survival drive), all else equal. The claim that there exists such a selection effect seems trivially true. The real question is how strong it is relative to, e.g., a potential indirect selection toward truth-tracking longtermist beliefs. I.e., the EDA argument against optimistic longtermism seems trivially valid. The question is how strong it is relative to other arguments. (And I'd really like for my potential paper to make progress on this, yeah!)

(Hopefully, the above also addresses your second bullet point.)

Now, you give potential reasons to believe the EDA is weak (thanks for that!): 

> I've seen people reason themselves into and out of pro-natalist and anti-natalist stances, often using mathematical reasoning. I haven't seen any reason to believe that the pro-natalists' reasoning in particular is succumbing to evolutionary pressure.

You can't reason yourself into or out of something like optimistic longtermism just using math; you need to make so many subjective judgment calls. And the fact that you can reason yourself out of a belief does not mean that there weren't evolutionary pressures toward this belief. Fair enough, it does mean the evolutionary pressure was at least not overwhelmingly strong, but I don't think anyone was contesting that. You can say this about absolutely all evolutionary pressures on normative and empirical beliefs: I don't think any of them is so strong that we can't reason ourselves out of it. But this doesn't mean our beliefs can't have suspicious origins.

On person-affecting beliefs: the vast majority of people holding these are not longtermists to begin with. What we should be wondering is "to the extent that we have intuitions about what is best for the long term (and care about this), where do these intuitions come from?". Non-longtermist beliefs are irrelevant here. Hopefully, this also addresses your last bullet point.

Thanks for engaging with this, Richard!

> To be clear: you're arguing that we should be agnostic (and, more strongly, take others to also be utterly clueless) about whether it would be good or bad for everyone to die?

I think I am making a much weaker claim than this. While I suggest that the EDA argument I raise is valid, I do not argue that it is strong to the point where optimistic longtermism is unwarranted. Also, the argument itself does not say what people should believe if they do not endorse optimistic longtermism (besides cluelessness, another alternative is pessimistic longtermism -- I do not say anything about which is the most appropriate alternative to optimistic longtermism if the EDA argument is strong enough). Sorry if my writing was unclear.

> whether it would be good or bad for everyone to die

Maybe a nitpick, but I find this choice of words quite unfair, as it implicitly appeals to commonsense intuitions that seem to have nothing to do with longtermism (to back your opinion that we know X-risk reduction is good from a longtermist perspective). You do something very similar multiple times in It's Not Wise to be Clueless.

> If you think that, in general, justified belief is incompatible with "judgment calls"

I didn't say that. I said that we ought to wonder whether these judgment calls are reliable, a claim you seem to agree with when you write:

> It's OK - indeed, essential - to make judgment calls, and we should simply try to exercise better rather than worse judgment.

Now, you seem much more convinced than I am that our judgment calls with regard to the long-term value of X-risk reduction come from a reliable source (such as an evolutionary pressure selecting correct longtermist beliefs, whether directly or indirectly) rather than from evolutionary pressures towards pro-natalist beliefs. In It's Not Wise to be Clueless, the justification you provide for something in this vicinity[1] is that we ought to start with the prior that something like X-risk reduction is good, for reasons similar to those for which we should start with the prior that the sun will rise tomorrow. But I think Jesse quite accurately pointed out the disanalogy and the problem with your argument in his comment. Do you have another argument and/or an objection to Jesse's reply that you are happy to share?

  1. ^

    EDIT: actually, not sure this is related. You don't seem to argue that our judgment calls are truth-tracking. You argue that there is a rational requirement to start with a certain prior (i.e., you implicitly suggest that all rational agents should agree with you on X-risk reduction without having to make judgment calls, in fact).

What do you think of the term "pro-natalist longtermism" instead of "optimistic longtermism"? I find the latter (EDIT: former) kinda... pejorative? It feels like an uncharitable framing for some reason, even though it's fairly accurate when you think about it. The reason why longtermists want humanity to remain and the future to be "bigger" is so that more people/beings (whom they expect to be happy in expectation) can exist.

Meanwhile, "optimistic longtermism" feels too charitable as the word "optimistic" puts a positive spin on it.

Let's loosely interpret judgment calls as "hard-to-explain intuitions", as you wrote, for simplicity. I think that's enough here.

For the US 2024 presidential election, there are definitely such judgment calls involved. If one tries to make an evolutionary argument undermining our ability to predict the US 2024 presidential election, P1 holds. P2 visibly doesn't, however, at least for some good predictors; there is empirical evidence against P2. And presumably, the reason why P2 doesn't hold is that people who had decent hard-to-explain intuitions vis-a-vis "where the wind blows" in such socio-political contexts survived better. The same can't be said (at least, not obviously) for forecasting whether making altruistic people more longtermist does more good than harm, considering all the consequences on everything from now until the end of time.

> But I don't see why we would need to end up at 50%

Say you say 53% and Alice says 45%. The two of you can give me all the arguments you want. At the end of the day, you both undeniably made judgment calls when weighing the reasons to believe making altruistic people more longtermist does more good than harm, all things considered, against the reasons to believe the opposite (including reasons, in both cases, that have to do with aliens, acausal reasoning, and how to deal with crucial unknown unknowns). I don't see why I should trust either of your two different judgment-cally "best guesses" more than the other.

In fact, if I can't find a good objection to P2, I have no good reason to trust either of your best guesses any more than a dart-throwing chimp's. If I had an opinion on the (dis)value of making altruistic people more longtermist without having a good reason to reject P2, I'd be blatantly inconsistent.[1]

Do you agree now that we've hopefully clarified what is a judgment call and what isn't, here? (I think P2 is definitely the crux for whether we should be clueless. Defending that we can identify positive longtermist causes without resorting to any sort of hard-to-explain intuitions seems really untenable. And I think there may be better objections to P2 than the ones I address in the post.)


[1] Btw, a bit tangential, but a key popular assumption/finding in the literature on decision-making under deep uncertainty is that "not having an opinion" or "suspending judgment" =/= 50% credence (see this post from DiGiovanni for a nice overview).
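
A rough toy illustration of the difference (my own framing, not necessarily DiGiovanni's): a precise 50% credence that "making altruistic people more longtermist does more good than harm" still licenses definite comparisons, e.g., it tells you to accept a bet that pays 1 if that's true and costs 0.45 (expected value 0.5 - 0.45 = 0.05 > 0). Suspending judgment is better represented by a whole set of credences, say everything in [0.2, 0.8], and relative to that whole set the comparison simply doesn't resolve (the expected value ranges from -0.25 to 0.35). That's the sense in which the clueless agent has no opinion at all rather than a 50/50 one.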
