> I think the claim that pessimistic longtermism is evolutionarily selected for, because it would cause people to care more about their own families and kin than about far-off generations
Wait, sorry, what? No, it would cause people to work on making the future smaller or reducing s-risks or something. Pessimistic longtermists are still longtermists. They do care about far-off generations. They just think it's ideally better if those generations don't exist.[1]
Having clarified that, do you really not find optimistic longtermism more evolutionarily adaptive than pessimistic longtermism? (Let's forget about agnosticism here, for simplicity.) I mean, the former says "save humanity and increase population size" and the latter says the exact opposite. I find it hard not to think that the former favors survival and reproduction more than the latter, all else equal, such that it is more likely to be selected for.
Is it just that we had different definitions of pessimistic longtermism in mind? (I should have been clearer, sorry.)
And btw, this is not necessarily due to them making different moral assumptions than optimistic longtermists do. The disagreement might be purely empirical.
I'm not sure why you think non-longtermist beliefs are irrelevant.
Nice. That's what makes us misunderstand each other, I think. (This is crucial to my point.)
Many people have no beliefs about what actions are good or bad for the long-term future (they are clueless or just don't care anyway). But some people do have beliefs about this, and most of them believe x-risk reduction is good in the very long run. The most fundamental question I raise is: where do the beliefs of the latter type of people come from? Why do they hold them instead of holding that x-risk reduction is bad in the very long run, or being agnostic on this particular question?[1] Is it because x-risk reduction is in fact good in the long term (i.e., these people have the capacity to make judgment calls that track the truth on this question), or because of something else?
And then my post considers the potential evolutionary pressure towards optimism vis-a-vis the long-term future of humanity as a candidate for "something else".
So I'm not saying optimistic longtermism is more evolutionarily debunkable than, e.g., partial altruism towards your loved ones. I'm saying it is more evolutionarily debunkable than not optimistic longtermism (i.e., pessimistic longtermism OR agnosticism on how to feel about the long-term future of humanity). Actually, I'm not even really saying that; I merely think it, and that is why I chose to discuss an EDA against optimistic longtermism specifically.
So if you want to disagree with me, you have to argue that:
A) Not optimistic longtermism is at least as evolutionarily debunkable as optimistic longtermism, and/or
B) Optimistic longtermism is better explained by the possibility that our judgment calls vis-a-vis the long-term value of x-risk reduction track the truth than by something else.
Does that make sense?
So I'm interested in optimistic longtermism vs not optimistic longtermism (i.e., pessimistic longtermism OR agnosticism on the long-term value of x-risk reduction). Beliefs that the long-term future doesn't matter, or the like, are irrelevant here.
Oh interesting.
> I don't think there's any neutral way to establish whose starting points are more intrinsically credible.
So do I have any good reason to favor my starting points (/judgment calls) over yours, then? Whether to keep mine or to adopt yours becomes an arbitrary choice, no?
Imagine you and I have laid out all the possible considerations for and against reducing x-risks and still disagree (because we make different opaque judgment calls when weighing these considerations against one another). Then, do you agree that we have nothing left to discuss other than whether any of our judgment calls correlate with the truth?
(This, on its own, doesn't prove anything about whether EDAs can ever help us; I'm just trying to pin down which assumption I'm making that you don't, or vice versa.)
Re (1): I mean, say we know that the reason Alice is a pro-natalist is 100% the mere fact that this belief was evolutionarily advantageous for her ancestors (and 0% good philosophical reasoning). This would discredit her belief, right? It wouldn't mean pro-natalism is incorrect. It would just mean that if it is correct, it is for reasons that have nothing to do with what led Alice to endorse it. She just happened to luckily be "right for the wrong reasons". Do you at least agree with this in this particular contrived example, or do you think that evolutionary pressures can never be a reason to question our beliefs?
(Waiting for your answer on this before potentially responding to the rest as I think this will help us pin down the crux.)
Do you happen to be aware of any?