
Ahrenbach

12 karma · Joined Apr 2020

Comments: 8

Am I to understand that the standard longtermist reply is to bite the bullet here?

I wonder if we can few-shot our way there by fine-tuning on Kagan’s “Death Course”, Parfit, and David Lewis. Edit: also the SEP?

Thanks. I might put together a response. Time to quadruple down on Parfit and argue from the vast multitudes of potential sentient evolved beings denied existence through inaction.

Hi,

I don’t have a solid answer to your question, but I do have a suggestion for a way to distinguish among health tech companies by impact. If a health tech company produces technology that primarily benefits wealthy Western people, its direct impact probably does not outweigh the good that could be done by earning more and donating to groups that directly improve the health of the global poor.

It sounds like you might have a comparative advantage in health tech, though, that would enable you to do a lot of good working for an organization whose technology benefits the global poor. Just a distinction to consider.

Yes, I think I messed up the Parfit-style argument here. Perhaps the only relevant cases are A, B, and D, because I’m supposing we fail to reach the Long Reflection and asking what the best history line is on utilitarian longtermist grounds.

If we conclude from this that a biotic hedge is justified on those grounds, then the question, as edcon said, would be its priority relative to directly preventing existential risks.

Great job identifying some relevant uncertainties to investigate. I will think about that some more.

My goal here is not so much to resolve the question of “should we prepare a biotic hedge?” but rather “does utilitarian longtermism imply that we should prepare it now and, if faced with a certain threshold of confidence that existential catastrophe is imminent, deploy it?” So I am comfortable not addressing the moral uncertainty arguments against the idea for now. If I become confident that utilitarian longtermism does imply that we should, I would examine how other normative theories might come down on the question.

Me: “A neglected case above is one in which weapon X destroys life on Earth, Earth engages in directed panspermia, but there was already life elsewhere in the universe unbeknownst to Earth. I think we agree that B is superior to this case, and therefore the difference between B and A is greater. The question is whether the difference between this case and C surpasses that between A and B. Call it D. Is D so much worse than C that a preferred loss is from B to A? I don’t think so.”

You: “Hmm, I don’t quite follow… Does the above change the relative order of preference for you, and if so, to which order?”

No, it would not change the relative order of A, B, and C. The total order (including D) for me would be C > B > D > A, where |v(B) - v(A)| > |v(C) - v(D)|.

I was trying to make a Parfit-style argument that A is so very bad that spending significant resources now to hedge against it is justified. Given that we fail to reach the Long Reflection, it is vastly preferable that we engage in a biotic hedge. I did a bad job of laying it out, and based on your response it seems that reasonable people think the outcome of B might actually be worse than A.

Thanks for the thoughtful response! I think you do a good job identifying the downsides of directed panspermia. However, in my description of the problem, I want to draw your attention to two claims drawn from Ord’s broader argument.

First, the premise that there is roughly a 1-in-6 probability that humanity does not successfully navigate through the Precipice and reach the Long Reflection. Second, the fact that, for all we know, we might be the universe’s only chance at intelligent flourishing.

My question is whether there is an implication here that directed panspermia is a warranted biotic hedge during the Precipice phase, perhaps prepared now and only acted on if the odds of existential catastrophe increase. If we make it to the Long Reflection, I’m in total agreement that we should not rapidly engage in directed panspermia. However, for the sake of increasing the universe’s chance of having some intelligent flourishing, perhaps a biotic hedge should at least be prepared now, to be executed when things look especially dire. But at what point would it be justified?

I think this reasoning is exactly the same as the utilitarian longtermist argument that we should invest more resources now in addressing existential risk, especially Parfit’s argument for the value of potential future persons.

Assume three cases:

A. All life in the universe is ended because weapon X is deployed on Earth.
B. All life on Earth is ended by weapon X, but life is preserved in the universe because of Earth’s directed panspermia.
C. Earth-originating life makes it through the Precipice and flourishes in the cosmic endowment for billions of years.

It seems C > B > A, with the difference between A and B greater than the difference between B and C.

A neglected case above is one in which weapon X destroys life on Earth, Earth engages in directed panspermia, but there was already life elsewhere in the universe unbeknownst to Earth. I think we agree that B is superior to this case, and therefore the difference between B and A is greater. The question is whether the difference between this case and C surpasses that between A and B. Call it D. Is D so much worse than C that a preferred loss is from B to A? I don’t think so.

So I guess the implied position would be that we should prepare a biotic hedge in case things get especially dire, and invest more in SETI-type searches. And if we knew that life exists elsewhere in the universe, we would not need to deploy the biotic hedge?

Based on NASA’s extensive planetary protection efforts to prevent interplanetary contamination of explored worlds, I think it is plausible now. https://en.m.wikipedia.org/wiki/Planetary_protection