All of Ahrenbach's Comments + Replies

Direct work vs. earn-to-give

Hi,

I don’t have a solid answer to your question, but I do have a suggestion for a way to distinguish which health tech companies are impactful. If a company produces technology that primarily benefits wealthy Western people, its direct impact probably does not outweigh the good that could be done by earning more and donating to groups that directly improve the health of the global poor.

It sounds like you might have a comparative advantage in health tech, though, one that would enable you to do a lot of good working for a health tech organization that produces technology benefiting the global poor. Just a distinction to consider.

warrenjordan (1y): Yeah, that's another option I can build career capital for. Most of the health tech jobs in the market are aimed at US healthcare problems, which is where my experience is. A goal, one day, would be to work for or start a global health non-profit that leverages technology to scale a validated health intervention (e.g., a tech-enabled AMF). But to build that career capital, I don't think I need to work in healthcare my entire career. It'd probably be better to build that capital across a breadth of industries. That's an assumption, though, and I need to network with others to confirm it.
Does Utilitarian Longtermism Imply Directed Panspermia?

Yes, I think I messed up the Parfit-style argument here. Perhaps the only relevant cases are A, B, and D, because I’m supposing we fail to reach the Long Reflection and asking what the best history line is on utilitarian Longtermist grounds.

If we conclude from this that a biotic hedge is justified on those grounds, then the question, as edcon said, is what its priority would be relative to directly preventing existential risks.

Does Utilitarian Longtermism Imply Directed Panspermia?

Great job identifying some relevant uncertainties to investigate. I will think about that some more.

My goal here is not so much to resolve the question of “should we prepare a biotic hedge?” but rather “does utilitarian Longtermism imply that we should prepare it now, and if faced with a certain threshold of confidence that existential catastrophe is imminent, deploy it?” So I am comfortable not addressing the moral uncertainty arguments against the idea for now. If I become confident that utilitarian Longtermism does imply that we should, I would examine …
Denis Drescher (1y): Oh yeah, I was also talking about it only from utilitarian perspectives. (Except for one aside: “Others again refuse it on deontological or lexical grounds that I also empathize with.”) It's just that utilitarianism doesn't prescribe an exchange rate between the intensity/energy expenditure/… of individual positive experiences and individual negative experiences. Yes, I hope they do. :-) Sorry for responding so briefly! I'm falling behind on some reading.
Does Utilitarian Longtermism Imply Directed Panspermia?

Thanks for the thoughtful response! I think you do a good job identifying the downsides of directed panspermia. However, in my description of the problem, I want to draw your attention to two claims drawn from Ord’s broader argument.

First, the premise that there is roughly a 1/6 probability that humanity does not successfully navigate through the Precipice and reach the Long Reflection. Second, the fact that, for all we know, we might be the universe’s only chance at intelligent flourishing.

My question is whether there is an implication here that directed panspe…
MichaelA (1y): I'd definitely much prefer that approach to just aiming to actually implement directed panspermia ASAP. Though I'm still very unsure whether directed panspermia would even be good in expectation, and I doubt it should be near the top of a longtermist's list of priorities, for the reasons given in my main answer.

I just wanted to highlight that passage because I think it relates to a general category of (or approach to) x-risk intervention which we might call "Developing, but not deploying, drastic backup plans", or just "Drastic Plan Bs". (Or, to be nerdier, "Preparing saving throws".) I noticed that as a general category of intervention when reading endnote 92 in Chapter 4 of The Precipice. I'd be interested in someone naming this general approach, exploring its general pros and cons, and exploring examples of it.
Denis Drescher (1y): I think I'm not well placed to answer that at this point and would rather defer to someone who has thought about this more than I have, from the vantage points of many ethical theories rather than just from my (or their) own. (I try, but this issue has never been a priority for me.) Then again, this is a good exercise for me in moral perspective-taking, or whatever it's called. ^^ In the previous reply I tried to give broadly applicable reasons to be careful about it, but those were mostly just from The Precipice.

The reason is that if I ask myself, e.g., how long I would be willing to endure extreme torture to gain ten years of ultimate bliss (apparently a popular thought experiment), I might be ready to invest a few seconds if any, for a tradeoff ratio of 1e7 or 1e8 to 1. So from my vantage point, the r-strategist style “procreation” is very disvaluable. It seems like it may well be disvaluable in expectation, but either way, it seems like an enormous cost to bear for a highly uncertain payoff. I'm much more comfortable with careful, K-strategist “procreation” at the species level. (Magnus Vinding (https://magnusvinding.com/home/) has a great book coming out soon that covers this problem in detail.)

But assuming the agnostic position again, for practice: I suppose A and C are clear cut. C is overwhelmingly good (assuming the Long Reflection works out well and we successfully maximize what we really terminally care about, but I suppose that's your assumption), and A is sort of clear because we know roughly (though not very viscerally) how much disvalue our ancestors have paid forward over the past millions of years so that we can hopefully eventually create a utopia. But B is wide open. It may go much more negative than A even considering all our past generations – suffering risks, dystopian-totalitarian lock-ins, permanent prehistoric lock-ins, etc. The less certain it is, the more of this disvalue we'd have to pay forward to get one utopia out of it. And it may also go
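As a rough illustration of the exchange-rate arithmetic in the reply above: taking ten years of ultimate bliss and the 1e7 or 1e8 to 1 tradeoff ratios mentioned there, the acceptable duration of extreme torture works out to a few tens of seconds at most. A minimal sketch, with the comparison assumed to be second-for-second weighted by the ratio (the function name and setup are illustrative, not from the comment):

```python
# Rough arithmetic behind the torture-vs-bliss thought experiment above.
# Assumes experiences are compared second-for-second, weighted by the tradeoff ratio.

SECONDS_PER_YEAR = 365.25 * 24 * 3600

def acceptable_torture_seconds(bliss_years: float, tradeoff_ratio: float) -> float:
    """Seconds of extreme torture 'worth' the given years of ultimate bliss,
    if one second of torture outweighs `tradeoff_ratio` seconds of bliss."""
    return bliss_years * SECONDS_PER_YEAR / tradeoff_ratio

for ratio in (1e7, 1e8):
    secs = acceptable_torture_seconds(10, ratio)
    print(f"ratio {ratio:.0e} to 1 -> about {secs:.0f} seconds of torture")
# ratio 1e+07 to 1 -> about 32 seconds of torture
# ratio 1e+08 to 1 -> about 3 seconds of torture
```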
edcon (1y): I think that, empirically, the effort to prepare the biotic hedge is likely to be expensive in terms of resources and influence, as I suspect a lot of people would be strongly averse to directed panspermia, since it would likely be negative under some forms of negative utilitarianism and other value systems. So it would be better for the long-term future to reduce existential risk specifically. I think SETI-type searches are different, as you have to consider the negative effects of contact on current civilisation. Nice piece from Paul Christiano: https://sideways-view.com/2018/03/23/on-seti/
Does Utilitarian Longtermism Imply Directed Panspermia?

Based on NASA’s extensive planetary protection efforts to prevent interplanetary contamination of the explored worlds, I think it is plausible now. https://en.m.wikipedia.org/wiki/Planetary_protection

edcon (1y): Take the scenario where there was a directed panspermia mission towards Europa containing a range of organisms up to the complexity of a simple fish, plus a range of species picked to be adapted to the environment they are going to and to form a self-sustaining ecosystem, and they successfully colonise it.

You would have to consider the probabilities of where the Great Filter is. If the Great Filter is before this level of complexity, then panspermia would be good, provided you think that on balance the whole space of possible civilisations is net positive. However, in the case that the Great Filter is after this – for example, if going from great-ape-level intelligence to humans requires very specific evolutionary incentives and is unlikely to be passed – then you could have a very high chance of something similar in value to the 'wild' animal population and only a low probability of a human-level civilisation. If you place the value of a human-level civilisation many orders of magnitude above the (possible) negative welfare, then the argument could go through as being positive EV even if you place a low probability on going from a small-fish ecosystem to a human-level civilisation.
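As a rough sketch of the expected-value comparison described above: with hypothetical placeholder numbers for the chance of eventually reaching a human-level civilisation and for the relative magnitudes of value (none of these figures come from the comment; they only illustrate the "many orders of magnitude" structure of the argument), the calculation looks like this:

```python
# Toy expected-value comparison for the Europa seeding scenario above.
# All numbers are illustrative placeholders, not estimates from the thread.

p_reach_civilisation = 1e-4   # assumed chance the seeded ecosystem eventually yields a human-level civilisation
value_civilisation = 1e9      # assumed value of a human-level civilisation (arbitrary units)
value_wild_ecosystem = -1e3   # assumed (possibly negative) net welfare of a persistent 'wild' ecosystem

expected_value = (
    p_reach_civilisation * value_civilisation
    + (1 - p_reach_civilisation) * value_wild_ecosystem
)
print(f"expected value of the mission: {expected_value:,.0f}")  # ~99,000 with these placeholders

# With these numbers the civilisation term (1e-4 * 1e9 = 1e5) dominates the wild-animal
# term (~ -1e3), so the argument 'goes through'; shrink the probability or the value gap
# by a few orders of magnitude and the sign flips.
```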