antimonyanthony

I'm Anthony DiGiovanni, a suffering-focused AI safety researcher. I (occasionally) write about altruism-relevant topics on my blog, Ataraxia.


Comments

What would you do if you had half a million dollars?

This is easy for me to say as someone who agrees with these donation recommendations, but I find it disappointing that this comment apparently has gotten several downvotes. The comment calls attention to a neglected segment of longtermist causes, and briefly discusses what sorts of considerations would lead you to prioritize those causes. Seems like a useful contribution.

What would you do if you had half a million dollars?

Also, considering extinction specifically, Will MacAskill has made the argument that we should avert human extinction based on option value even if we think extinction might be best. Basically even if we avert extinction now, we can in theory go extinct later on if we judge that to be the best option.

Note that this post (written by people who agree that reducing extinction risk is good) provides a critique of the option value argument.

A longtermist critique of “The expected value of extinction risk reduction is positive”

I'm not sure I get why "Singletons about non-life-maximizing values are also convergent", though.

Sorry, I wrote that point lazily because that whole list was supposed to be rather speculative. It should be "Singletons about non-life-maximizing values could also be convergent." I think that if a technologically advanced species doesn't go extinct, the same sorts of forces that allow some human institutions to persist for millennia (religions are probably the best example), combined with goal-preserving AIs, would make the emergence of a singleton fairly likely - though I'm not very confident in this, and I think #2 is the weakest argument. Bostrom's "The Future of Human Evolution" touches on similar points.

A longtermist critique of “The expected value of extinction risk reduction is positive”

What I mean is closest to #1, except that B has some beings who only experience disvalue and that disvalue is arbitrarily large. Their lives are pure suffering. This is in a sense weaker than the procreation asymmetry, because someone could agree with the PDP but still think it's okay to create beings whose lives have a lot of disvalue as long as their lives also have a greater amount of value. Does that clarify? Maybe I should add rectangle diagrams. :)

A longtermist critique of “The expected value of extinction risk reduction is positive”

That sounds reasonable to me, and I'm also surprised I haven't seen that argument elsewhere. The most plausible counterarguments off the top of my head are:

1) Maybe evolution just can't produce beings with that strong of a proximal objective of life-maximization, so the emergence of values that aren't proximally about life-maximization (as with humans) is convergent.

2) Singletons about non-life-maximizing values are also convergent, perhaps because intelligence produces optimization power, so it's easier for such values to gain sway even though they aren't life-maximizing.

3) Even if your conclusion is correct, this might not speak in favor of human space colonization anyway, for the reason Michael St. Jules mentions in another comment: that more suffering would result from fighting those aliens.

People working on x-risks: what emotionally motivates you?

all the work done by other EAs in other causes would be for naught if we end up becoming extinct

I've seen this argument elsewhere, and still don't find it convincing. "All" seems hyperbolic. Much longtermist work to improve the quality of posthumans' lives does become irrelevant if there won't be any posthumans. But work on animal welfare, poverty reduction, mental health, and probably some other causes I'm forgetting will still have made an important (if admittedly smaller-scale) difference by relieving its beneficiaries' suffering.

Shallow evaluations of longtermist organizations

I think I learned a lot while I was there, and I think the other summer research fellows whose views I have a sense of felt the same

+1. I'd say that applying for and participating in their fellowship was probably the best career decision I've made so far. Maybe 60-70% of this was due to the benefits of entering a network of people whose altruistic efforts I greatly respect; the rest was the direct value of the fellowship itself. (I haven't thought a lot about this point, but on a gut level it seems like the right breakdown.)

Exploring a Logarithmic Tolerance of Suffering

Personally, I still wouldn't consider it ethically acceptable to, say, create a being experiencing a -100-intensity torturous life provided that a life with exp(100)-intensity happiness is also created, even after trying hard to account for possible scope neglect. Going from linear to log here doesn't seem to address the fundamental asymmetry. But I appreciate this post, and I suspect quite a few longtermists who don't find stronger suffering-focused views compelling would be sympathetic to a view like this one - and the implications for prioritizing s-risks versus extinction risks seem significant.
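To make the comparison concrete, here is my own sketch of the offsetting condition the logarithmic proposal seems to imply (an assumption on my part, not necessarily the post's exact formulation): on a linear view, suffering of intensity $s$ is offset by happiness of intensity $h \geq s$, whereas the logarithmic tolerance requires

$$\log h \geq s \quad\Longleftrightarrow\quad h \geq e^{s},$$

so offsetting $s = 100$ calls for $h \geq e^{100}$, the exp(100) figure above. My worry is that no finite $h$ seems sufficient, not that the required $h$ is merely too small.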

Spears & Budolfson, 'Repugnant conclusions'

But of course the A and Z populations are already impossible, because we already have present and past lives that aren't perfectly equal and aren't all worth living. So--even setting aside possible boundedness on the number of lives--the RC has always fundamentally been about comparing undeniably impossible populations

I don't find this a compelling response to Guillaume's objection. There seems to be a philosophically relevant difference between physical impossibility of the populations and metaphysical impossibility of the axiological objects. We study population ethics because we expect our decisions about the trajectory of the long-term future to approximate the decisions involved in these thought experiments. So the point is that NU would not prescribe actions with the general structure of "choose a future with arbitrarily many torturous lives and a sufficiently large number of lives with slightly more happiness than suffering [regardless of whether we call these positive-utility lives], over a future with arbitrarily many perfectly happy lives," but these other axiologies would. (ETA: As Michael noted, there are other intuitively unpalatable actions that NU would prescribe too. But the whole message of this paper is that we need to distinguish between degrees of repugnance to make progress, and for some, the VRC is more repugnant than the conclusions of NU.)

How to PhD

You will find yourself justifying the stupidest shit on impact grounds, and/or pursuing projects which directly make the world worse.

Could you be a bit more specific about this point? This sounds very field-dependent.
