Omnizoid

75 · Joined Dec 2021

Comments (30)

Hi, I have a blog where I talk about lots of issues related to EA and utilitarianism -- here it is if anyone's interested: https://benthams.substack.com/

Interesting.  I'm inclined to say a world where you save lives would decrease existential risks -- more time to focus on AI alignment and combating global warming when there is less death.  But generally, I think we should be really, really worried when speculative moral philosophy tells us to ignore drowning children.

That's unclear.  However, we often conclude that something is beneficial before we know the mechanism.  

That's true of your current intuitions, but I care about what we would care about if we were fully rational and informed.  If there were bliss so good that ten minutes of it would be worth ten minutes of horrific torture, it seems that creating this bliss for ungodly numbers of sentient beings is quite an important ethical priority.

Yeah, to some degree I have egalitarian intuitions pre-reflection, along with some other minor non-utilitarian intuitions.

I don't have the intuition that scientific discoveries are valuable independent of their use for sentient beings.  

When I reflect on the nature of torture, it seems obvious that it's very bad.  But I'm not sure how, from reflection on the experience alone, we can conclude that there's no amount of positive bliss that could ever outweigh it.  We literally can't conceive of how good transhuman bliss might be, and any attempt to add up trillions of minor positive experiences seems very sensitive to scope neglect.

I get why that would appeal to a positive utilitarian, but I'm not sure why it would be relevant to a negative utilitarian's view.  Also, we could make it so that this only applies to babies who died before turning two, so they don't have sophisticated preferences about a net-positive QOL.

Hmm, this may just be a completely different intuition about suffering versus well-being.  To me it seems obvious that an end-of-life pinprick in exchange for ungodly amounts of transhuman bliss would be worth it.  Even updating on the intuitions of negative utilitarians, I still conclude that the amount of future transhuman bliss would outweigh the suffering of the future.

Side note: I really enjoy your blog and have cited you a bunch in high school debate.
