I think you can travel to another country to donate eggs there. In general, I think you get paid more in other countries if you are of certain demographics.
There is a trap that consequentialists can easily fall into, and the author describes it beautifully in this post. I think the solution within consequentialism is to see that consequentialism doesn't recommend that we only praise the highest achievers. Praise and blame are only justified within consequentialism when they produce good consequences, and it's beneficial to praise a wide variety of people, most especially people who are trying their hardest to improve the world.

For a fuller-spectrum account of what it is to live a moral life, you can add 'virtue consequentialism' to your consequentialism. This position is just the observation that, within consequentialism, virtues can be defined as character traits that lead to good consequences, and it's useful to cultivate them.
I've been in the community since about 2011, and I've also noticed this happening in myself and quite a few others who have been in the community for a long time. I'm not aware of any data on the subject. Denise's explanation of this and this post sound right to me.
I came to the hotel as I was finishing a contract for Rethink Priorities, worked for them there for one month, then did independent research. Now I am employed at an EA org again, and I am paying cost price.
I agree that sentience, at least as we've defined it, is an all-or-nothing phenomenon (which is a common view in philosophy but not as common in neuroscience).
What do you think of the argument that there may be cases where it's unclear whether the term is appropriate or not? There would then be a grey area where there is a "sort of" sentience. I've talked to some people who think that this grey area might be taxonomically large, including most invertebrates.
Yeah, I meant it to be synonymous with agent.
Do you mainly see these scenarios as likely because you don't think there are likely to be many beings in future worlds, or because you think that the beings that exist in those future worlds are unlikely to be conscious?

I had some thoughts about the second case. I've done some research on consciousness, but I still feel quite lost when it comes to this type of question.

It definitely seems like some machine minds could be conscious (we are basically an existence proof of that), but I don't know how to think about whether a specific architecture would be required. My intuition is that most intelligent architectures other than something like a lookup table would be conscious, but I don't think that intuition is based on anything substantial.

By the way, there is a strange hard sci-fi horror novel called Blindsight that basically "argues" that the future belongs to nonconscious minds and that this scenario is likely.
Thanks!

I personally would disagree that variety of experience is morally relevant. Obviously, most people enjoy variety in their own experiences, but that's already weighed into the total hedonistic utilitarian equation because it makes us happier. So I don't think that we need to add it as a separate thing that has intrinsic moral value. Looking at diversity can also be aesthetically pleasing for us, but that gets weighed into the equation because it makes us happy, and so, again, I don't think we need to say it has intrinsic moral value. I don't think our aesthetic appreciation of biodiversity is a very significant source of happiness, though, compared to the well-being of the much larger number of animals involved.

I think what you said makes sense given that moral position. I haven't heard a name for the position that diversity of experience is intrinsically morally significant, but I have a friend who I think argued for a similar position, and I'll ask him.
Animal Ethics has written about this. Here are some of our relevant posts on the subject. Hopefully they are helpful.

https://www.animal-ethics.org/sentience-section/relevance-of-sentience/why-we-should-consider-sentient-beings-rather-than-ecosystems/
https://www.animal-ethics.org/sentience-section/relevance-of-sentience/why-we-should-consider-individuals-rather-than-species/
https://www.animal-ethics.org/give-moral-consideration-sentient-beings-rather-living-beings/
Imagine you heard about an alien civilization that was poised to colonize the stars. But most of these aliens were given almost no moral consideration, and some of them were raised in inhumane conditions to be killed for trivial reasons by the other aliens. If I heard about this situation, I would be pretty concerned about what the aliens would do when they started colonizing the stars. I wouldn't be rooting for them by trying to prevent their existential risk instead of trying to improve their values. But of course, that's a description of our society. There are some additional details about our society that make me more hopeful about it, but it seems quite weird to say that improving our values in this way wouldn't be important.