seanrson

336 karma · Joined June 2020

Bio

"Do not avoid suffering or close your eyes before suffering. Do not lose awareness of the existence of suffering in the life of the world. Find ways to be with those who are suffering, including personal contact, visits, images and sounds. By such means, awaken yourself and others to the reality of suffering in the world." - Thích Nhất Hạnh

Comments (40)

Oh totally (and you probably know much more about this than I do). I guess the key thing I'm challenging is the idea that there was a very fast transfer of power resulting merely from upgraded computing power moving from chimp-ancestor brain to human brain (a natural FOOM), which the discussion sometimes suggests. My understanding is that it's more that the new adaptations allowed for cumulative cultural change, which in turn allowed for more power.

Psychology/anthropology:

The misleading human-chimp analogy: AI will stand in relation to us the same way we stand in relation to chimps. I think this analogy basically ignores how humans have actually developed knowledge and power: not by rapid individual brain changes, but by slow, cumulative cultural changes. In turn, the analogy may lead us to make incorrect predictions about AI scenarios.

In addition to (farmed and wild) animal organizations, OPIS is worth checking out.

Here's a list of organizations focusing on the quality of the long-term future (including the level of suffering), from this post:
 

If you are persuaded by the arguments that the expected value of human expansion is not highly positive, or that we should prioritize the quality of the long-term future, promising approaches include research, field-building, and community-building, such as at the Center on Long-Term Risk, Center for Reducing Suffering, Future of Humanity Institute, Global Catastrophic Risk Institute, Legal Priorities Project, Open Philanthropy, and Sentience Institute, as well as working at other AI safety and EA organizations with an eye towards ensuring that, if we survive, the universe is better for it. Some of this work has substantial room for more funding, and related jobs can be found at these organizations’ websites and on the 80,000 Hours job board.

I found this to be a comprehensive critique of some of the EA community's theoretical tendencies (over-reliance on formalisms, false precision, and excessive faith in aggregation). +1 to Michael Townsend's suggestions, especially adding a TLDR to this post.

Longtermism + EA might include organizations primarily focused on the quality of the long-term future rather than its existence and scope (e.g., CLR, CRS, Sentience Institute), although the notion of existential risk construed broadly is a bit murky and potentially includes these (depending on how much of the reduction in quality threatens “humanity’s potential”).

Cool diagram! I would suggest rephrasing the Longtermism description to say “We should focus directly on future generations.” As it is, it implies that people only work on animal welfare and global poverty because of moral positions, rather than concerns about tractability, etc.

Glad to have you here :D

I'm just going to plug some recommendations for suffering-focused stuff: You can connect with other negative utilitarians and suffering-focused people in this Facebook group, check out this career advice, and explore issues in ethics and cause prioritization here.

Julia Wise (who commented earlier) runs the EA Peer Support Facebook group, which could be good to join, and there are many other EA and negative utilitarian/suffering-focused community groups. Feel free to PM me!

Also, spent hens are almost always sold for slaughter, during which many are probably exposed to torture-level suffering. I remember looking into this a while back and found only one pasture farm where spent hens were not sold for slaughter. You can find details for many farms here: https://www.cornucopia.org/scorecard/eggs/

I think considerations like these are important to challenge the recent emphasis on grounding x-risk (really, extinction risk) in near-term rather than long-term concerns. That perspective seems to assume that the EV of human expansion is pretty much settled, so we don’t have to engage too deeply with more fundamental issues in prioritization, and we can instead just focus on marketing.

I’d like to see more written directly comparing the tractability and neglectedness of population risk reduction and quality risk reduction. I wonder if you’ve overstated things in claiming that a lower EV for human expansion suggests shifting resources to long-term quality risks rather than, say, factory farming. This claim seems to require a more detailed comparison between possible interventions.
