373 karma · Joined Jun 2020


"Do not avoid suffering or close your eyes before suffering. Do not lose awareness of the existence of suffering in the life of the world. Find ways to be with those who are suffering, including personal contact, visits, images and sounds. By such means, awaken yourself and others to the reality of suffering in the world." - Thích Nhất Hạnh


Answer by seanrson · Apr 06, 2023

Hey Jack! In support of your view, I think you'd like some of Magnus Vinding's writings on the topic. Like you, he expresses some skepticism about focusing on narrower long-term interventions like AI safety research (vs. broader interventions like improved institutions).

Against your view, you could check out these two (i, ii) articles from CLR.

Feel free to message me if you'd like more resources. I'd love to chat further :)

How about Melanie Mitchell’s Artificial Intelligence: A Guide for Thinking Humans?


Oh totally (and you probably know much more about this than I do). I guess the key thing I'm challenging is the idea that there was a very fast transfer of power resulting just from upgraded computing power moving from chimp-ancestor brain -> human brain (a natural FOOM), which the discussion sometimes suggests. My understanding is that it's more that the new adaptations allowed for cumulative cultural change, which in turn allowed for more power.

Answer by seanrson · Sep 20, 2022


The misleading human-chimp analogy: AI will stand in relation to us as we stand in relation to chimps. I think this analogy basically ignores how humans have actually developed knowledge and power: not by rapid individual brain changes, but by slow, cumulative cultural change. In turn, the analogy may lead us to make incorrect predictions about AI scenarios.

Answer by seanrson · Sep 13, 2022

In addition to (farmed and wild) animal organizations, OPIS is worth checking out.

Answer by seanrson · Sep 13, 2022

Here's a list of organizations focusing on the quality of the long-term future (including the level of suffering), from this post:

If you are persuaded by the arguments that the expected value of human expansion is not highly positive, or that we should prioritize the quality of the long-term future, promising approaches include research, field-building, and community-building, such as at the Center on Long-Term Risk, Center for Reducing Suffering, Future of Humanity Institute, Global Catastrophic Risk Institute, Legal Priorities Project, Open Philanthropy, and Sentience Institute, as well as working at other AI safety and EA organizations with an eye towards ensuring that, if we survive, the universe is better for it. Some of this work has substantial room for more funding, and related jobs can be found on these organizations’ websites and on the 80,000 Hours job board.

I found this to be a comprehensive critique of some of the EA community's theoretical tendencies (over-reliance on formalisms, false precision, and excessive faith in aggregation). +1 to Michael Townsend's suggestions, especially adding a TLDR to this post.

Longtermism + EA might include organizations primarily focused on the quality of the long-term future rather than its existence and scope (e.g., CLR, CRS, Sentience Institute), although the notion of existential risk, construed broadly, is a bit murky and potentially includes these organizations (depending on how much the reduction in quality threatens “humanity’s potential”).


Cool diagram! I would suggest rephrasing the Longtermism description to say “We should focus directly on future generations.” As it stands, it implies that people work on animal welfare and global poverty only because of moral positions, rather than concerns about tractability, etc.

Answer by seanrson · Aug 23, 2022

Glad to have you here :D

I'm just going to plug some recommendations for suffering-focused stuff: You can connect with other negative utilitarians and suffering-focused people in this Facebook group, check out this career advice, and explore issues in ethics and cause prioritization here.

Julia Wise (who commented earlier) runs the EA Peer Support Facebook group, which could be good to join, and there are many other EA and negative utilitarian/suffering-focused community groups. Feel free to PM me!
