
seanrson

393 karma · Joined Jun 2020

Bio

"Do not avoid suffering or close your eyes before suffering. Do not lose awareness of the existence of suffering in the life of the world. Find ways to be with those who are suffering, including personal contact, visits, images and sounds. By such means, awaken yourself and others to the reality of suffering in the world." - Thích Nhất Hạnh

Comments (44)

I think the objection comes from the seeming asymmetry between over-attributing and under-attributing consciousness. It's fine to discuss our independent impressions about some topic, but when one's view is a minority position and the consequences of false beliefs are high, isn't there some obligation of epistemic humility?

Maybe the examples are ambiguous, but they don't seem cherrypicked to me. Aren't these some of the topics Yudkowsky is most known for discussing? It seems to me that the cherrypicking criticism would apply to opinions about, I don't know, monetary policy, not issues central to AI and cognitive science.

Hey Jack! In support of your view, I think you'd like some of Magnus Vinding's writings on the topic. Like you, he expresses some skepticism about focusing on narrower long-term interventions like AI safety research (vs. broader interventions like improved institutions).

Against your view, you could check out these two (i, ii) articles from CLR.

Feel free to message me if you'd like more resources. I'd love to chat further :)

How about Melanie Mitchell’s Artificial Intelligence: A Guide for Thinking Humans?

Oh totally (and you probably know much more about this than me). I guess the key thing I'm challenging is the idea that there was something like a very fast transfer of power resulting just from upgraded computing power moving from chimp-ancestor brain -> human brain (a natural FOOM), which the discussion sometimes suggests. My understanding is that it's more like the new adaptations allowed for cumulative cultural change, which allowed for more power.

Answer by seanrson · Sep 20, 2022

Psychology/anthropology:

The misleading human-chimp analogy: AI will stand in relation to us the same way we stand in relation to chimps. I think this analogy basically ignores how humans have actually developed knowledge and power--not by rapid individual brain changes, but by slow, cumulative cultural changes. In turn, the analogy may lead us to make incorrect predictions about AI scenarios.

In addition to (farmed and wild) animal organizations, OPIS is worth checking out.

Here's a list of organizations focusing on the quality of the long-term future (including the level of suffering), from this post:
 

If you are persuaded by the arguments that the expected value of human expansion is not highly positive or that we should prioritize the quality of the long-term future, promising approaches include research, field-building, and community-building, such as at the Center on Long-Term Risk, Center for Reducing Suffering, Future of Humanity Institute, Global Catastrophic Risk Institute, Legal Priorities Project, Open Philanthropy, and Sentience Institute, as well as working at other AI safety and EA organizations with an eye towards ensuring that, if we survive, the universe is better for it. Some of this work has substantial room for more funding, and related jobs can be found at these organizations’ websites and on the 80,000 Hours job board.

I found this to be a comprehensive critique of some of the EA community's theoretical tendencies (over-reliance on formalisms, false precision, and excessive faith in aggregation). +1 to Michael Townsend's suggestions, especially adding a TLDR to this post.

Longtermism + EA might include organizations primarily focused on the quality of the long-term future rather than its existence and scope (e.g., CLR, CRS, Sentience Institute), although the notion of existential risk construed broadly is a bit murky and potentially includes these (depending on how much of the reduction in quality threatens “humanity’s potential”).
