Reducing long-term risks from malevolent actors

Probably you're already aware of this, but the APA's Goldwater rule seems relevant. It states:

On occasion psychiatrists are asked for an opinion about an individual who is in the light of public attention or who has disclosed information about himself/herself through public media. In such circumstances, a psychiatrist may share with the public his or her expertise about psychiatric issues in general. However, it is unethical for a psychiatrist to offer a professional opinion unless he or she has conducted an examination and has been granted proper authorization for such a statement.

From the perspective of this article, this rule is problematic when applied to politicians with potentially harmful traits. (This is similar to how the right to confidentiality has the Duty to Warn exception.) A quick Google Scholar search turns up a couple of articles since 2016 that basically make this point. For example, see Lilienfeld et al. (2018): The Goldwater Rule: Perspectives From, and Implications for, Psychological Science.

Of course, the other important (more empirical than ethical) question regarding the Goldwater rule is whether "conducting an examination" is a necessary prerequisite for gaining insight into a person's alleged pathology. Lilienfeld et al. also address this issue at length.

Scientific Charity Movement

I would guess there are many other related movements. For instance, I recently found this article about Comte. Much of it also sounds somewhat EA-ish:

[T]he socialist philosopher Henri de Saint-Simon attempted to analyze the causes of social change, and how social order can be achieved. He suggested that there is a pattern to social progress, and that society goes through a number of different stages. But it was his protégé Auguste Comte who developed this idea into a comprehensive approach to the study of society on scientific principles, which he initially called “social physics” but later described as “sociology.”

Comte was a child of the Enlightenment, and his thinking was rooted in the ideals of the Age of Reason, with its rational, objective focus. [...] He had seen the power of science to transform: scientific discoveries had provided the technological advances that brought about the Industrial Revolution and created the modern world he lived in. The time had come, he said, for a social science that would not only give us an understanding of the mechanisms of social order and social change, but also provide us with the means of transforming society, in the same way that the physical sciences had helped to modify our physical environment.

The article also says that Comte was supported financially by the famous utilitarian John Stuart Mill, and that he changed his mind later in life and started a religious movement.

I guess the Scientific Charity Movement is special in that it (like EA) doesn't focus on systemic change.

Multiverse-wide cooperation in a nutshell

I agree that altruistic sentiments are a confounder in the prisoner's dilemma. Yudkowsky (who would cooperate against a copy) makes a similar point in The True Prisoner's Dilemma, and there are lots of psychology studies showing that humans cooperate with each other in the PD in cases where I think they (that is, each individually) shouldn't. (Cf. section 6.4 of the MSR paper.)
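
To make the decision-theoretic point concrete, here is a minimal sketch (with illustrative payoff numbers of my own choosing, not taken from any particular paper) of why cooperating against an exact copy comes out ahead even though defection dominates against an independent opponent:

```python
# Illustrative one-shot prisoner's dilemma payoffs (higher utility is better).
# PAYOFF[(my_move, their_move)] = my utility. "C" = cooperate, "D" = defect.
PAYOFF = {
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # I cooperate, they defect
    ("D", "C"): 5,  # I defect, they cooperate
    ("D", "D"): 1,  # mutual defection
}

def payoff_vs_copy(my_move):
    # An exact copy makes the same decision I do, so its move mirrors mine:
    # only the diagonal outcomes (C, C) and (D, D) are attainable.
    return PAYOFF[(my_move, my_move)]

def payoff_vs_independent(my_move, their_move):
    # Against an independent player, any combination of moves is possible.
    return PAYOFF[(my_move, their_move)]

# Against a copy, cooperation beats defection (3 vs. 1):
assert payoff_vs_copy("C") > payoff_vs_copy("D")

# Against an independent opponent, defection is better whatever they do:
for their_move in ("C", "D"):
    assert payoff_vs_independent("D", their_move) > payoff_vs_independent("C", their_move)
```

The point of the sketch is that no altruistic weight on the other player's payoff is needed: the case for cooperating against a copy goes through with purely selfish utilities, once you take into account that the copy's move mirrors yours.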

But I don't think that altruistic sentiments are the primary reason why some philosophers and other sophisticated people tend to favor cooperation in the prisoner's dilemma against a copy. As you may know, Newcomb's problem is decision-theoretically similar to the PD against a copy. In contrast to the PD, however, it doesn't seem to evoke any altruistic sentiments. And yet, many people prefer EDT's recommendations in Newcomb's problem. Thus, the "altruism error theory" of cooperation in the PD is not particularly convincing.
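
The EDT/CDT divergence in Newcomb's problem can be spelled out with a short expected-value calculation. This is a sketch with the standard stipulated amounts ($1,000 in the transparent box, $1,000,000 in the opaque box) and an assumed predictor accuracy of 99%:

```python
# Newcomb's problem: box B contains $1,000,000 iff the predictor foresaw
# one-boxing; box A always contains $1,000. ACCURACY is an assumption.
BOX_A = 1_000
BOX_B = 1_000_000
ACCURACY = 0.99  # assumed predictor accuracy

def edt_value(action):
    # EDT conditions on the action: choosing it is evidence about the
    # prediction, so P(box B is full | action) depends on the action.
    p_full = ACCURACY if action == "one-box" else 1 - ACCURACY
    if action == "one-box":
        return p_full * BOX_B
    return p_full * BOX_B + BOX_A

def cdt_value(action, p_full):
    # CDT treats the box contents as already fixed: p_full is held constant
    # across actions, so two-boxing always adds BOX_A on top.
    if action == "one-box":
        return p_full * BOX_B
    return p_full * BOX_B + BOX_A

# EDT prefers one-boxing (roughly $990,000 vs. $11,000):
assert edt_value("one-box") > edt_value("two-box")

# CDT prefers two-boxing for any fixed belief about the box contents:
for p in (0.0, 0.5, 1.0):
    assert cdt_value("two-box", p) > cdt_value("one-box", p)
```

Since no other agent's welfare enters these calculations at all, the pull toward one-boxing (and, by analogy, toward cooperating against a copy) can't be explained by altruistic sentiment.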

I don't see much evidence in favor of the "wishful thinking" hypothesis. It, too, seems to fail in the non-multiverse problems like Newcomb's paradox. Also, it's easy to come up with lots of incorrect theories about how any particular view results from biased epistemics, so I have quite low credence in any such hypothesis that isn't backed up by any evidence.

before I’m willing to throw out causality

Of course, causal eliminativism (or skepticism) is one motivation to one-box in Newcomb's problem, but subscribing to eliminativism is not necessary to do so.

For example, in Evidence, Decision and Causality, Arif Ahmed argues that causality is irrelevant for decision making. (The book starts with: "Causality is a pointless superstition. These days it would take more than one book to persuade anyone of that. This book focuses on the ‘pointless’ bit, not the ‘superstition’ bit. I take for granted that there are causal relations and ask what doing so is good for. More narrowly still, I ask whether causal belief plays a special role in decision.") Alternatively, one could even accept the use of causal relationships for informing one's decisions and still endorse one-boxing. See, e.g., Yudkowsky, 2010; Fisher, n.d.; Spohn, 2012; or this talk by Ilya Shpitser.

Against neglectedness

A few of the points made in this piece are similar to the points I make here:

For example, the linked piece also argues that returns may diminish in a variety of different ways. In particular, it argues that returns diminish more slowly if the problem is big, and that clustered-value problems only produce benefits once the whole problem is solved.