Halstead

Comments

The Folly of "EAs Should"

I don't think there's any need to apologise! I was trying to make the case that you haven't shown how we could distinguish reasonable from unreasonable uses of normative claims.

AMA: Elizabeth Edwards-Appell, former State Representative

What do you think the next 4 years have in store for the US, especially concerning the probability of a major change in institutions and order there?

The Folly of "EAs Should"

Hi, thanks for the reply!

The argument now has a bit of a motte-and-bailey feel, in that case. In various places you make claims such as:

  • "The Folly of "EAs Should"
  • "One consequence of this is that if there are no normative claims, any supposition about what ought to happen based on EA ideas is invalid"; 
  • "So I think we should discuss why Effective Altruism implying that there are specific and clear preferable options for Effective Altruists is often harmful"; 
  • "Claiming something normative given moral uncertainty, i.e. that we may be incorrect, is hard to justify. There are approaches to moral uncertainty that allow a resolution, but if EAs should cooperate, I argue that it may be useful, regardless of normative goals, to avoid normative statements that exclude some viewpoints."
  • "and conclusions based on the suppositions about key facts are usually unwarranted, at least without clear caveats about the positions needed to make the conclusions"

These seem to be claims to the effect that (1) we should (almost) never make normative claims, and (2) we should be strongly sceptical about knowing that one path is better, from an EA point of view, than another. But I don't see a defence of either of these claims in the piece. For example, I don't see a defence of the claim that it is mistaken to think/say/argue that focusing on US policy or on GiveWell charities is not the best way to do the most good.

If the claim is the weaker one that EAs can sometimes be overconfident in their view of the best way forward, or use language that can be off-putting, then that may be right. But that seems different to the "never say that some choices EAs make are better than others" claim, which is suggested elsewhere in the piece.

The Folly of "EAs Should"

I think this is consistent with Will's definition, because you can view the 'should' claims as claims about what we should do conditional on accepting the goal of doing the most good using reason and evidence.

The Folly of "EAs Should"

Thanks for taking the time to put this together. 

At the start, you seem to suggest that we should not use 'should' because of moral uncertainty, and then you gloss this as a claim about cooperation. Moral uncertainty is intrapersonal, whereas moral cooperation is interpersonal. It might be the case that my credence is split between Theory 1 and Theory 2, but that everyone else has the exact same credal split. In this case, there is no need for interpersonal cooperation between people with conflicting moral beliefs, because there is unanimity. Rather, the puzzle I face is how to act under moral uncertainty, which is a very different point.

In general, I think you have raised some sensible considerations about whether and how we might go about making EA more popular, such as around framing. But I think the idea that we should avoid talking about what EAs should do is untenable. Even while writing this comment, I have found it impossible not to say what EAs should do. Indeed, at several points in your post you make normative claims about what EA should do:

  • "So I think we should discuss why "Effective Altruism" implying that there are specific and clear preferable options for "Effective Altruists" is often harmful"
  • "Specifically, we should be wary of making the project exclusive rather than inclusive."
  • In the section on EA beyond small and weird, your argument is that maybe EA should be big and weird.
  • In the section on fragmentation, if I have interpreted you correctly, you are saying some people should not be overconfident about their cause commitments given peer disagreement.
  • In the section on human variety, you say that EAs shouldn't have narrow career paths.

Without making some normative claims about what EAs should and should not do, I don't see how EA could remain a distinctive movement. I just think it is true that EAs shouldn't donate to their local opera house, pet sanctuary, homeless shelter or to their private school, and that is what makes EA distinctive. Moreover, criticising the cause choices of EA actors just seems fundamental to the project. If our aim is to do the most good, then we should criticise approaches to it that seem unpromising.

As an example, Hauke and I wrote a piece criticising GiveWell's reliance on RCTs. I took this to be an argument about what GiveWell or other EA research orgs should do with their staff time. How would you propose reframing this?

Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations"

Hi. The A population and the Z population are both composed of merely possible future people, so person-affecting intuitions can't ground the repugnance. Some impartialist theories (e.g. critical level utilitarianism) are explicitly designed to avoid the repugnant conclusion.

The case is analogous to the debate in aggregation about whether one should cure a billion headaches or save someone's life. 

Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations"

Second comment, on your critique of Meacham...

As a (swivel-eyed) totalist, I'm loath to stick up for a person-affecting view, but I don't find your 'extremely radical implications' criticism of the view compelling, and I think it is an example of an unpromising way of approaching moral reasoning in general. The approach I have in mind is one that selects theories by whether they meet intuitive constraints rather than by looking at the deeper rationales for the theories.

I think a good response for Meacham would be that if you find the rationale for his theory compelling, then it is simply correct that it would be better to stop everyone existing. Similarly, totalism holds that it would be good to make everyone extinct if there is net suffering over pleasure (including among wild animals). Many might also find this counter-intuitive. But if you actually believe the deeper theoretical arguments for totalism, then this is just the correct answer. 

I agree that Meacham's view on extinction is wrong, but that is because of the deeper theoretical reasons: I think adding happy people to the world makes that world better, and I don't see an argument against that in the paper.

The impossibility theorems show formally that we cannot have a theory of population ethics that satisfies all of our intuitions about cases. So we should not use isolated case intuitions to select theories. We should instead focus on the deeper rationales for theories.

Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations"

Thanks a lot for taking the time to do this, Arden; I found it useful. I have a couple of comments.

Firstly, on the repugnant conclusion. I have long found the dominant dialectic in population ethics a bit strange. We (1) have this debate about whether merely possible future people are worthy of our ethical consideration, and then (2) people start talking about a conclusion that they find repugnant because of the aggregation of low quality lives. The repugnance of the repugnant conclusion in no way stems from the fact that the people involved are in the future; it stems from the way totalism aggregates low quality lives. This repugnance is irrelevant to questions of population ethics. It's a bit like if we were talking about the totalist view of population ethics, and then people started talking about the experience machine or other criticisms of hedonism: this may be a valid criticism of totalism, but it is beside the point, which is whether merely possible future people matter.
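
To make the aggregation point concrete, here is a minimal sketch of the totalist comparison that generates the repugnance (the symbols and numbers are my own illustration, not anything from the paper or the summary):

```latex
% Totalist aggregation behind the repugnant conclusion (illustrative numbers only).
% Population A: n people each at high welfare w; population Z: N people each at low welfare \varepsilon.
\[
V(A) = n\,w, \qquad V(Z) = N\,\varepsilon, \qquad Z \succ A \ \text{whenever}\ N\varepsilon > n\,w .
\]
% For example, n = 10^4 and w = 100 give V(A) = 10^6, while
% N = 10^9 and \varepsilon = 0.01 give V(Z) = 10^7, so totalism prefers Z.
```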

Related to this:

(1) There are perfect current generation analogues of the repugnant conclusion. Imagine you could provide a medicine that gives a low quality life to billions of currently existing people, or provide a different medicine to a much smaller number of people, giving them brilliant lives. The literature on aggregation also discusses the 'headaches vs death' case, which seems exactly analogous.

(2) For this reason, we shouldn't expect person-affecting views to avoid the repugnant conclusion. For one thing, some impartialist views, like critical level utilitarianism, avoid the repugnant conclusion. For another thing, the A population and the Z population are merely possible future people, so most person-affecting theories will say that they are incomparable.

Meacham's view avoids this with its saturating relation in which possible future people are assigned counterparts. But (1) there are current generation analogues to the RC as discussed above, so this doesn't actually solve the (debatable) puzzle of the RC. 

(2) Meacham's view would imply that if the people in the much larger population had lives on average only slightly worse than the people in the small population (A), then the smaller population would still be better. Thus, Meacham's view avoids the repugnant conclusion, but only by discounting the aggregation of high quality lives in some circumstances. This is not the solution to the repugnant conclusion that people wanted.

How modest should you be?

I agree that lots of these considerations are important. On 2) especially, I agree that being epistemically modest doesn't make things easy because choosing the right experts is a non-trivial task. One example of this is using AI researchers as the correct expert group on AGI timelines, which I have myself done in the past. AI researchers have shown themselves to be good at producing AI research, not at forecasting long-term AI trends, so it's really unclear that this is the right way to be modest in this case. 

On 4 also, I agree. I think coming to a sophisticated view will often involve deferring to different groups of experts on specific sub-questions. Maybe you defer to climate scientists on what will happen to the climate, philosophers on how to think about future costs, economists on the best way forward, etc. Identifying the correct expert groups is not always straightforward.

Strong Longtermism, Irrefutability, and Moral Progress

The benefits of GiveWell's charities are worked out as health or economic benefits which are realised in the future. For example, AMF is meant to be good because it allows people who would otherwise have died to live for a few more years. If you are agnostic about whether everyone will go extinct tomorrow, then you must be agnostic about whether people will actually get these extra years of life.
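
To put the point slightly more formally (a rough sketch of my own, not language from the post I'm replying to): the expected benefit of a life-saving intervention is weighted by the probability that its beneficiaries, and the world, are still around to enjoy it.

```latex
% Rough sketch: expected benefit of a life-saving intervention under extinction risk.
% y_t = value of the beneficiary's t-th extra year of life;
% p_t = probability that the world (and the beneficiary) still exists in year t.
\[
\mathbb{E}[\text{benefit}] \;=\; \sum_{t=1}^{T} p_t \, y_t .
\]
% Agnosticism about the p_t (e.g. about extinction tomorrow) entails agnosticism about the sum,
% and hence about whether the extra years of life are actually realised.
```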
