Hi all, I'm currently working on a contribution to a special issue of Public Affairs Quarterly on the topic of "philosophical issues in effective altruism". I'm hoping that my contribution can provide a helpful survey of common philosophical objections to EA (and why I think those objections fail)—the sort of thing that might be useful to assign in an undergraduate philosophy class discussing EA.

The abstract:

Effective altruism sounds so innocuous—who could possibly be opposed to doing good, more effectively? Yet it has inspired significant backlash in recent years. This paper addresses some common misconceptions, and argues that the core ideas of effective altruism are both excellent and widely neglected. Reasonable people may disagree on details of implementation, but every decent person should share the basic goals or values underlying effective altruism.

I cover:

  • Five objections to moral prioritization (including the systems critique)
  • Earning to give
  • Billionaire philanthropy
  • Longtermism; and
  • Political critique.

Given the broad (survey-style) scope of the paper, each argument is addressed pretty briefly. But I hope it nonetheless contains some useful insights. For example, I suggest the following "simple dilemma for those who claim that EA is incapable of recognizing the need for 'systemic change'":

Either their total evidence supports the idea that attempting to promote systemic change would be a better bet (in expectation) than safer alternatives, or it does not. If it does, then EA principles straightforwardly endorse attempting to promote systemic change. If it does not, then by their own lights they have no basis for thinking it a better option. In neither case does it constitute a coherent objection to EA principles.

On earning to give:

Rare exceptions aside, most careers are presumably permissible. The basic idea of earning to give is just that we have good moral reasons to prefer better-paying careers, from among our permissible options, if we would donate the excess earnings. There can thus be excellent altruistic reasons to pursue higher pay. This claim is both true and widely neglected. The same may be said of the comparative claim that one could easily have more moral reason to pursue "earning to give" than to pursue a conventionally "altruistic" career that more directly helps people. This comparative claim, too, is both true and widely neglected. Neither of these important truths is threatened by the deontologist's claim that one should not pursue an impermissible career. The relevant moral claim is just that the directness of our moral aid is not intrinsically morally significant, so a wider range of possible actions are potentially worth considering, for altruistic reasons, than people commonly recognize.

On billionaire philanthropy:

EA explicitly acknowledges the fact that billionaire philanthropists are capable of doing immense good, not just immense harm. Some find this an inconvenient truth, and may dislike EA for highlighting it. But I do not think it is objectionable to acknowledge relevant facts, even when politically inconvenient... Unless critics seriously want billionaires to deliberately try to do less good rather than more, it's hard to make sense of their opposing EA principles on the basis of how they apply to billionaires.

I still have time to make revisions -- and space to expand the paper if needed -- so if anyone has time to read the whole draft and offer any feedback (either in comments below, or privately via DM/email/whatever), that would be most welcome!

Comments

On page 27, you clarify that many concepts in the article are not core to EA, but are "specific ideas contingently associated with EA, such as earning to give and life-affirming longtermism" that could be rejected "while still embracing the core of effective altruism." I think it would be helpful to distinguish core commitments from non-core issues early in the article.

I would also consider toning down some of the strong rhetorical claims, like "every decent person." You'd need much more space to cover every potential objection to EA's philosophical underpinnings before you could substantiate this claim to that level of confidence. Moreover, the reader knows they are reading a journal volume on philosophical issues in EA, which implies that the journal editors at least think there are plausible philosophical criticisms. The reader likely also knows that other contributors have identified what they think are substantial philosophical problems, and that some EA principles do not align with the assumptions a reader new to EA is likely to have.

All that is to say that I think a tone of "core concepts are obviously right, and no decent person would argue otherwise" would lead most neutral readers to conclude (1) that you're setting up strawmen, or (2) that you're defining the "core ideas" broadly enough to be almost truisms, leaving a lot of the heavy lifting to be done by unclearly defined "details of implementation."

I think this paper is weak from the outset in similar ways to the entire philosophical project of EA overall. You start with the definition of EA as "the project of trying to find the best ways of helping others, and putting them into practice". In that definition "the best" means "the most effective", which is one of the ways in which EA arguments rhetorically load the dice. If I don't agree that the most effective way to help people (under EA definitions) is always and necessarily the best way to help people, then the whole paper is weakened. Essentially, one ends up preaching to the choir - which is fine if that's what one wants to do, of course.

I take issue with a number of the arguments in the paper, but I have no desire to respond to the entire thing. However, I will focus on the part of the Moral Prioritisation section that quotes Mark Goldring of Oxfam - not because I'm a fan of him or Oxfam, which I am not, but because your misinterpretation of his position is quite illustrative. You claim that "Goldring seems to be implying that so long as we help some children in each country, it does not matter how many children we end up abandoning", but this is not the argument or an i…

Richard Y Chappell (1y):
I'm very puzzled by this comment. Your characterization of Goldring's argument is precisely the argument I'm responding to, so I'm confused that you present this as though you think I am interpreting Goldring as saying something different.  I argue that an objectionable implication of Goldring's position (and yours) is that we should abandon a larger group of children because they are in a country (Bangladesh) for which we have already helped some other children. You haven't responded to my argument at all.
Paul Currion (1y):
Thank you for replying, although I admit to being equally puzzled by your puzzlement. What Goldring is paraphrased as saying is that "For a certain cost, the charity might enable only a few children to go to school in a country such as South Sudan, where the barriers to school attendance are high, he says; but that does not mean it should work only in countries where the cost of schooling is cheaper, such as Bangladesh, because that would abandon the South Sudanese children." Goldring is not "implying that so long as we help some children in each country, it does not matter how many children we end up abandoning". I simply don't see where you get that from. It's just not the argument that he's making. His argument is that the needs of children in South Sudan and Bangladesh are equally important, that the foundation for Oxfam's work is needs rather than costs, and that the accident of birth that placed a child in South Sudan and not Bangladesh is thus not a justification to abandon the former. What Goldring does imply is that applying "EA principles" would require Oxfam to abandon all the children of South Sudan - and probably for every aid organisation to abandon the entire country, since South Sudan is a difficult and costly working environment. In this case "quantity has a quality all of its own" - the argument that justifies abandoning 100 children in one country in favour of 1000 children in another looks markedly different when it's used to justify withdrawing all forms of assistance from an entire country. This highlights the conflict between EA's approach - which takes "effectiveness" (specifically cost-effectiveness) as an intrinsic rather than instrumental value - and the framework used by others, who have other intrinsic values. That conflict is the reason why we may be talking past each other - I recognise that you probably won't agree with this argument, and may continue to be puzzled. I would suggest to you that this is the fundamental weakness of t

How far are you willing to push this? Presumably, you wouldn't educate 1 child in South Sudan and 10 in Bangladesh, rather than 0 in South Sudan and 10,000 in Bangladesh, just so that you can say South Sudan hasn't been abandoned? So exactly how many more children have to go without education before you say "that's too many more" and switch to one country? What could justify a particular cut-off?

Paul Currion (1y):
I'm not a utilitarian, so I reject the premise of this question when presented in the abstract as it is here. Effectiveness for me is an instrumental value, so I would need to have a clearer picture of the operating environments in both countries and the funding environment at the global level before I would be able to answer it.
zchuang (1y):
Just because you're not a utilitarian doesn't mean you can reject the premise of the question. Deontologists have the same problem with trade-offs! The premise of the question is one even the Oxfam report accepts. I also don't think you know what an instrumental value is. I think you keep throwing the term out without understanding what it means, or how it frames the instrumental empirical question in a way that dissolves other values.
Paul Currion (1y):
Can you give me an argument for why I can't reject the premise of the question, rather than just telling me I can't? I've explained why I reject it in these comments. Goldring "accepts" the premise only in the sense that he's attending an event which is based entirely on that premise, and has had that premise forced onto him through the rhetorical trick which I described in my reply to Chappell. I think you're partly right about my confusion about instrumental values. Now that I reconsider, the humanitarian principles are a strange mix of instrumental and intrinsic values; regardless, effectiveness remains solely an instrumental value. Perhaps you could explain what you mean by "other values dissolve"?
zchuang (1y):
Reasons why you can't reject the premise:

1. Trade-offs inhere in all ethical systems, so "rejecting utilitarianism" doesn't do the work you think it does. The values you listed up in the thread that "inhere" in…
2. The actual premise you're rejecting is one you rely on: the equal moral consideration of peoples. Each time you manipulate the ratio of the trade-off by rejecting "cost-effectiveness", you stop treating people as morally equivalent.

Reasons you actually can reject the premise:

1. Actions that are upside bargains, e.g. breaking the trade-off by having both options done, but this is not the nature of aid as it currently is.

I think what you think you're doing by saying you're not a utilitarian is saying that you care about things EAs don't care about in the impact of aid. But even with other values you create different ratios of trade-offs and Pareto optimality, such that you're always trading off something even if it's not utilitarianism. It's still something that is a cost and something that is a benefit. There's no rhetorical trick here, just the fungible nature of cash. The fact that cost-effectiveness isn't an intrinsic value is what makes it a deciding force in the ratio of trade-offs in other values.
Paul Currion (1y):
Can you explain what you mean by "There's no rhetorical trick here just the fungible nature of cash"? In practice cost effectiveness is a deciding force but not the deciding force.
zchuang (1y):
I think I see what you're saying: there are a plurality of values that EAs don't seem to care about, that are deeply important, and that are skipped over by naive utilitarianism. These values cannot be measured through cost-effectiveness because they are deeply ingrained in the human experience. The stronger version, which I think you're trying to elucidate but haven't stated clearly, is that cost-effectiveness can be inversely correlated with another value that is "more" determinant on a moral level. E.g. North Koreans cost a lot more to help than Nigerians with malaria, but that cost-effectiveness difficulty inheres in their situation and in the injustice of it in and of itself. What I am saying is that insofar as we're in the realm of charity and budgets and financial trade-offs, it doesn't matter what your intrinsic value commitments are. There are choices that produce more of that value or less of it, which is what the concept of cost-effectiveness is. Thus, it is a crux no matter what intrinsic value system you pick. Even deontology has these issues, as I noted in my first response to you.
Paul Currion (1y):
Thanks, yes. I think I'm elucidating it pretty clearly, but perhaps I'm wrong! As I've said, I'm not denying that cost effectiveness is a determinant in decision-making - it plainly is a determinant, and an important one. What I am claiming is that it is not the primary determinant in decision-making, and simple calculus (as in the original thought experiment) is not really useful for decision-making.
Paul Currion (1y):
The premise I reject is not that there are always trade-offs, but that a naive utilitarian calculus that abstracts and dehumanises individuals by presenting them as numbers in an equation unmoored from reality is a useful or ethical way to frame the question of how "best" to help people.
David Mathers (1y):
What is "the premise" that you reject?

The premise that a naive utilitarian calculus that abstracts and dehumanises individuals by presenting them as numbers in an equation unmoored from reality is a useful or ethical way to frame the question of how "best" to help people. As I've said in another comment, the trolley problem was meant as a stimulus to discussion, not as a guide for making policy decisions around public transport systems.

EDIT: I realise that this description may come across as harsh on a forum populated almost entirely by utilitarians, but I felt that it was important to be clear about the exact nature of my objection. My position is that I agree that utilitarianism should be a tool in our ethical toolkit, but I disagree that it is the tool that we should reach for exclusively, or even first of all.

David Mathers (1y):
How can we discuss whether or not it makes sense to help more people over less without discussing cases where more/less people are helped? 
Paul Currion (1y):
I suppose that part of my point is that we may not be discussing whether or not it makes sense to help more people over less. We may be discussing how we can help people who are most in need, who may cost more or less to help than other people. I've claimed that naive utilitarian calculus is simply not that useful in guiding actual policy decisions. Those decisions - which happen every day in aid organisations - need to include a much wider range of factors than just numbers. If we keep it in the realm of thought experiments, it's a simple question and an obvious answer. But do you really believe that the philosophical thought experiment maps smoothly and clearly to the real world problem?
David Mathers (1y):
'But do you really believe that the philosophical thought experiment maps smoothly and clearly to the real world problem?' No, of course not. But in assessing the real world problem, you seemed to be relying on some sort of claim that it is sometimes better to help fewer people if it means a fairer distribution of help. So I was raising a problem for that view: if you think it is sometimes better to distribute money to more countries even though it helps fewer people, then either that is always better in any possible circumstance, realistic or otherwise, or it's sometimes better and sometimes not, depending on circumstance. The thought experiment comes in to show that there are possible, albeit not very realistic, circumstances where it clearly isn't better. So that shows that one of the two options available to someone with your view is wrong. Then I challenged the other option, that it is sometimes better and sometimes not, but the thought experiment wasn't doing any work there. Instead, I just asked what you think determines when it is better to distribute the money more evenly between countries versus when it is better to just help the most people, and implied that this is a hard question to answer. As it happens, I don't actually think that this view is definitely wrong, and you have hinted at a good answer, namely that we should sometimes help fewer people in order to prioritize the absolutely worst off. But I think it is a genuine problem for views like this that it's always going to look a bit hazy what determines exactly how much you should prioritize the worst off, and the view does seem to imply there must be an answer to that.
Paul Currion (1y):
I think we need to get away from “countries” as a frame - the thought experiment is the same whether it’s between countries, within a country, or even within a community. So my claim is not that “it is sometimes better to distribute money to more countries even though it helps less people”. If we take the Bangladeshi school thought experiment - that with available funding, you can educate either 1000 boys or 800 girls, because girls face more barriers to access education - my claim is obviously not that “it is sometimes better to distribute money to more genders even though it helps less people”. You could definitely describe it that way - just as Chappell describes Goldring’s statement - but that is clearly not the basis of the decision itself, which is more concerned with relative needs in an equity framework. You are right to describe my basis for making decisions as context-specific. It is therefore fair to say that I believe that in some circumstances it is morally justified to help fewer people if those people are in greater need. The view that this is *always* better is clearly wrong, but I don’t make that assessment on the basis of the thought experiment, but on the basis that moral decisions are almost always context-specific and often fuzzy around the edges. So while I agree that it is always going to look a bit hazy what determines your priorities, I don’t see it as a problem, but simply as the background against which decisions need to be made. Would you agree that one of the appeals of utilitarianism is that it claims to resolve at least some of that haziness?
David Mathers (1y):
'Would you agree that one of the appeals of utilitarianism is that it claims to resolve at least some of that haziness?' Yes, indeed, I think I agree with everything in this last post. In general non-utilitarian views tend to capture more of what we actually care about at the cost of making more distinctions that look arbitrary or hard to justify on reflection. It's a hard question how to trade off between these things. Though be careful not to make the mistake of thinking utilitarianism implies that the facts about what empirical effects an action will have are simple: it says nothing about that at all.   Or at least, I think that, technically speaking, it is true that "it is sometimes better to distribute money to more genders even though it helps less people" is something you believe, but that's a highly misleading way of describing your view: i.e. likely to make a reasonable person who takes it at face value believe other things about you and your view that are false.  I think the countries thing probably got this conversation off on the wrong foot, because EAs have very strong opposition to the idea that national boundaries ever have moral significance. But it was probably the fault of Richard's original article that the conversation started there, since the charitable reading of Goldring was that he was making a point about prioritizing the worst off and using an example with countries to illustrate that, not saying that it's inherently more fair to distribute resources across more countries.  As a further point: EAs who are philosophers likely are aware, when they are being careful and reflective, that some people reasonably think that it is better to help a person the worse off they are, since the philosopher Derek Parfit, who is one of the intellectual founders of EA, invented a particular famous variant of that view: https://oxfordre.com/politics/politics/view/10.1093/acrefore/9780190228637.001.0001/acrefore-9780190228637-e-232 My guess (though it is
Paul Currion (1y):
Likewise, I think I agree with everything in this post. I appreciate that you took the time to engage with this discussion, and that you found grounds for agreement at least around the hazy edges.
[anonymous] (1y):
Thanks to you and @Dr. David Mathers for this useful discussion. 
zchuang (1y):
Wait, I just want to make an object-level objection for third-party readers: most policy-making in liberal democracies is guided by cost-benefit analysis and the assignment of a value of a statistical life (VSL).
Paul Currion (1y):
To clarify your objection: such policy-making is guided by, but not solely determined by, such approaches.
Richard Y Chappell (1y):
What do you mean by "not... good faith"? I take that to imply a lack of intellectual integrity, which seems a pretty serious (and insulting) charge. I don't take Goldring to be arguing in bad faith -- I just think his position is objectively irrational and poorly supported. If you think my arguments are bad, you're similarly welcome to explain why you believe that, but I really don't think anyone should be accusing me of failing to engage in good faith. On to the substance: you (and Goldring) are especially concerned not to "withdraw all... assistance from an entire country." You would prefer to help fewer children, some in South Sudan and some in Bangladesh, rather than help a larger number of children in Bangladesh. When you help fewer people, you are thereby "abandoning", i.e. not helping, a larger number of people.  Does it matter how many more we could help in Bangladesh? It doesn't seem to matter to you or Goldring. But that is just to say that it does not matter how many (more) children we end up abandoning, on your view, so long as we help some in each country.  That's the implication of your view, right?  Can you explain why you think this isn't an accurate characterization? ETA: I realize now there's a possible reading of the "it doesn't matter" claim on which it could be taken to impute a lack of concern even for Pareto improvements, i.e. saving just one person in each country being no better than 10 people in each country. I certainly don't mean to attribute that view to Goldring, so will be sure to reword that sentence more carefully!
zchuang (1y):
I don't think you're understanding what EAs truly object to, though. If the problem is the moral arbitrariness and moral luck of South Sudan vs. Bangladesh, then you end up having to prioritise. EA works on the margins, so the argument conditionally breaks at the point where quantity has a quality all of its own. If borders and the birth lottery are truly arbitrary, I don't understand why it would be so bad to "abandon" a country if there are equal needs for kids in each country, in the same way that typical humanitarians are OK with donations being moved from the first world to the developing world. To invert your example: the argument that justifies funding every single country because they are distinct categories also justifies abandoning 1000 children in one country for 100 children in another country. If anything, your example leans on the fact that South Sudan and Bangladesh feel worthy on both ends, so it feels intuitive. But the categories of countries themselves are wonderfully arbitrary; South Sudan did not exist until 2011! Moreover, I wish you would defend another intrinsic value that could be isolated away from cost-effectiveness. Is it a deserts claim, that the most difficult places to administer aid are also the most "needy" and therefore deserve it more even if it costs more?
Paul Currion (1y):
I'm not sure what the last sentence of your first paragraph means - can you explain it for me? For most of the rest of your comment, I'd refer you to my other answer at https://forum.effectivealtruism.org/posts/ShCENF54ZN6bxaysL/why-not-ea-paper-draft?commentId=o4q6AFoKt7kDpN5cD. I don't know if that answers your points, but it should clarify a little. The intrinsic values that I would point to in this context are the humanitarian principles of humanity, neutrality, impartiality and independence. (However I should note that these are the subject of continual debate, and neutrality in particular has come under serious pressure during the Ukraine war.) 
zchuang (1y):
Also, to be clear, "humanity, neutrality, impartiality and independence" aren't values as most philosophers understand them. Neutrality and impartiality are not ones you seem to defend above, which is why people find you confused.
Paul Currion (1y):
Yes, you're absolutely right. Academic philosophy has largely failed to engage with contemporary humanitarianism, which is puzzling given that the field of humanitarianism provides plenty of examples of actual moral dilemmas. That failure is also what leads to the situation we have now, where an academic paper that wants to engage with that topic lacks the language to describe it accurately. This might be because the ethics of humanitarian action is (broadly) a species of virtue ethics, in which those humanitarian principles are the values that need to be cultivated by individuals and organisations in order to make the sort of utilitarian, deontological or other ethical decisions that we are using as thought experiments here, guided by the sort of "practical wisdom" that is often not factored into those thought experiments.
zchuang (1y):
I think the problem is actually reversed. Most humanitarian organisations do not have firm foundational beliefs, and use poverty porn and the feelings of the donor to guide judgements. The language you use about the value of "humanity" is a non sequitur and doesn't provide information -- even those with high status in humanitarian aid circles, like Rory Stewart, express a lot of regret over this fuzziness. Put sharply, I don't think contemporary humanitarianism has the language to describe itself accurately, and "humanity, neutrality, impartiality and independence" are not values but rather buzzwords for charity reports and pamphlets. From what I've inferred, you hold some sort of Bernard Williams-style moral particularism rather than virtue ethics, in that you think there are morally salient facts everywhere on the ground in these cases, and that what matters is the configuration of the morally relevant features of the action in a particular context. But the problem in this discourse is that you won't name the thing you're defending, because I don't think you know what exactly your moral system is, beyond being against thought experiments and the vibes of academic philosophy.
Paul Currion (1y):
This is definitely an uncharitable reading of humanitarian action. The humanitarian principles are rarely to be found in "charity reports and pamphlets" (by which I assume you mean public-facing documents), and if they are found there, they are not the focus of those documents at all. The exception would be the ICRC, for the obvious reason that the principles largely originated in their work and they act as stewards to some extent. Your characterisation of humanitarian organisations as "using poverty porn and feelings of the donor to guide judgements" and so on - well, you're welcome to your opinion, but it elides the hugely complex nature of decision-making in humanitarian action. Humanitarian organisations clearly have foundational beliefs, even if they're not sufficiently unambiguous for you. The world is unfortunately an ambiguous place. (I should explain at this point that I am not a full-throated and unapologetic supporter of the humanitarian sector. I am in fact a sharp critic of the way in which it works, and I appreciate sharp criticism of it in general. But that criticism needs to be well-informed rather than armchair criticism, which I suppose is why I'm in this thread!) I do in fact practice virtue ethics, and while there is some affinity between humanitarian decision-making and moral particularism, there are clearly moral principles in the former which the latter might deny - the principle of impartiality means that one is required to provide assistance to (for example) genocidaires from Rwanda when they find themselves in a refugee camp in Tanzania, regardless of what criminal actions they might have carried out in their own country. I'm not sure what you mean when you say that I won't name the thing I'm defending because I don't know what my moral system is. My personal moral framework is one of virtue ethics, taking its cue from classical virtue ethics but aware that the virtues of the classical age are not necessarily best for flouris
zchuang (1y):
OK, to be clear, I am 100% certain you don't know what virtue ethics is, because you're literally describing principles of action, not virtues. Virtues in virtue ethics are dispositions we cultivate in ourselves, not consequences in the world. So taking your example of the "principle of impartiality": if you are a virtue ethicist, you're trying to cultivate "impartiality", not be duty-bound by it. This is also why you're confused when you name virtues, because independence is a virtue in the person receiving aid, not in you! Also, these are canonically not virtues any well-known virtue ethicist would name! Moreover, this impartiality is more a metaethical principle that you keep violating in your own examples. If Oxfam trades off 2:1 Bangladeshis to South Sudanese (replace the countries with whatever you want), that breaks impartiality, because you are necessarily saying one life is worth more than another (there are morally particular facts that can change this, obviously, but you keep biting the bullet on any of them and just say the world is fuzzy!) Overall, the world is fuzzy, but the problem in this chain of logic is your fuzziness in understanding what commonly used concepts like virtue ethics are. It's really frustrating when you keep excusing your mistaken understanding of concepts with the world being fuzzy. Please just go read Alasdair MacIntyre's After Virtue.
Paul Currion (1y):
“I am 100% certain you don't know what virtue ethics is because you're literally describing principles of action not virtues… Virtues in virtues ethics are dispositions we cultivate in ourselves not in the consequence of the world.” I fear that it may be you who do not know what virtue ethics is. You refer to MacIntyre, who defines virtues as qualities requiring both possession *and* exercise. One does not become courageous by sitting at home thinking about how courageous one will become, but by practising acts of courage. Virtues are developed through such practice, which surely means that they are principles of action. “Also these are canonically not virtues any well-known virtue ethicist would name!” I agree. I haven’t claimed that they are, and I’ve referred to humanitarian ethics as a species of virtue ethics for that very reason. But one of the strengths of virtue ethics is that it is possible - indeed necessary - to update what the virtues mean in practice to account for the way in which the social environment has changed - and in fact there’s no reason why one shouldn’t introduce new virtues that may be more appropriate for human flourishing. “This is also why you're confused when you name virtues because independence is a virtue in the person receiving aid not in you!... Moreover, this impartiality is more a metaethical principle that you keep violating in your own examples. If Oxfam trades off 2:1 Bangladeshis to South Sudanese (replace the countries with whatever you want) that breaks impartiality because you are necessarily saying one life is worth more than another” I believe you are confused here. Independence is not a virtue of the person receiving aid but of the organisation providing aid - and here I’ll use the ICRC as the exemplar - which “must always maintain their autonomy so that they may be able at all times to act in accordance with the principles”. Likewise you are confused about what is meant by impartiality, which requires that the org
1
zchuang
1y
This is not how words work. You can't just say "I believe X is a virtue in humanitarian ethics" (which is itself ill-defined). I truly don't think you understand the concept of virtue ethics at the end of the day. This sounds mean, but it's definitionally a misunderstanding you keep doubling down on, like everything here. For instance, you tried to use the Red Cross as an example, but most virtue ethicists wouldn't abide an entity holding a virtue (the ICRC can't cultivate a virtue; it's not a person) -- because that's definitionally not what a virtue is. You also misquoted Alasdair MacIntyre and misrepresented him, as shown by the fact that your quotes all come from Google Books snippets from undergraduate classes. I think you believe what you believe and I'll leave it at that. This is not a productive conversation. Funnily enough, I do not think the paper draft is charitable, but I don't think you fully understand your axiomatic values (you are probably a prioritarian, not a virtue ethicist). I also think the educating-girls example is a very strong prioritarian argument. [edited for tone]
1
Paul Currion
1y
“You can't just say I believe X is a virtue because in humanitarian ethics (which is ill-defined). I truly don't think you understand the concept of virtue ethics at the end of the day… You also misquoted Alasdair MacIntyre and misrepresented it.”

Let me then quote MacIntyre in full, to avoid misrepresenting him.

1. MacIntyre defines a practice as “any coherent and complex form of socially established cooperative human activity through which goods internal to that form of activity are realized in the course of trying to achieve those standards of excellence which are appropriate to, and partially definitive of, that form of activity”. MacIntyre gives a range of examples of practices, including the games of football and chess, the professional disciplines of architecture and farming, scientific enquiries in physics, chemistry and biology, the creative pursuits of painting and music, and “the creation and sustaining of human communities - of households, cities, nations”. Humanitarian action meets this definition of a practice.

2. MacIntyre defines a good with reference to its conception in the Middle Ages as “The ends to which men as members of such a species move… and their movement towards or away from various goods are to be explained with reference to the virtues and vices which they have learned or failed to learn and the forms of practical reasoning which they employ.” The humanitarian imperative - “that action should be taken to prevent or alleviate human suffering arising out of disaster or conflict” - meets this definition of a good.

3. MacIntyre defines a virtue as “an acquired human quality the possession and exercise of which tends to enable us to achieve those goods which are internal to practices and the lack of which effectively prevents us from achieving any such goods”. Humanitarian principles can be treated as virtues under this definition. They are acquired human qualities which enable us to achieve a good (the humanitarian imperative) which is internal t
5
Moya
1y
When I saw the title of this post I really wanted to like it, and I appreciate the effort that has gone into it all so far. Unfortunately, I have to agree with Paul: both the post and the paper draft itself read as pretty weak to me. In many instances, it seems that you argue against strawpeople rather than engaging with criticism of EA in good faith, and, even worse, the arguments you use to counter the criticism boil down to what EA is advocating for being “obviously” correct. (You wrote in the post that the arguments are much shortened because there is just so much ground to cover, but I believe that if an argument cannot be made in a convincing way, we should either spend more time making it properly or drop the discussion entirely, rather than just vaguely pointing towards something and hoping for the best.) Also, you seem to defend not all of EA, but whatever part of EA is most easily defendable in each particular paragraph - such as arguing that EA does not require people to always follow its moral implications, only sometimes - which some EAers might agree with, but certainly not all.

Can you mention some places where you think he has strawmanned people and what you think the correct interpretation of them is? 

4
pseudonym
1y
This is more of a misread than a strawman, but on page 8 the paper says: I don't think saying that Adams, Crary, and Gruen "illegitimately presuppose that “complicity” with suboptimal institutions entails net harm" is correct. The paper misunderstands what they were saying. Here's the full sentence (emphasis added): I interpret it as saying: In other words, it is an empirical claim that the way EA is carried out in practice has some counterproductive results. It is not a normative claim about whether complicity with suboptimal institutions is ever okay. 
4
Richard Y Chappell
1y
But they never even try to argue that EA support for "the very social structures that cause suffering" does more harm than good. As indicated by the "thereby", they seem to take the mere fact of complicity to suffice for "undermining its efforts to 'do the most good'." I agree that they're talking about the way that EA principles are "actualized". They're empirically actualized in ways that involve complicity with suboptimal institutions. And the way these authors argue, they take this fact to suffice for critique. I'm pointing out that this fact doesn't suffice. They need to further show that the complicity does more harm than good.
3
Moya
1y
Here is my criticism in more detail. It starts here in the abstract - writing this way immediately sounds condescending to me, making disagreement with EA sound like an entirely unreasonable affair. So this is devaluing the position of a hypothetical someone opposing EA, rather than honestly engaging with their criticisms.

On systemic change: The whole point is that systemic change is very hard to estimate. It is like sitting on a local maximum of awesomeness; we know that there must be higher hills - higher maxima - out there, but we do not know how to get there, and any particular systemic change might as well make things worse. But if EA principles told us to only ever sit at this local maximum and never even attempt to go anywhere else, then those would not be principles I would be happy following. So yes, people who support systemic change often do not have the mathematical basis to argue that it will necessarily be a good deal - but that does not mean that there is no basis for thinking attempting it is a good option. Or, more clearly: by not mentioning uncertainty in this paragraph, I do believe you are arguing against a strawperson, as the presence of uncertainty is absolutely crucial to the argument.

On earning to give: Again, the arguments are very simplified here. A career being permissible or not is not a binary choice, true or false. It is a gradient, and it fluctuates and evolves over time, depending on how what you are asked to do on the job changes, and on how the ambient morality of yourself and society shifts. So the question is not "among all of these completely equivalent permissible options, should I choose the highest-paying one and earn to give?" but "what is the tradeoff I should be willing to make between the career being more morally iffy, and the positive impact I can have by donating from a larger income baseline?", and additionally, if you still just donate e.g. 10% of your income but your income is hi
2
Richard Y Chappell
1y
That's a non-sequitur. There's no inconsistency between holding a certain conclusion -- that "every decent person should share the basic goals or values underlying effective altruism" -- and "honestly engaging with criticisms". I do both. (Specifically, I engage with criticisms of EA principles; I'm very explicit that the paper is not concerned with criticisms of "EA" as an entity.) I've since reworded the abstract, as the "every decent person" phrasing seems to rub people the wrong way. But it is my honest view. EA principles = beneficentrism, and rejecting beneficentrism is morally indecent. That's a view I hold, and I'm happy to defend it. You're trying to assert that my conclusion is illegitimate or "dishonest", prior to even considering my supporting reasons, and that's frankly absurd.

Yes, and my "whole point" is to respond to this by observing that one's total evidence either supports the gamble of moving in a different direction, or it does not. You don't seem to have understood my argument, which is fine (I'm guessing you don't have much philosophy background), but it really should make you more cautious in your accusations. It's all about uncertainty -- that's what "in expectation" refers to. I'm certainly not attributing certainty to the proponent of systemic change -- that would indeed be a strawperson, but it's an egregious misreading to think that I'm making any such misattribution. (Especially since the immediately preceding paragraphs were discussing uncertainty, explicitly and at length!)

Again, I think this is just a result of your not being familiar with the norms of philosophy. Philosophers talk about true claims all the time, and it doesn't mean that they're failing to engage honestly with those who disagree with them.

Now this is a straw man! The view I defend there is rather that "we have good moral reasons to prefer better-paying careers, from among our permissible options, if we would donate the excess earnings." Reasons always need t
0
Richard Y Chappell
1y
This criticism suggests that you have not understood the point of the paper. I'm defending the core ideas behind EA. It's just a basic logical point that defending EA principles as such does not require defending the more specific views of particular EAs.

This is far too vague to be helpful (and so comes off as gratuitously insulting). What instances? Which of my specific counterarguments do you find unpersuasive, and why?

I do indeed conclude that the core principles of EA are undeniably correct. I never claim that any specific causes EAs "advocate for" are even correct at all, let alone obviously so.

I agree with that methodological claim. (I flag the brevity just to indicate that there is, of course, always more that could be said. But I wouldn't say what I do if I didn't think it was productive and important, even in its brief form.) I believe that I made convincing arguments that go beyond "vaguely pointing... and hoping for the best." Perhaps you could apply this same methodological principle to your own comments.
1
Moya
1y
I understand that my vague criticism was unhelpful; sadly, when posting I did not have enough time to point out specific instances, and I thought it would still be higher value to mention it in general than to not write anything at all. I will try to find the time now to write down my criticisms in more detail, and once ready I will comment on the question from Dr. David Mathers above, as he also asked for specifics (and by commenting both here and there, you will both be notified. Hooray.)
2
Sanjay
1y
I was confused by the first paragraph of Paul's comment.

* Is it saying that EA assumes that "the best" way to help people = "the most effective" way to help people?
* If so, could you please define what you mean by "best" and "effective"?

I get the impression Paul has some distinction in mind, but I don't understand what it is. (Paragraph copied below)
3
Paul Currion
1y
Yes, I am claiming that when Effective Altruism is defined as "trying to find the best ways", what it really means is "trying to find the most effective ways". As far as I can tell, the reasons for using "the best" are to avoid a circular definition ("Effective Altruism is trying to find the most effective ways to perform altruism") and to serve as a rhetorical device to deflect criticism ("Surely you can't object to trying to find the best ways of helping others?!"). Despite protests to the contrary, EA is a form of utilitarianism, and when the word "effective" is used it has generally been in the sense of "cost-effective". If you are not an effective altruist (which I am not), then cost-effectiveness - while important - is an instrumental value rather than an intrinsic value. Depending on your ethical framework, therefore, what you define as "the best way" to help people will differ from the effective altruist's.
2
Paul Currion
1y
p.s. I'm aware that Oxfam's programs are also currently decided by "somebody sitting in a comfortable office somewhere [who] has done some calculations", and I object to this as well while recognising that it may be inevitable given how the world works. My argument is that EA is no better than this current situation in principle, and may be worse than this *in practice* given that it could lead to the complete abandonment of entire countries.

Any chance we can have a google doc version to read/comment on?

1
OscarD
1y
I would also find this useful. The formatting makes me think it was made in LaTeX, though, which I think would make that hard.

Actually, on reading again the passage where you quote Goldring, I think you have been uncharitable to him. The passage says: 'Goldring says it would be wrong to apply the EA philosophy to all of Oxfam’s programmes because it could mean excluding people who most need the charity’s help.'

That could be read as expressing not the idea that more people in total get abandoned on EA views, which is indeed confused, but rather the (fairly philosophically mainstream!) prioritarian idea that, all things being equal, it is better to help people the worse off they currently ar... (read more)

2
Richard Y Chappell
1y
That doesn't fit well with his concern for "abandonment". It would imply instead that Prioritarian-Oxfam should pour all of their resources into South Sudan (abandoning Bangladeshi kids entirely). But yeah, probably worth mentioning this explicitly! It's part of a more general lesson I'd like the paper to bring out: namely, that one can of course optimize for things other than prima facie utilitarian impact, but even so, the results are going to look very different from the (thoroughly unoptimized) old-fashioned approaches to philanthropy.

I think the claim that your view doesn't license replaceability because it prioritizes currently existing people is a bit misleading. Unless the priority is infinite, there is presumably some level of well-being at which you swap (i.e. kill) all current people for a population with higher well-being at the same size. 'Oh, but not if they're just a little higher' doesn't seem that comforting. Of course, as you say in a footnote, you can appeal to side constraints here, but if you think side constraints can be overridden when the stakes are high enough (i.e.... (read more)

2
Richard Y Chappell
1y
Thanks, yeah that's definitely worth addressing. I was implicitly thinking that strict replaceability was the philosophically interesting/objectionable claim. The mere possibility of high-stakes swamping seems a bit more generic, and less distinctive to longtermism. E.g. neartermists may be equally committed to killing (or failing to save) one innocent in order to save a sufficiently large number of other, already-existing people. In general, not wanting to be sacrificed isn't a good reason to deny that others have value at all.  But yeah, worth mentioning this in the paper itself.
4
David Mathers
1y
My sense is that many people will think killing for replacement is distinctively objectionable, however many people are being added and however good their lives are, even though they accept that in extreme cases it's okay to kill one to save very many who already exist. To capture that intuition, you need more than the claim that you should prioritize current people's lives a lot; the priority has to be infinite.

Thanks for writing this! My sense from talking to non-EAs about longtermism is that most buy into asymmetric views of population ethics. I'm not sure what you say here will be very reassuring to them:

"Longtermism is a big tent, and includes room for “asymmetric” views of population ethics on which additional miserable lives are bad, but additional happy lives are not good but merely neutral. Such views still imply that we should be concerned about the risk of dystopian futures containing immense suffering (or “S-risks”). If there is a non-trivial chance of

... (read more)
4
David Mathers
1y
'that the level of suffering in the future could be much greater than the level of suffering at present' When you say "level" here, did you mean "amount"? If you think that people will suffer the same amount per person, or even less, in the future, but also that there will be far more future people than current people, and that you can improve things for a large fraction of the future people, then you can still get the result that you will reduce suffering more by working on longtermist stuff than by working on present-day stuff.
2
lilly
1y
Yes, I meant amount.
2
Richard Y Chappell
1y
Thanks, I appreciate the helpful suggestions!

"Unless critics seriously want billionaires to deliberately try to do less good rather than more, it's hard to make sense of their opposing EA principles on the basis of how they apply to billionaires."

I don't think the only alternative to wanting billionaires to actively try to do good is that you would be arguing for the obviously foolish idea that they should be trying to do less good. There might be many reasons you would not want to promote the ideas of billionaires 'doing more good'. E.g., you believe they have an inordinate amount of power and in ac... (read more)

2
Richard Y Chappell
1y
I think the full section addresses this (but let me know if you disagree), via the following: The general point (as stressed throughout the paper) being that we need to take total evidence into account. If there's evidence that "actively trying to do good they will ultimately do harm" then rationally doing good actually entails something different from what you're imagining when you describe them as "actively trying". EA principles would imply that we draw billionaires' attention to these risks, and encourage them to help in whatever ways are actually better in expectation.
1
Jamie Elsey
1y
Sure, I don't think what you're saying is technically incorrect; it's just that, rhetorically, I would read you as less sincere and therefore less convincing in engaging with critics if there seems to be some implication that comes across a bit like "unless people believe something stupid, their critiques don't make sense". But this may also be a reaction to seeing only the excerpted quote and not the whole text.

In this paper, I’ve argued that there are no good intellectual critiques of effective altruist principles. We should all agree that the latter are straightforwardly correct. But it’s always possible that true claims might be used to ill effect in the world. Many objections to effective altruism, such as the charge that it provides “moral cover” to the wealthy, may best be understood in these political terms.

I don’t think philosophers have any special expertise in adjudicating such empirical disagreements, so will not attempt to do so here. I’ll just note t

... (read more)
6
Richard Y Chappell
1y
Critics like Srinivasan, Crary, etc., pretty explicitly combine a political stance with criticism of EA's "utilitarian" foundations, so I'm not sure what's uncharitable about this? If they said something like, "EA has great principles, but we think the current orgs aren't doing a great job of implementing their own principles", that would be very different from what they actually say! (It would also mean I didn't need to address them in this paper, since I'm purely concerned with evaluating EA principles, not orgs etc.) But I guess it wouldn't hurt to flag the broader point that one could think current EA orgs are messing up in various ways while agreeing with the broader principles and wishing well for future iterations of EA (that better achieve their stated goals). Are there any other specific changes to my paper that you'd recommend here?

Critics like Srinivasan, Crary, etc., pretty explicitly combine a political stance with criticism of EA's "utilitarian" foundations

Yes, they're hostile to utilitarianism and to some extent agent-neutrality in general, but the account of "EA principles" you give earlier in the paper is much broader.

Effective altruism is sometimes confused with utilitarianism. It shares with utilitarianism the innocuous claim that, all else equal, it’s better to do more good than less. But EA does not entail utilitarianism’s more controversial claims. It does not entail hegemonic impartial maximizing: the EA project may just be one among many in a well-rounded life ...

I’ve elsewhere described the underlying philosophy of effective altruism as “beneficentrism”—or “utilitarianism minus the controversial bits”—that is, “the view that promoting the general welfare is deeply important, and should be amongst one’s central life projects.” Utilitarians take beneficentrism to be exhaustive of fundamental morality, but others may freely combine it with other virtues, prerogatives, or pro tanto duties.

Critics like Crary and Srinivasan (and this particular virtue-ethicsy line of critique should not be conflated wi... (read more)

4
Brendan Mooney
1y
Haven't read the draft, just this comment thread, but it seems to me the quoted section is somewhat unclear and that clearing it up might reduce the commenter's concerns. You write here about interpreting some objections so that they become "empirical disagreements". But I don't see you saying exactly what the disagreement is. The claim explicitly stated is that "true claims might be used to ill effect in the world" -- but that's obviously not something you (or EAs generally) disagree with. Then you suggest that people on the anti-EA side of the disagreement are "discouraging people from trying to do good effectively," which may be a true description of their behavior, but may also be interpreted to include seemingly evil things that they wouldn't actually do (like opposing whatever political reforms they actually support, on the basis that they would help people too well). That's presumably a misinterpretation of what you've written, but that interpretation is facilitated by the fact that the disagreement at hand hasn't been explicitly articulated.

Hey Richard!

Big fan of Good Thoughts :)

I'd love to edit/help! Is there a rough date that you'd want edits by?

~ Saul Munn

3
Richard Y Chappell
1y
Hi Saul, any time this month would be great.  Thanks!