Richard Y Chappell🔸

Associate Professor of Philosophy @ University of Miami
5410 karma · Joined
www.goodthoughts.blog/
Interests:
Bioethics

Bio

Academic philosopher, co-editor of utilitarianism.net, writes goodthoughts.blog

🔸10% Pledge #54 with GivingWhatWeCan.org

Comments (325)

I like the hybrid approach, and discuss its implications for replaceability a bit here. (Shifting to the intrapersonal case: those of us who reject preference theories of well-being may still recognize reasons not to manipulate preferences, for example based on personal identity: the more you manipulate my values, the less the future person is me. To be a prudential benefit, then, the welfare gain has to outweigh the degree of identity loss. Moreover, it's plausible that extrinsic manipulations are typically more disruptive to one's degree of psychological continuity than voluntary or otherwise "natural" character development.)

It seems worth flagging that some instances of replacement seem clearly good! Possible examples include:

  • Generational turnover
  • Not blindly marrying the first person you fall in love with
  • Helping children to develop new interests

I guess even preference-affecting views will support instrumental replacement, i.e. where the new desire results in one's other desires being sufficiently better satisfied (even before counting any non-instrumental value from the new desire itself) to outweigh whatever was lost.

Norms = social expectations = psychological pressure. If you don't want any social pressure to take the 10% pledge (even among EAs), what you're saying is that you don't want it to be a norm.

Now, I don't think the pressure should be too intense or anything: some may well have good reasons for not taking the pledge. The pressure/encouragement from a username icon is pretty tame, as far as social pressures go. (Nobody is proposing a "walk of shame" where we all throw rotten fruit and denounce the non-pledgers in our midst!) But I think the optimal level of social pressure/norminess is non-zero, because I expect that most EAs on the margins would do better to take the pledge (that belief is precisely why I do want it to become more of a norm -- if I already trusted that the social environment was well-calibrated for optimal decisions here, we wouldn't need to change social norms).

So that's why I think it's good, on the Forum and elsewhere, to use the diamond to promote the 10% pledge.

To be clear:

(1) I don't think the audience "being familiar" with the pledge undercuts the reasons to want it to be more of a norm among EAs (and others).

(2) The possibility that something "might not be the right decision" for some people does not show that it shouldn't be a norm. You need to compare the risks of over-pledging (in the presence of a norm) to the risks of under-pledging (in the absence of a norm). I think we should be more worried about the latter. But if someone wants to make the comparative argument that the former is the greater risk, that would be interesting to hear!

I think that's kind of the whole point of Giving What We Can? It's trying to change social norms in a more generous direction, which requires public signaling from people who support (and follow) the proposed 10% norm. (Impact doesn't just come from sharing abstract info - as if anyone were strictly unaware that it would be possible for them to donate 10% - but also from social conformity, wanting to be more like people we like and respect, etc.) I think the diamond icon is great for this purpose.

Sometimes people use "virtue signal" in a derogatory sense, meaning a kind of insincere signal of pseudo-virtue: prioritizing looking good over doing good. But it doesn't have to be like that. Some norms are genuinely good -- I think this is one -- and signaling your support for those norms is a genuinely good thing!

Fair point - updated accordingly. (The core point remains.)

re: "being an actual cause", is there an easy way to bracket the (otherwise decisive-seeming) vainglory objection that MacAskill raises in DGB of the person who pushes a paramedic aside so that he can instead be the actual (albeit less competent) cause of saving a life?

> we had several completely different vaccines ready within just a single year.

Possibly worth flagging: we had the Moderna vaccine within two days of genome sequencing - a month or so before the first confirmed COVID death in the US. Waiting a whole year to release it to the public was a policy choice, not a scientific constraint. (Which is not to say that scaling up production would have been instant. Just that it could have been done a lot faster, if the political will was there.)

My impressions: I was very struck by how intellectually incurious and closed-minded Alice Crary was about EA (though this wasn't surprising given her written work on the topic). She would respond to Peter's points by saying things like, "That all sounds very reasonable, so you just must not really be an EA, as I use the term." I had the strong impression she'd never actually spoken to an EA before.

Her overarching framing took the form of a dilemma: either EA is incapable of considering any evidence beyond RCTs (this seemed to be her core definition of EA), or else there is nothing distinctive about EA. Her underlying reasoning, as emerged at a few points, was that EA doesn't tend to fund the (self-evidently good) social justice advocacy of her political allies. The only possible explanation is that EA is blinded by an RCT-obsessed methodology. (Extrapolating a bit from her written work: Demands for evidence constitute moral corruption because proper moral sensitivity lets you just see that her friends' work ought to be funded.) EA is grievously harmful (again, by definition), because it shifts attention and resources (incl. the moral passions of the smartest college students) away from social justice activists. As such, it ought to be "abolished".

In my question, I tried to press her on whether she saw any "moral risks" to her opposition to EA. (In particular, since less effectiveness-focus would predictably lead to fewer donations to anti-malarial charities, is she at all concerned that her advocacy could result in more children dying of malaria?) She offered a politician-style non-response that in no way acknowledged that trade-offs are real, or that there could be any possible downsides to abolishing EA. I was not impressed.

Fortunately, Peter did a great job of pushing back against all this, clarifying that:

  • RCTs are great, but obviously not the only kind of evidence. EA is about evidence, not just about RCTs. (Some projects can be quite speculative. Peter stressed that expected value reasoning can be quite open to "moonshots".) Still, it is important to do followups and be guided by evidence of some sort because otherwise you risk overinvesting in debacles like Playpumps.
  • If there's evidence that justice-oriented groups are doing work that really does a lot of good, then he'd expect EA orgs to be open to assessing and funding that.
  • Before GiveWell came along, charities weren't really evaluated for effectiveness. Charity Navigator used financial metrics like overhead ratios which are entirely disconnected from what actual impact the charity's programs are having. Insofar as others are now starting to follow GiveWell's lead and consider effectiveness, EA deserves credit for that.

You might like my 'Nietzschean Challenge to Effective Altruism':

The upshot: I’ll argue that there’s some (limited) overlap between the practical recommendations of Effective Altruism (EA) and Nietzschean perfectionism, or what we might call Effective Aesthetics (EÆ). To the extent that you give Nietzschean perfectionism some credence, this may motivate (i) prioritizing global talent scouting over mere health interventions alone, (ii) giving less priority to purely suffering-focused causes, such as animal welfare, (iii) wariness towards traditional EA rhetoric that’s very dismissive of funding for art museums and opera houses, and (iv) greater support for longtermism, but with a strong emphasis on futures that continue to build human capacities and excellences, and concern to avoid hedonistic traps like “wireheading”.

P.S. I think you mean to talk about 'ethical theory'. 'Metaethics' is a different philosophical subfield entirely.

To be clear, I'm all in favor of aiming higher! Just suggesting that you needn't feel bad about yourself if/when you fall short of those more ambitious goals (in part, for the epistemic benefits of being more willing to admit when this is so).

I agree with all this. If any Forum moderators are reading this, perhaps they could share instructions for how to update our display names? (Bizarrely, I can't find any way to do this when I go to edit my profile.)
