Richard Y Chappell

Associate Professor of Philosophy @ University of Miami
5110 karma · Joined
Interests: Bioethics

Bio

Academic philosopher, co-editor of utilitarianism.net, blogs at https://rychappell.substack.com/

Comments (312)

I think one could reasonably judge GiveWell-style saving and improving lives to constitute reliable global capacity growth, and (if very skeptical of attempts at more "direct", explicitly longtermist long-shots) think that this category of intervention is among our best longtermist options. I suggest something along these lines as a candidate EA "worldview" here.

I'd be curious to hear more about longtermist reasons to view GiveWell top charities as net-negative.

Yeah, that's interesting, but the argument "we should consider just letting people die, even when we could easily save them, because they eat too much chicken" is very much not what anti-EAs like Leif Wenar have in mind when they talk about GiveWell being "harmful"!

(Aside: have you heard anyone argue for domestic policies, like cuts to health care / insurance coverage, on the grounds that more human deaths would actually be a good thing? It seems to follow from the view you mention [not your view, I understand], but one doesn't hear that implication expressed so often.)

That seems reasonable to me! I'm most confident that the underlying principles of effective altruism are important and good, and you seem to agree on that. I agree there's plenty of room for people to disagree about speculative cause prioritization, and if you think the EA movement is getting things systematically wrong there, then it makes sense to (in effect, if not in these words) "do EA better" by just sticking with GiveWell or whatever you think is actually best.

Apologies for the delay! I've now re-posted the amalgamated full text of the two misdirection posts here, and the interlude on 'What EA means to me' here.

Hi Noah, since I drew the "potential rebuttal" to your attention, could you update your post with the link? Good citation practice :-)

Also, fwiw, I find the clickbaity title rather insulting. It's not really true that being willing to revise some commonsense moral assumptions in light of powerful arguments automatically makes one "bad at moral philosophy". It really depends on the strength of the arguments, and how counterintuitive it would be to reject those premises. Common sense is inconsistent, and the challenge of moral philosophy is to work out how best to resolve the conflicts. You can't do that without actually looking into the details.

Ok, so it sounds like your comparisons with GiveWell were an irrelevant distraction, given that you understand the point of "hits-based giving". Instead, your real question is: "why not [hire] a cheap developer literally anywhere else?"

I'm guessing the literal answer to that question is that no such cheaper developer applied for funding in the same round with an equivalent project. But we might expand upon your question: should a fund like LTFF, rather than just picking from among the proposals that come to them, try taking some of the ideas from those proposals and finding different (perhaps cheaper) PIs to develop them?

It's possible that a more active role in developing promising longtermist projects would be a good use of their time. But I don't find it as obvious as you seem to. A few thoughts that immediately spring to mind:

(i) My sense is that, at the time, finding grantmakers was itself a major bottleneck, and given that longtermism seemed more talent-constrained than money-constrained, having key people spend extra time just to save some money presumably would not have seemed a wise tradeoff.

(ii) A software developer who comes to you with an idea presumably has a deeper understanding of it, and so could be expected to do a better job of it, than an external contractor to whom you have to communicate the idea. (That is, external contractors increase the risk of project failure due to miscommunication or misunderstanding.)

(iii) Depending on the details, e.g. how specific the idea is, taking an idea from someone's grant proposal to a cheaper PI might constitute intellectual theft. It certainly seems uncooperative / low-integrity, and not a good practice for grant-makers who want to encourage other high-skilled people with good ideas to apply to their fund!

To the downvoters: my understanding of negative karma is that it communicates "this comment is a negative epistemic contribution; its existence is bad for the discussion." I can't imagine that anyone of intellectual honesty seriously believes that of my comment. Please use 'disagree' votes to communicate disagreement.

[Edit to add: I don't really think people should be downvoting Matthew's comments either. It's a fine conversation to be having!]

I mean, there are pretty good theoretical reasons for thinking that anything that's genuinely positive for longtermism has higher EV than anything that isn't? Not really sure what's gained by calling the view "crass". (The wording may be, but you came up with the wording yourself!)

It sounds like you're just opposed to strong longtermism. Which is fine, many people are. But then it's weird to ask questions like, "Can't we all agree that GiveWell is better than very speculative longtermist stuff?" Like, no, obviously strong longtermists are not going to agree with that! Read the paper if you really don't understand why.

> These grants have caused reputational harm to the movement, and that should have been easy to foresee. What has been the hit to fundraising for EA global health and animal welfare causes from the fallout from bad longtermism bets (FTX/SBF included)?

I really don't think it's fair to conflate speculative-but-inherently-innocent "bets" of this sort with SBF's fraud. The latter sort of norm-breaking is positively threatening to others - an outright moral violation, as commonly understood. But the "reputational harm" of simply doing things that seem weird or insufficiently well-motivated to others seems very different to me, and probably not worth going to extremes to avoid (or else you can't do anything that doesn't sufficiently appeal to normies).

Perhaps another way to put it is that even longtermists have obvious reasons to oppose SBF's fraud (my post that you linked to suggested that it was negative-EV for longtermist goals). But I think strong longtermists should generally feel perfectly comfortable defending speculative grants that are positive-EV and the only "risk" is that others don't judge them so positively. People are allowed to make different judgments (as long as they don't harm anyone). Let a thousand flowers bloom, and all that.

Insofar as your real message is, "Stop doing stuff that looks weird, even if it is perfectly defensible by longtermist lights, simply because I have neartermist values and disagree with it," then that just doesn't actually seem like a reasonable ask?

To answer your second question: I think it's in the nature of seeking "systemic change" that it depends upon speculative judgment-calls, rather than the sort of robust evidence one gets for global health interventions.

I don't think that "crafting a hypothetical" is enough. You need to exercise good judgment to put longtermism into practice. (This is a point I've previously made in response to Eric Schwitzgebel too.) Is any given attempt at longtermist outreach more likely to sway (enough) people positively or negatively? That's presumably what the grantmakers have to try to assess, on case-by-case basis. It's not like there's an algorithm they can use to determine the answer.

Insofar as you're assuming that nothing could possibly be worth doing unless supported by the robust evidence base of global health interventions, I think you're making precisely the mistake that the "systemic change" critics (mistakenly) accuse EA of.

> The post-hoc rationalization is referring to the "Note that this grant was made at the very peak of the period of very abundant (partially FTX-driven) EA funding where finding good funding opportunities was extremely hard."
>
> If it wasn't a good opportunity, why was it funded?

That doesn't sound like post-hoc rationalization to me. They're just providing info on how the funding bar has shifted. A mediocre opportunity could be worth funding when the bar is low (as long as the risks are also low).

In my comment, I wrote:

> it seems prima facie reasonable to think both that (i) a computer game could reach a different audience from youtube videos, and (ii) raising awareness of key longtermist issues is a helpful first step for making broader progress on them.

This seems like the opposite of a "post-hoc rationalization"? I'm drawing on general principles that I apply similarly to any like case. I just think it's very hard to assess which speculative longtermist efforts are genuinely good bets, and even silly-sounding ones like a computer game could, given the stakes, be better in expectation than the more-certain but vastly lower-stakes wins found in Global Health & Development. It really depends upon how promising an avenue it seemed for raising awareness of AI risk.
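
To make the expected-value comparison concrete (with purely illustrative numbers of my own, not estimates from any actual grant evaluation): suppose a speculative awareness-raising project has just a one-in-a-million chance of helping avert an existential catastrophe that would otherwise foreclose on the order of 10^16 future lives, while a similarly-sized GiveWell grant saves around 10^3 lives with near-certainty. Then:

EV(speculative project) = 10^-6 × 10^16 = 10^10 expected lives
EV(GiveWell-style grant) ≈ 1 × 10^3 = 10^3 expected lives

Of course, everything turns on whether that probability estimate is even roughly right, which is exactly the case-by-case judgment call I'm saying grantmakers have to make.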

If you have a substantive argument against the principles I'm relying on, I'm all ears! But just calling them "rot" isn't particularly convincing. (It just makes me think that you don't understand where I, and others who think similarly, are coming from.)
