All of Richard Y Chappell's Comments + Replies

Here's how I picture the axiological anti-realist's internal monologue: 

"The point of liberal intuitions is to prevent one person from imposing their beliefs on others. I care about my axiological views, but, since I have these liberal intuitions, I do not feel compelled to impose my views on others. There's no tension here."

By contrast, here's how I picture the axiological realist:

"I have these liberal intuitions that make me uncomfortable with the thought of imposing my views on others. At the same time, I know what the objectively correct axiology

... (read more)

Thanks for writing this! I find it really striking how academic critics of longtermism (both Thorstad and Schwitzgebel spring to mind here) don't adequately consider model uncertainty. It's something I also tried to flag in my old post on 'X-risk agnosticism'.

Tarsney's epistemic challenge paper is so much better, precisely because he gets into higher-order uncertainty (over possible values for the crucial parameter "r", which includes the persisting risk of extinction in the far future, despite our best efforts).
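To give a sense of why uncertainty over "r" dominates everything else, here is a toy sketch (my gloss, not Tarsney's actual model): suppose a constant per-period risk r that some nullifying event (e.g. extinction) wipes out future value. The chance of surviving t periods is then (1-r)^t, so the expected number of valuable future periods is

\[
\sum_{t=1}^{\infty} (1-r)^t \;=\; \frac{1-r}{r} \;\approx\; \frac{1}{r} \quad \text{for small } r.
\]

Expected future value thus scales roughly inversely with r: r = 0.01 per century yields about 100 expected centuries, while r = 0.0001 yields about 10,000. Any credence spread over small values of r will have its expectation dominated by the low-r tail, which is why higher-order uncertainty over this single parameter does so much work.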

3
Matthew Rendall
2d
Thanks, Richard! I've just had a look at your post and see you've anticipated a number of the points I made here. I'm interested in the problem of model uncertainty, but most of the treatments of it I've found have been technical, which isn't much help to a maths illiterate like me. Some of the literature on moral uncertainty is relevant, and there's an interesting treatment in Toby Ord, Rafaela Hillerbrand, and Anders Sandberg's paper here. But I'd be glad to learn of other philosophical treatments if you or others can recommend any.

In general (whether realist or anti-realist), there is "no clear link" between axiological certainty and oppressive behavior, precisely because there are further practical norms (e.g. respect for rights, whether instrumentally or non-instrumentally grounded) that mediate between evaluation and action.

You suggest that it "seems only intuitive/natural" that an anti-realist should avoid being "too politically certain that what they believe is what everyone ought to believe." I'm glad to hear that you're naturally drawn to liberal tolerance. But many human bei... (read more)

4
Wei Dai
2d
I of course also think that philosophical progress, done right, is a good thing. However I also think genuine philosophical progress is much harder than it looks (see Some Thoughts on Metaphilosophy for some relevant background views), and therefore am perhaps more worried than most about philosophical "progress", done wrong, being a bad thing.

I agree with what you say in the last paragraph, including the highlighting of autonomy/placing value on it (whether in a realist or anti-realist way).

I'm not convinced by what you said about the effects of belief in realism vs anti-realism.

If you hold fixed people's first-order views, not just about axiology but also about practical norms, then their metaethics makes no further difference.

Sure, but that feels like it's begging the question.

Let's grant that the people we're comparing already have liberal intuitions. After all, this discussion started in a ... (read more)

We just wrote a textbook on the topic together (the print edition of utilitarianism.net)! In the preface, we briefly relate our different attitudes here: basically, I'm much more confident in the consequentialism part, but sympathetic to various departures from utilitarian (and esp. hedonistic) value theory, whereas Will gives more weight to non-consequentialist alternatives (more for reasons of peer disagreement than any intrinsic credibility, it seems), but is more confident that classical hedonistic utilitarianism is the best form of consequentialism.

I agree it'd be fun for us to explore the disagreement further sometime!

This is really sad news. I hope everyone working there has alternative employment opportunities (far from a given in academia!).

I was shocked to hear that the philosophy department imposed a freeze on fundraising in 2020. That sounds extremely unusual, and I hope we eventually learn more about the reasons behind this extraordinary institutional hostility. (Did the university shoot itself in the financial foot for reasons of "academic politics"?)

A minor note on the forward-looking advice: "short-term renewable contracts" can have their place, especially for... (read more)

On your second point, FHI had at least ~£10m sitting in the bank in 2020 (see below, from the report). So the fundraising freeze, while unusual, wasn't terminal. A rephrasing of your question is: "What administrative and organisational problems at FHI could possibly have prompted the Faculty to take the unusual step of a hiring and fundraising freeze in 2020, and why could it not be resolved over the next two to three years?"

"Open Philanthropy became FHI’s most important funder, making two major grants: £1.6m in 2017, and £13.3m in 2018. Indeed, the donation

... (read more)

I don't necessarily disagree with any of that, but the fact that you asserted it implicates that you think it has some kind of practical relevance, which is where I might want to disagree.

I think it's fundamentally dishonest (a kind of naive instrumentalism in its own right) to try to discourage people from having true beliefs because of faint fears that these beliefs might correlate with bad behavior.

I also think it's bad for people to engage in "moral profiling" (cf. racial profiling), spreading suspicion about utilitarians in general based on very speculative... (read more)

5
David Mathers
7d
Actually, I have a lot of sympathy with what you are saying here. I am ultimately somewhat inclined to endorse "in principle, the ends justify the means, just not in practice" over at least a fairly wide range of cases. I (probably) think in theory you should usually kill one innocent person to save five, even though in practice anything that looks like doing that is almost certainly a bad idea, outside artificial philosophical thought experiments and maybe some weird but not too implausible scenarios involving war or natural disaster. But at the same time, I do worry a bit about bad effects from utilitarianism because I worry about bad effects from anything. I don't worry too much, but that's because I think those effects are small, and anyway there will be good effects of utilitarianism too. But I don't think utilitarians should be able to react with outrage when people say plausible things about the consequences of utilitarianism. And I think people who worry about this more than I do on this forum are generally acting in good faith. And yeah, I agree utilitarians shouldn't (in any normal context) lie about their opinions. 

fwiw, I wouldn't generally expect "high confidence in utilitarianism" per se to be any cause for concern. (I have high confidence in something close to utilitarianism -- in particular, I have near-zero credence in deontology -- but I can't imagine that anyone who really knows how I think about ethics would find this the least bit practically concerning.)

Note that Will does say a bit in the interview about why he doesn't view SBF's utilitarian beliefs as a major explanatory factor here (the fraud was so obviously negative EV, and the big lesson he took from... (read more)

Note that Will does say a bit in the interview about why he doesn't view SBF's utilitarian beliefs as a major explanatory factor here (the fraud was so obviously negative EV, and the big lesson he took from the Soltes book on white-collar crime was that such crime tends to be more the result of negligence and self-deception than deliberate, explicit planning to that end).

I disagree with Will a bit here, and think that SBF's utilitarian beliefs probably did contribute significantly to what happened, but perhaps somewhat indirectly, by 1) giving him large... (read more)

3
David Mathers
7d
I don't necessarily disagree with most of that, but I think it is ultimately still plausible that people who endorse a theory that obviously says, in principle, the ends can justify bad means are somewhat (plausibly not very much, though) more likely to actually do bad things with an ends-justifies-the-means vibe. Note that this is an empirical claim about what sort of behaviour is actually more likely to co-occur with endorsing utilitarianism or consequentialism in actual human beings. So it's not refuted by "the correct understanding of consequentialism mostly bars things with an ends-justifies-the-means vibe in practice" or "actually, any sane view allows that sometimes it's permissible to do very harmful things to prevent a many orders of magnitude greater harm". And by "somewhat plausible" I mean just that. I wouldn't be THAT shocked to discover this was false; my credence is like 95% maybe? (1 in 20 things happen all the time.) And the claim is correlational, not causal (maybe endorsement of utilitarianism and ends-justifies-the-means type behaviour are both caused partly by prior intuitive endorsement of ends-justifies-the-means type behaviour, and adopting utilitarianism doesn't actually make any difference, although I doubt that is entirely true).

Yes, I agree it seems important to have marketers and PR people to craft persuasive messaging for mass audiences. That's not what I'm trying to do here, and nor do I think it would make any sense for me to shift into PR -- it wouldn't be a good personal fit. My target audience is academics and "academic-adjacent" audiences, and as a philosopher my goal is to make clear what's philosophically justified, not to manipulate anyone through non-rational means. I think this is an important role, for reasons explained in some of the footnotes to my posts there. But I also agree it's not the only important role, and it would plausibly be good for EA to additionally have more mass-market appeal.  It takes all sorts.

fyi, I weakly downvoted this because (i) you seem like you're trying to pick a fight and I don't think it's productive; there are familiar social ratcheting effects that incentivize exaggerated rhetoric on race and gender online, and I don't think we should encourage that. (There was nothing in my comment that invited this response.) (ii) I think you're misrepresenting Trace. (iii) The "expand your moral circle" comment implies, falsely, that the only reason one could have for tolerating someone with bad views is that you don't care about those harmed by t... (read more)

4
Yarrow B.
14d
Now I wonder if you’re actually familiar with Hanania’s white supremacist views? (See here, for example.) 

I'd just like to clarify that my blogroll should not be taken as a list of "worthy figure[s] who [are] friend[s] of EA"!  They're just blogs I find often interesting and worth reading. No broader moral endorsement implied!

fwiw, I found TracingWoodgrains' thoughts here fairly compelling.

ETA, specifically:

I have little patience with polite society, its inconsistencies in which views are and are not acceptable, and its games of tug-of-war with the Overton Window. My own standards are strict and idiosyncratic. If I held everyone to them, I'd live in a lon

... (read more)
9
Yarrow B.
14d
I find it so maddeningly short-sighted to praise a white supremacist for being "respectful". White supremacists are not respectful to non-white people! Expand your moral circle! A recurring problem I find with replies to criticism of associating with white supremacist figures like Hanania is a complete failure to empathize with or understand (or perhaps to care?) why people are so bothered by white supremacy. Implied in white supremacy is the threat of violence against non-white people. Dehumanizing language is intimately tied to physical violence against the people being dehumanized. White supremacist discourse is not merely part of some kind of entertaining parlour room conversation. It’s a bullet in a gun.

Thanks, that's very helpful!  I do want my points to be forceful, but I take your point that overdoing it can be counterproductive.  I've now slightly moderated that sentence to instead read, "Wenar is here promoting a general approach to practical reasoning that is systematically biased (and predictably harmful as a result): a plain force for ill in the world."

Right, that's why I also take care to emphasize that responsible criticism is (pretty much) always possible, and describe in some detail how one can safely criticize "Good Things" without being susceptible to charges of moral misdirection.

Thanks, that's helpful feedback. I guess I was too focused on making it concise, rather than easily understood.

1
yanni kyriacos
21d
no problemo!

This is an important point. People often confuse harm/benefit asymmetries with doing/allowing asymmetries. Wenar's criticism seems to rest on the latter, not the former. Note that if all indirect harms are counted within the constraint against causing harm, almost all actions would be prohibited. (And on any plausible restriction, e.g. to "direct harms", it would no longer be true that charities do harm. Wenar's concerns involve very indirect effects. I think it's very unlikely that there's any consistent and plausible way to count these as having dispropo... (read more)

I found it a bit hard to discern what constructive points he was trying to make amidst all the snark. But the following seemed like a key passage in the overall argument:

Making responsible choices, I came to realize, means accepting well-known risks of harm. Which absolutely does not mean that “aid doesn’t work.” There are many good people in aid working hard on the ground, often making tough calls as they weigh benefits and costs. Giving money to aid can be admirable too—doctors, after all, still prescribe drugs with known side effects. Yet what no one in

... (read more)

I was disappointed GiveDirectly wasn't mentioned given that seems to be more what he would favour. The closing anecdote about the surfer-philosopher donating money to Bali seems like a proto-GiveDirectly approach but presumably a lot less efficient without the infrastructure to do it at scale.

There was meant to be an "all else equal" clause in there (as usually goes without saying in these sorts of thought experiments) -- otherwise, as you say, the verdict wouldn't necessarily indicate underlying non-utilitarian concerns at all.

Perhaps easiest to imagine if you modify the thought experiment so that your psychology (memories, "moral muscles", etc.) will be "reset" after making the decision. I'm talking about those who would insist that you still ought to save the one over the two even then -- no matter how the purely utilitarian considerations play out.

1
Sam Battis
1mo
Yeah honestly I don't think there is a single true deontologist on Earth. To say anything is good or addresses the good, including deontology, one must define the "good" aimed at. I think personal/direct situations entail a slew of complicating factors that a utilitarian should consider. As a response to that uncertainty, it is often rational to lean on intuition. And, thus, it is bad to undermine that intuition habitually. "Directness" inherently means higher level of physical/emotional involvement, different (likely closer to home) social landscape and stakes, etc. So constructing an "all else being equal" scenario is impossible. Related to initial deontologist point: when your average person expresses a "directness matters" view, it is very likely they are expressing concern for these considerations, rather than actually having a diehard deontologist view (even if they use language that suggests that).

It's fine to offer recommendations within suboptimal cause areas for ineffective donors. But I'm talking about worldview diversification for the purpose of allocating one's own (or OpenPhil's own) resources genuinely wisely, given one's (or: OP's) warranted uncertainty.

It's always better for a view to be justified than to be unjustified? (Makes it more likely to be true, more likely to be what you would accept on further / idealized reflection, etc.)

The vast majority of worldviews do not warrant our assent. Worldview diversification is a way of dealing with the sense that there is more than one that is plausibly well-justified, and warrants our taking it "into account" in our prioritization decisions. But there should not be any temptation to extend this to every possible worldview. (At the limit: some are outright bad or evil. More moderately: others simply have very little going for them, and would not be worth the opportunity costs.)

1
Halffull
1mo
This just seems like you're taking on one specific worldview and holding every other worldview up to it to see how it compares. Of course this is an inherent problem with worldview diversification: how to define what counts as a worldview and how to choose between them. But still, intuitively, if your meta-worldview screens out the vast majority of real-life views, that seems undesirable. The meta-worldview that coherency matters is important, but it should be balanced against other meta-worldviews, such as that what matters is how many people hold a worldview, or how much harmony it creates.

I was replying to your sentence, "I'd guess most proponents of GHD would find (1) and (2) particularly bad."

4
Rohin Shah
1mo
Oh I see, sorry for misinterpreting you.

I don't really know enough about the empirics to add much beyond the possible "implications" flagged at the end of the post. Maybe the clearest implication is just the need for further research into flow-through effects, to better identify which interventions are most promising by the lights of reliable global capacity growth (since that seems a question that has been unduly neglected to date).

Thanks for flagging the "sandboxing" argument against AW swamping of GHD. I guess a lot depends there on how uncertain the case for AW effectiveness is. (I didn't ha... (read more)

Thanks, yeah, these are important possibilities to consider!

Thanks, I agree that those are possible arguments for the opposing view. I disagree that anyone needs to "prove" their position before believing it. It's quite possible to have justified positive credence in a proposition even if it cannot be decisively proven (as most, indeed, cannot). Every possible position here involves highly contestable judgment calls. Certainly nothing that you've linked to proves that human life is guaranteed to be net-negative, but you're still entitled to (tentatively) hold to such pessimism if your best judgment supports that conclusion. Likewise for my optimism.

1
Corentin Biteau
1mo
Yes, "prove" is too strong here, that's not the term I should have used. And human life is not guaranteed to be net-negative.  But I often see the view that some people assume human action in the future to be net-positive, and I felt like adding a counterpoint to that, given the large uncertainties.

I'm curious why you think Singer would agree that "the imperative to save the child's life wasn't in danger of being swamped by the welfare impact on a very large number of aquatic animals." The original thought-experiment didn't introduce the possibility of any such trade-off. But if you were to introduce this, Singer is clearly committed to thinking that the reason to save the child (however strong it is in isolation) could be outweighed.

Maybe I'm misunderstanding what you have in mind, but I'm not really seeing any principled basis for treating "saving ... (read more)

3
David T
1mo
Because as soon as you start thinking the value of saving or not saving life is [solely] instrumental in terms of suffering/output tradeoffs, the basic premise of his argument (children's lives are approximately equal, no matter where they are) collapses. And the rest of Singer's actions also seem to indicate that he didn't and doesn't believe that saving sentient lives is in danger of being swamped by cost-effective modest suffering reduction for much larger numbers of creatures whose degree of sentience he also values.

The other reason why I've picked up on there being no quantification of any value to human lives is that you've called your bucket "pure suffering reduction", not "improving quality of life", so it's explicitly not framed as a comprehensive measure of welfare benefit to the beneficiary (whose death ceases their suffering). The individual welfare upside to survival is absent from your framing, even if it wasn't from your thinking.

If we look at broader measures like hedonic enjoyment or preference satisfaction, I think it's much easier for humans to dominate. Relative similarity of how humans and animals experience pain isn't necessarily matched by how they experience satisfaction. So any conservative framing for the purpose of worldview diversification and interspecies tradeoffs involves separate "buckets" for positive and negative valences (which people are free to combine if they actually are happy with the assumption of hedonic utility and valence symmetry).

And yes, I'd also have a separate bucket for "saving lives", which again people are free to attach no additional weight to, and to selectively include and exclude different sets of creatures from. This means that somebody can prioritise pain relief for 1000 chickens over pain relief for 1 elderly human, but still pick the human when it comes down to whose life (or lives) to save, which seems well within the bounds of reasonable belief, and similar to what a number of people who've thought very carefully

Yeah, I don't think most people's motivating reasons correspond to anything very coherent. E.g. most will say it's wrong to let the child before your eyes drown even if saving them prevents you from donating enough to save two other children from drowning. They'd say the imperative to save one child's life isn't in danger of being swamped by the welfare impact on other children, even. If anyone can make a coherent view out of that, I'll be interested to see the results. But I'm skeptical; so here I restricted myself to views that I think are genuinely well-justified. (Others may, of course, judge matters differently!)

3
Sam Battis
1mo
2min coherent view there: the likely flowthrough effects of not saving a child right in front of you, on your psychological wellbeing, community, and future social functioning, especially compared to the counterfactual, are drastically worse than those of not donating enough to save two children on average; and the powerful intuition one could expect to feel in such a situation, saying that you should save the child, is so strong that to numb or ignore it is likely to damage the strength of that moral intuition or compass, which could be wildly imprudent. In essence:

- psychological and flow-through effects of helping those in proximity to you are likely undervalued in extreme situations where you are the only one capable of mitigating the problem
- community flow-through effects in developed countries regarding altruistic social acts in general may be undervalued, especially if they uniquely foster one's own well-being or moral character through exercise of a "moral muscle"
- it is imprudent to ignore strong moral intuition, especially in emergency scenarios, and it is important to make a habit of not ignoring strong intuition (unless further reflection leads to the natural modification/dissipation of that intuition)

To me, naive application of utilitarianism often leads to underestimating these considerations.
5
huw
1mo
Coherence may not even matter that much, I presume that one of Open Philanthropy's goals in the worldview framework is to have neat buckets for potential donors to back depending on their own feelings. I also reckon that even if they don't personally have incoherent beliefs, attracting the donations of those that do is probably more advantageous than rejecting them.
3
David T
1mo
I agree that a lot of people's motivating reasons don't correspond to anything particularly coherent, but that's why I highlighted that even the philosopher who conceived the original thought experiment specifically to argue that the "being in front of you" component didn't matter (who happens to be an outspoken anti-speciesist hedonic utilitarian) appears to have concluded that [human] lifesaving is intrinsically valuable, to the point that the approximate equivalence of the value of lives saved swamped considerations about relative suffering or capabilities.

Ultimately the point was less about the quirks of thought experiments and more that "saving lives" is for many people a different bucket with different weights from "ending suffering", and has only marginal overlap with "capacity growth". And a corollary of that is that they can attach a reasonably high value to the suffering of an individual chicken and still think saving a life [of a human] is equal to or more valuable than equivalent spend on activism that might reduce the suffering of a relatively large number of chickens - it's a different 'bucket' altogether.

(FWIW I think most people find a scenario in which it's necessary to allow the child to drown to raise enough money to save two children implausible; and perhaps substitute a more plausible equivalent where the person makes a one-off donation to an effective medical charity as a form of moral licensing for letting the one child drown...)

Thanks! I should clarify that I'm trying to offer a principled account that can yield certain verdicts that happen to align with commonsense. But I'm absolutely not trying to capture common-sense reasoning or ideas (I think those tend to be hopelessly incoherent).

So yes, my framework assumes that long-run effects matter. (I don't think there's any reasonable basis for preferring GHD over AW if you limit yourself to nearterm effects.) But it allows that there are epistemic challenges to narrowly targeted attempts to improve the future (i.e. the traditional ... (read more)

I don't have anything as precise as a definition, but something in the general vicinity of [direct effects on individual welfare + indirect effects on total productivity, which can be expected to improve future welfare].

It's not a priori that GiveWell is justified by any reasonable EA framework. It is, and should be, open to dispute. So I see the debate as a feature, not a bug, of my framework. (I personally do think it is justified, and try to offer a rough indication of why, here. But I could be wrong on the empirics. If so, I would accept the implicatio... (read more)

Suppose someone were to convince you that the interventions GiveWell pursues are not the best way to improve "global capacity", and that a better way would be to pursue more controversial/speculative causes like population growth or long-run economic growth or whatever. I just don't see EA reorienting GHW-worldview spending toward controversial causes like this, ever. The controversial stuff will always have to compete with animal welfare and AI x-risk. If your worldview categorization does not always make the GHW worldview center on non-controversial stuf... (read more)

I think it may be important to draw a theory/practice distinction here. It seems completely undeniable in theory (or in terms of what is fundamentally preferable) that instrumental value matters, and so we should prefer that more productive lives be saved (otherwise you are implicitly saying to those who would be helped downstream that they don't matter). But we may not trust real-life agents to exercise good judgment here, or we may worry that the attempts would reinforce harmful biases, and so the mere attempt to optimize here could be expected to do mor... (read more)

I'm suggesting that they should change their honest beliefs. They're at liberty to burn their money too, if they want. But the rest of us are free to try to convince them that they could do better. This is my attempt.

One could argue that with somewhat fewer kids, the society could provide better nutrition, education, health care, and other inputs that are rather important to adult capacity and flourishing. 

I think that's an argument worth having. After all, if the claim were true then I think that really would justify shifting attention away from infant mortality reduction and towards these "other inputs" for promoting human flourishing. (But I'm skeptical that the claim is true, at least on currently relevant margins in most places.)

Oops, definitely didn't mean any derogation -- I'm a big fan of moonshots, er, speculative high-uncertainty (but high EV) opportunities! [Update: I've renamed them to 'High-impact long-shots'.]

I disagree on "capacity growth" through: that one actually has descriptive content, which "common-sense global interventions" lacks. (They are interventions to achieve what, exactly?)

2
RedStateBlueState
1mo
How are you defining global capacity, then? This is currently being argued in other replies better than I can, but I think there's a good chance that the most reasonable definition implies optimal actions very different from GiveWell's. Although I could be wrong. I don't really think the important part is the metric - the important part is that we're aiming for interventions that agree with common sense and don't require accepting controversial philosophical positions (beyond rejecting pro-local bias, I guess).

I'm more interested in (1), but how we justify that could have implications for (2).

I guess I have (i) some different empirical assumptions, and (ii) some different moral assumptions (about what counts as a sufficiently modest revision to still count as "conservative", i.e. within the general spirit of GHD).

To specifically address your three examples:

  1. I'd guess that variance in cost (to save one life, or whatever) outweighs the variance in predictable ability to contribute. (iirc, Nick Beckstead's dissertation on longtermism made the point that all else equal, it would be better to save a life in a wealthy country for instrumental reasons,
... (read more)

So I'm not really seeing anything "bad" here.

I didn't say your proposal was "bad", I said it wasn't "conservative".

My point is just that, if GHD were to reorient around "reliable global capacity growth", it would look very different, to the point where I think your proposal is better described as "stop GHD work, and instead do reliable global capacity growth work", rather than the current framing of "let's reconceptualize the existing bucket of work".

9
Jason
1mo
This sounds plausible, but not obvious, to me. If your society has a sharply limited amount of resources to invest in the next generation, it isn't clear to me that maximizing the number of members in that generation would be the best "way to improve human capacity" in that society. One could argue that with somewhat fewer kids, the society could provide better nutrition, education, health care, and other inputs that are rather important to adult capacity and flourishing.

To be clear, I am a strong supporter of life-saving interventions and am not advocating for a move away from these. I just think they are harder to justify on improving-capacity grounds than on the grounds usually provided for them.

To clarify: I'm definitely not recommending "shunning" anyone. I agree it makes perfect sense to continue to refer to particular cause areas (e.g. "global health & development") by their descriptive names, and anyone may choose to support them for whatever reasons.

I'm specifically addressing the question of how Open Philanthropy (or other big funders) should think about "Worldview Diversification" for purposes of having separate funding "buckets" for different clusters of EA cause areas.

This task does require taking some sort of stand on what "worldviews" are sufficiently warranted to be worth funding, with real money that could have otherwise been used elsewhere.

Especially for a dominant funder like OP, I think there is great value in legibly communicating its honest beliefs. Based on what it has been funding in GH&D, at least historically, it places great value on saving lives as ~an end unto itself, not as a means of improving long-term human capacity. My understanding is that its usual evaluation metrics in GH&D have reflected that (and historic heavy dependence on GiveWell is clearly based on that). Coming up with some sort of alternative rationale that isn't the actual rationale doesn't feel honest, t... (read more)

Two main thoughts:

(1) If building human capacity has positive long-term ripple effects (e.g. on economic growth), these could be expected to swamp any temporary negative externalities.

(2) It's also not clear that increasing population increases meat-eating in equilibrium. Presumably at some point in our technological development, the harms of factory-farming will be alleviated (e.g. by the development of affordable clean meat). Adding more people to the current generation moves forward both meat eating and economic & technological development. It doesn... (read more)

8
Corentin Biteau
1mo
About point 1, you'd first need to prove that the expected value of the future is going to be positive, which does not sound guaranteed, especially if factory farming were to continue in the future. Regarding point 2, note that clean meat automatically winning in the long term is really not guaranteed either: I recommend reading the post "Optimistic longtermist would be terrible for animals".

Hi Nick, I'm reacting especially to the influential post, Open Phil Should Allocate Most Neartermist Funding to Animal Welfare, which seems to me to frame the issues in the ways I describe here as "orthodox". (But fair point that many supporters of GHD would reject that framing! I'm with you on that; I'm just suggesting that we need to do a better job of elucidating an alternative framing of the crucial questions.)

I currently think the experience of being human might be many orders of magnitude more valuable than any other animal (I reject hedonism)

Thanks,... (read more)

Hi! Yeah, as per footnote 3, I think the "reliable capacity growth" bucket could end up being more expansive than just GHD. (Which is to say: it seems that reasons of principle would favor comparing these various charities against each other, insofar as we're able.) But I don't have a view on precisely where to draw the line for what counts as "reliable" vs "speculative".

Whether causes like FF and SM belong in the "reliable capacity growth" or "pure suffering reduction" buckets depends on whether their beneficiaries can be expected to be more productive. I... (read more)

Thanks, this has been a helpful discussion.

I agree that most GHD donors don't consciously conceive of things as I've suggested. But I think the most coherent idealization of their preferences would lead in the direction I'm suggesting. It's even possible that they are subconsciously (and imperfectly) tracking something like my suggestion. It would be interesting to see whether most accept or reject the idea that fetal anesthesia or (say) elder care are "relevantly similar" to saving children.  Since metrics like QALYs (esp. for young people) and incom... (read more)

I think the comparison group should be based on principle, rather than pragmatic considerations of which bucket you'd rather divert funds from!

If it's true that GHD funds should be diverted to AW funds, then they should be diverted to AW funds, not to a very poor substitute for an AW cause.

I personally think it isn't obvious that GHD funds should be so diverted, precisely because of their greater potential for flow-through effects. But of course if that is the basis for GHD funding having a lower "bar" than AW funding, it cannot justify applying the low (G... (read more)

8
Ariel Simnegar
1mo
It might be that the strongest reason to prioritize GHD is because of flow-through effects, as you've suggested. But I don't think that those who prioritize GHD generally actually do so for that reason. They care about saving and improving people's lives in the near term, and the units they use (QALYs, income doublings, WELLBYs) and stories they tell (the drowning child) reflect that.

If GHD was trying to optimize for robustly increasing long-term human capacity, I think the GHD portfolio of interventions would look very different. It might include certain longtermist cause areas such as improving institutional decisionmaking. It would be surprising if the best interventions when optimizing for longterm flow-through effects were also the best when optimizing for immediate effects on individuals. If you're optimizing for flow-through effects, I agree that it's non-obvious whether GHD or AW is better, but I think you probably shouldn't be donating to either of those!

I think GHD donors choose GHD over AW simply because they care overwhelmingly more about humans than nonhuman animals. That's also why they usually ignore animal effects in their cost-effectiveness analyses, even though those effects would swamp the effects on humans for many GHD interventions. If they were trying to impartially help others in the near term, they would choose AW.

Here's a classification of GHD/AW which I think is more relevant to neartermists' revealed preferences: the best impartial neartermist interventions are AW; the best neartermist interventions ignoring nonhuman animals are GHD. Under that classification, fetal welfare would be GHD.

Thanks for this. I find it very strange that fetal anesthesia isn't standard here: unless there's some countervailing medical reason (risk to the mother?) or very significant expense involved, it seems like a clear moral improvement.

...see whether advocacy for fetal anesthesia is cost-effective enough to be competitive with leading global health interventions.

fwiw, I think a better comparison would be leading animal welfare interventions. Those seem more similarly targeted at raw suffering-reduction, whereas most "global health interventions" serve to incr... (read more)

I very much agree that it's a clear moral improvement unless there's some strong countervailing consideration. I would guess the greatest practical difficulty would be the intervention's adjacency to politically contentious issues, which might make it intractable.

fwiw, I think a better comparison would be leading animal welfare interventions

I agree that there are many similarities between this proposal and animal welfare interventions. However, since I think the best animal welfare interventions are orders of magnitude more effective than GHD, I'd far rath... (read more)

People often feel an obligation not to delay after they've received funding

Thanks for flagging this! As a purely forward-looking matter (not blaming anyone), I'd now like to explicitly push back against any such norm. For comparison: it's standard in academia for grant-funded projects to begin the following academic year after grant funding is received (so, often 6 months or more).

This delay is necessary because it's not feasible for universities to drop a planned class at the last minute, after students have already enrolled in it. But independent contrac... (read more)

I think that would be a big step forward - and it might not even be a change in policy, just something that needs to be said more explicitly.

I don't think it solves the entire problem, but at a certain point I just need to write my Why Living On Personal Grants Sucks post. 

This is a very unfortunate situation, but as a general piece of life advice for anyone reading this: expressions of interest are not commitments and should not be "interpreted" -- let alone acted upon! -- as such.

For example, within academia, a department might express interest in having Prof X join their department. But there's no guarantee it will work out. And if Prof. X prematurely quit their existing job, before having a new contract in hand, they would be taking a massive career risk!

(I'm not making any comment on the broader issues raised here; I sy... (read more)

-1
Igor Ivanov
2mo
This is more of a communication issue. Any misunderstanding on my part could have been resolved very quickly with proper communication from the Fund side. Also, while applying for the grant, I outlined that I expected to start the project shortly after the deadline for getting an answer.

On one hand, I agree with you that expressions of interest or even intent are different than commitments, and commitments are different from money in hand. I wish we had exact quotes to figure out what interpretations were justified, but it's certainly possible Caleb's communication was precise and Igor read too much into it. 

OTOH, there is an embedded problem here. If the grant were approved, it would be unethical to drop patients in favor of EAs. Igor's choices were to behave unethically, stop taking new clients before the grant was approved, or del... (read more)

Thanks! But to clarify, what I'm wondering is: why take unrealized probabilities to create ex post complaints at all? On an alternative conception, you have an ex post complaint if something bad actually happens to you, and not otherwise.

(I'm guessing it's because it would mean that we cannot know what ex post complaints people have until literally after the fact, whereas you're wanting a form of "ex post" contractualism that is still capable of being action-guiding -- is that right?)

4
emmajcurran
2mo
Your guess is precisely right. Ex-post evaluations have really developed as an alternative to ex-ante approaches to decision-making under risk. Waiting until the outcome realises does not help us make decisions. Thinking about how we can justify ourselves depending on the various outcomes we know could realise does help us. The name can definitely be misleading; I see how it can pull people into debates about retrospective claims and objective/subjective permissibility. Sorry, I edited this as I had another thought.

"50 people wouldn’t actually die if we don’t choose the AI research, instead, 100 million people would face a 0.00005% chance of death."

I'm a bit puzzled by talk of probabilities ex post.  Either 100 million people die or zero do. Shouldn't the ex post verdict instead just depend on which outcome actually results?

(I guess the "ex post" view here is really about antecedently predictable ex post outcomes, or something along those lines, but there seems something a bit unstable about this intermediate perspective.)

4
emmajcurran
2mo
"50 people wouldn’t actually die if we don’t choose the AI research, instead, 100 million people would face a 0.00005% chance of death." I think, perhaps, this line is infelicitous.  The point is that all 100 million people have an ex-post complaint, as there is a possible outcome in which all 100 million people die (if we don't intervene). However, these complaints need to be discounted by the improbability of their occurrence.  To see why we discount, imagine we could save someone from a horrid migraine, but doing so creates a 1/100 billion chance some random bystander would die. If we don't discount ex-post, then ex-post we are comparing a migraine to death - and we'd be counterintuitively advised not to alleviate the migraine. Once you discount the 100 million complaints, you end up with 100 million complaints of death, each discounted by  99.99995%.  I hope this clears up the confusion, and maybe helps with your concerns about instability? 

Alice, Charles and Mike cooperate in this charity. The participation of all is indispensable for the outcome. So they each have a counterfactual impact on 1 animal.

If each of them were to assume to have offset one previous animal product consumption of theirs through this project, that would be triple counting. For this reason counterfactual values of donations shouldn't be used in offsetting calculations.

I'm not sure about this. Suppose that C & M are both committed to offsetting their past consumption, and also that both will count the present co-ope... (read more)
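To make the quoted triple-counting worry precise (a formalization of mine, not from either comment): let V(S) be the number of animals spared when the set S of participants cooperates. If each of Alice, Charles, and Mike is indispensable, each one's counterfactual impact is V(all) - V(all minus them) = 1 - 0 = 1, so

\[
\sum_{i \in \{A,\,C,\,M\}} \Big[ V(\text{all}) - V(\text{all} \setminus \{i\}) \Big] \;=\; 3 \;\neq\; V(\text{all}) \;=\; 1.
\]

Counterfactual impacts of jointly necessary contributors can sum to more than the total effect, which is the sense in which using them for offsetting counts the one spared animal three times.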

Great piece. The reflections on how movements look from the outside vs from the inside seemed very insightful.

I also liked this point about applied moral philosophy: "there are many situations in which utilitarianism guides my thinking, especially as a philanthropist, but uncertainty still leaves me with many situations where it doesn’t have much to offer. In practice, I find that I live my day to day deferring to side constraints using something more like virtue ethics. Similarly, I abide by the law, rather than decide on a case by case basis whether breaking the law would lead to a better outcome. Utilitarianism offers an intellectual North Star, but deontological duties necessarily shape how we walk the path."

If you're worried that a real-life FMF would not be truly symmetrical to AMF in its effects, just mentally replace it with "Minus AMF" in my original comment. (Or imagine stipulating away any such differences.)  It doesn't affect the essential point.

Thanks for explaining!

It is a fair comparison. Andreas' relevant claim is that it isn't clear what the sign of the effect from AMF is. If AMF is negative, then its opposite--FMF--would presumably be positive.

3
Vasco Grilo
3mo
Thanks for following up! I am not sure about this. I think Andreas' claim is that AMF may be negative due to indirect effects. So, conditional on AMF being negative, one should expect the indirect effects to dominate the direct ones.

This means a good candidate for "Minus AMF", an organisation whose value is symmetric to that of AMF, would have both direct and indirect effects symmetric to those of AMF. The name For Malaria Foundation (FMF) suggested to me an organisation whose interventions have direct effects with similar magnitude, but opposite sign, to those of AMF. However, the negative indirect effects of intentionally increasing malaria deaths seem worse than the negation of the positive indirect effects of decreasing malaria deaths[1]. So, AMF being negative would imply FMF having positive direct effects, but in this case I would expect FMF's indirect effects to be sufficiently negative for it to be overall net negative.

[1] I am utilitarian, but recognise that saving a life, and abstaining from saving a life, can have different indirect consequences.

Thanks, yeah, I remember liking that paper. Though I'm inclined to think you should assign (precise) higher-order probabilities to the various "admissible probability functions", from which you can derive a kind of higher-order expected value verdict, which helpfully seems to avoid the problems afaict?
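A sketch of the proposal as I read it (my notation): given admissible credence functions P_1, ..., P_n and precise higher-order weights mu_1, ..., mu_n summing to 1, choose the act a maximizing

\[
V(a) \;=\; \sum_{i=1}^{n} \mu_i \, \mathbb{E}_{P_i}\big[U(a)\big] \;=\; \mathbb{E}_{P^*}\big[U(a)\big], \quad \text{where } P^* = \sum_{i=1}^{n} \mu_i P_i.
\]

Since expectation is linear in the probability function, this collapses into ordinary expected value under the single mixture credence P*, which is presumably why it avoids the decision-theoretic problems (like paralysis) that imprecise credences generate.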

General lesson: if we don't have any good way of dealing with imprecise credences, we probably shouldn't regard them as rationally mandatory. Especially since the case for thinking that we must have imprecise credences (i.e., that any kind of precision is necessarily irrational) seems kind of weak.

2
MichaelStJules
2mo
I worry that this is motivated reasoning. Should what we can justifiably believe will happen as a consequence of our actions depend on whether it results in satisfactory moral consequences (e.g. avoiding paralysis)?

I'm a bit surprised that this is getting downvoted, rather than just disagree-voted. It's fine to reach a different verdict and all, but y'all really think the methodological point I'm making here shouldn't even be said?  Weird.

2
MichaelStJules
2mo
I didn't downvote, but if I had, it would be because I don't think it's surely false "that a rational beneficent agent might just as well support the For Malaria Foundation as the Against Malaria Foundation", and that claim seems overconfident. (Or, rather, AMF could be no better than burning money or the Make a Wish Foundation, even if all are better than FMF, in case there is asymmetry between AMF and FMF.)

I specifically worry that AMF could be bad if and because it hurts farmed animals more than it helps people, considering also that descendants of beneficiaries will likely consume more factory-farmed animal products, with animal product consumption and intensification increasing with economic development. Wild animal (invertebrate) effects could again go either way.

If you're an expectational total utilitarian or otherwise very risk-neutral wrt aggregate welfare, then you may as well ignore the near-term benefits and harms and focus on the indirect effects on the far future, e.g. through how it affects the EA community and x-risks. (Probably FMF would have very bad community effects, worse than AMF's are good relative to more direct near-term effects, unless FMF quietly acts to convince people to stop donating to AMF.)

And I say this as a recurring small donor to malaria charities, including AMF. I think AMF can still be a worthwhile part of a portfolio of interventions, even if it turns out not to look robustly good on its own (it could be that few things do). See my post Hedging against deep and moral uncertainty for illustration.
3
Vasco Grilo
3mo
Hi Richard, is this a fair comparison? For readers' context, Andreas compares the Against Malaria Foundation (AMF) with the Make-A-Wish Foundation. I agree increasing malaria is surely worse than decreasing malaria, but I would not say the Make-A-Wish Foundation is surely worse than AMF. Given this distinction, I (lightly) downvoted your comment.