Thanks for writing this! I find it really striking how academic critics of longtermism (both Thorstad and Schwitzgebel spring to mind here) don't adequately consider model uncertainty. It's something I also tried to flag in my old post on 'X-risk agnosticism'.
Tarsney's epistemic challenge paper is so much better, precisely because he gets into higher-order uncertainty (over possible values for the crucial parameter "r", which includes the risk of extinction that persists into the far future despite our best efforts).
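To gesture at why higher-order uncertainty over r matters so much (a schematic gloss of my own, not Tarsney's exact model): if the probability that an intervention's effects persist to time t decays exponentially at rate r, then

```latex
% Schematic expected value of a persisting benefit v, given a
% constant rate r of extinction/nullification (illustrative only):
\mathbb{E}[V] \approx \int_0^{\infty} v\, e^{-rt}\, dt = \frac{v}{r}
```

Since the expectation scales as 1/r, even a small credence in very low values of r can dominate the overall verdict -- which is why our uncertainty over r matters far more than any first-order point estimate of it.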
In general (whether one is a realist or an anti-realist), there is "no clear link" between axiological certainty and oppressive behavior, precisely because there are further practical norms (e.g. respect for rights, whether instrumentally or non-instrumentally grounded) that mediate between evaluation and action.
You suggest that it "seems only intuitive/natural" that an anti-realist should avoid being "too politically certain that what they believe is what everyone ought to believe." I'm glad to hear that you're naturally drawn to liberal tolerance. But many human bei...
I agree with what you say in the last paragraph, including the highlighting of autonomy/placing value on it (whether in a realist or anti-realist way).
I'm not convinced by what you said about the effects of belief in realism vs anti-realism.
If you hold fixed people's first-order views, not just about axiology but also about practical norms, then their metaethics makes no further difference.
Sure, but that feels like it's begging the question.
Let's grant that the people we're comparing already have liberal intuitions. After all, this discussion started in a ...
We just wrote a textbook on the topic together (the print edition of utilitarianism.net)! In the preface, we briefly relate our different attitudes here: basically, I'm much more confident in the consequentialism part, but sympathetic to various departures from utilitarian (and esp. hedonistic) value theory, whereas Will gives more weight to non-consequentialist alternatives (more for reasons of peer disagreement than any intrinsic credibility, it seems), but is more confident that classical hedonistic utilitarianism is the best form of consequentialism.
I agree it'd be fun for us to explore the disagreement further sometime!
This is really sad news. I hope everyone working there has alternative employment opportunities (far from a given in academia!).
I was shocked to hear that the philosophy department imposed a freeze on fundraising in 2020. That sounds extremely unusual, and I hope we eventually learn more about the reasons behind this extraordinary institutional hostility. (Did the university shoot itself in the financial foot for reasons of "academic politics"?)
A minor note on the forward-looking advice: "short-term renewable contracts" can have their place, especially for...
On your second point, FHI had at least ~£10m sitting in the bank in 2020 (see below, from the report). So the fundraising freeze, while unusual, wasn't terminal. A rephrasing of your question is "What administrative and organisational problems at FHI could possibly have prompted the Faculty to take the unusual step of a hiring and fundraising freeze in 2020, and why could they not be resolved over the next two to three years?"
..."Open Philanthropy became FHI’s most important funder, making two major grants: £1.6m in 2017, and £13.3m in 2018. Indeed, the donation
I don't necessarily disagree with any of that, but the fact that you asserted it implicates that you think it has some kind of practical relevance, which is where I might want to disagree.
I think it's fundamentally dishonest (a kind of naive instrumentalism in its own right) to try to discourage people from having true beliefs because of faint fears that these beliefs might correlate with bad behavior.
I also think it's bad for people to engage in "moral profiling" (cf. racial profiling), spreading suspicion about utilitarians in general based on very speculative...
fwiw, I wouldn't generally expect "high confidence in utilitarianism" per se to be any cause for concern. (I have high confidence in something close to utilitarianism -- in particular, I have near-zero credence in deontology -- but I can't imagine that anyone who really knows how I think about ethics would find this the least bit practically concerning.)
Note that Will does say a bit in the interview about why he doesn't view SBF's utilitarian beliefs as a major explanatory factor here (the fraud was so obviously negative EV, and the big lesson he took from the Soltes book on white-collar crime was that such crime tends to be more the result of negligence and self-deception than deliberate, explicit planning to that end).
I disagree with Will a bit here, and think that SBF's utilitarian beliefs probably did contribute significantly to what happened, but perhaps somewhat indirectly, by 1) giving him large...
Yes, I agree it seems important to have marketers and PR people to craft persuasive messaging for mass audiences. That's not what I'm trying to do here, nor do I think it would make any sense for me to shift into PR -- it wouldn't be a good personal fit. My target audience is academics and "academic-adjacent" audiences, and as a philosopher my goal is to make clear what's philosophically justified, not to manipulate anyone through non-rational means. I think this is an important role, for reasons explained in some of the footnotes to my posts there. But I also agree it's not the only important role, and it would plausibly be good for EA to additionally have more mass-market appeal. It takes all sorts.
fyi, I weakly downvoted this because (i) you seem like you're trying to pick a fight and I don't think it's productive; there are familiar social ratcheting effects that incentivize exaggerated rhetoric on race and gender online, and I don't think we should encourage that. (There was nothing in my comment that invited this response.) (ii) I think you're misrepresenting Trace. (iii) The "expand your moral circle" comment implies, falsely, that the only reason one could have for tolerating someone with bad views is that you don't care about those harmed by t...
I'd just like to clarify that my blogroll should not be taken as a list of "worthy figure[s] who [are] friend[s] of EA"! They're just blogs I find often interesting and worth reading. No broader moral endorsement implied!
fwiw, I found TracingWoodgrains' thoughts here fairly compelling.
ETA, specifically:
...I have little patience with polite society, its inconsistencies in which views are and are not acceptable, and its games of tug-of-war with the Overton Window. My own standards are strict and idiosyncratic. If I held everyone to them, I'd live in a lon
Thanks, that's very helpful! I do want my points to be forceful, but I take your point that overdoing it can be counterproductive. I've now slightly moderated that sentence to instead read, "Wenar is here promoting a general approach to practical reasoning that is systematically biased (and predictably harmful as a result): a plain force for ill in the world."
Right, that's why I also take care to emphasize that responsible criticism is (pretty much) always possible, and describe in some detail how one can safely criticize "Good Things" without being susceptible to charges of moral misdirection.
Thanks, that's helpful feedback. I guess I was too focused on making it concise, rather than easily understood.
This is an important point. People often confuse harm/benefit asymmetries with doing/allowing asymmetries. Wenar's criticism seems to rest on the latter, not the former. Note that if all indirect harms are counted within the constraint against causing harm, almost all actions would be prohibited. (And on any plausible restriction, e.g. to "direct harms", it would no longer be true that charities do harm. Wenar's concerns involve very indirect effects. I think it's very unlikely that there's any consistent and plausible way to count these as having dispropo...
I found it a bit hard to discern what constructive points he was trying to make amidst all the snark. But the following seemed like a key passage in the overall argument:
...Making responsible choices, I came to realize, means accepting well-known risks of harm. Which absolutely does not mean that “aid doesn’t work.” There are many good people in aid working hard on the ground, often making tough calls as they weigh benefits and costs. Giving money to aid can be admirable too—doctors, after all, still prescribe drugs with known side effects. Yet what no one in
I was disappointed that GiveDirectly wasn't mentioned, given that it seems closer to what he would favour. The closing anecdote about the surfer-philosopher donating money to Bali seems like a proto-GiveDirectly approach, but presumably a lot less efficient without the infrastructure to do it at scale.
There was meant to be an "all else equal" clause in there (as usually goes without saying in these sorts of thought experiments) -- otherwise, as you say, the verdict wouldn't necessarily indicate underlying non-utilitarian concerns at all.
Perhaps easiest to imagine if you modify the thought experiment so that your psychology (memories, "moral muscles", etc.) will be "reset" after making the decision. I'm talking about those who would insist that you still ought to save the one over the two even then -- no matter how the purely utilitarian considerations play out.
It's fine to offer recommendations within suboptimal cause areas for ineffective donors. But I'm talking about worldview diversification for the purpose of allocating one's own (or OpenPhil's own) resources genuinely wisely, given one's (or: OP's) warranted uncertainty.
It's always better for a view to be justified than to be unjustified? (Makes it more likely to be true, more likely to be what you would accept on further / idealized reflection, etc.)
The vast majority of worldviews do not warrant our assent. Worldview diversification is a way of dealing with the sense that there is more than one that is plausibly well-justified, and warrants our taking it "into account" in our prioritization decisions. But there should not be any temptation to extend this to every possible worldview. (At the limit: some are outright bad or evil. More moderately: others simply have very little going for them, and would not be worth the opportunity costs.)
I was replying to your sentence, "I'd guess most proponents of GHD would find (1) and (2) particularly bad."
I don't really know enough about the empirics to add much beyond the possible "implications" flagged at the end of the post. Maybe the clearest implication is just the need for further research into flow-through effects, to better identify which interventions are most promising by the lights of reliable global capacity growth (since that seems a question that has been unduly neglected to date).
Thanks for flagging the "sandboxing" argument against AW swamping of GHD. I guess a lot depends there on how uncertain the case for AW effectiveness is. (I didn't ha...
Thanks, I agree that those are possible arguments for the opposing view. I disagree that anyone needs to "prove" their position before believing it. It's quite possible to have justified positive credence in a proposition even if it cannot be decisively proven (as most, indeed, cannot). Every possible position here involves highly contestable judgment calls. Certainly nothing that you've linked to proves that human life is guaranteed to be net-negative, but you're still entitled to (tentatively) hold to such pessimism if your best judgment supports that conclusion. Likewise for my optimism.
I'm curious why you think Singer would agree that "the imperative to save the child's life wasn't in danger of being swamped by the welfare impact on a very large number of aquatic animals." The original thought-experiment didn't introduce the possibility of any such trade-off. But if you were to introduce this, Singer is clearly committed to thinking that the reason to save the child (however strong it is in isolation) could be outweighed.
Maybe I'm misunderstanding what you have in mind, but I'm not really seeing any principled basis for treating "saving ...
Yeah, I don't think most people's motivating reasons correspond to anything very coherent. E.g. most will say it's wrong to let the child before your eyes drown even if saving them prevents you from donating enough to save two other children from drowning. They'd say the imperative to save one child's life isn't in danger of being swamped by the welfare impact on other children, even. If anyone can make a coherent view out of that, I'll be interested to see the results. But I'm skeptical; so here I restricted myself to views that I think are genuinely well-justified. (Others may, of course, judge matters differently!)
Thanks! I should clarify that I'm trying to offer a principled account that can yield certain verdicts that happen to align with commonsense. But I'm absolutely not trying to capture common-sense reasoning or ideas (I think those tend to be hopelessly incoherent).
So yes, my framework assumes that long-run effects matter. (I don't think there's any reasonable basis for preferring GHD over AW if you limit yourself to nearterm effects.) But it allows that there are epistemic challenges to narrowly targeted attempts to improve the future (i.e. the traditional ...
I don't have anything as precise as a definition, but something in the general vicinity of [direct effects on individual welfare + indirect effects on total productivity, which can be expected to improve future welfare].
It's not a priori that GiveWell is justified by any reasonable EA framework. It is, and should be, open to dispute. So I see the debate as a feature, not a bug, of my framework. (I personally do think it is justified, and try to offer a rough indication of why, here. But I could be wrong on the empirics. If so, I would accept the implicatio...
Suppose someone were to convince you that the interventions GiveWell pursues are not the best way to improve "global capacity", and that a better way would be to pursue more controversial/speculative causes like population growth or long-run economic growth or whatever. I just don't see EA reorienting GHW-worldview spending toward controversial causes like this, ever. The controversial stuff will always have to compete with animal welfare and AI x-risk. If your worldview categorization does not always make the GHW worldview center on non-controversial stuf...
I think it may be important to draw a theory/practice distinction here. It seems completely undeniable in theory (or in terms of what is fundamentally preferable) that instrumental value matters, and so we should prefer that more productive lives be saved (otherwise you are implicitly saying to those who would be helped downstream that they don't matter). But we may not trust real-life agents to exercise good judgment here, or we may worry that the attempts would reinforce harmful biases, and so the mere attempt to optimize here could be expected to do mor...
I'm suggesting that they should change their honest beliefs. They're at liberty to burn their money too, if they want. But the rest of us are free to try to convince them that they could do better. This is my attempt.
One could argue that with somewhat fewer kids, the society could provide better nutrition, education, health care, and other inputs that are rather important to adult capacity and flourishing.
I think that's an argument worth having. After all, if the claim were true then I think that really would justify shifting attention away from infant mortality reduction and towards these "other inputs" for promoting human flourishing. (But I'm skeptical that the claim is true, at least on currently relevant margins in most places.)
Oops, definitely didn't mean any derogation -- I'm a big fan of moonshots, er, speculative high-uncertainty (but high EV) opportunities! [Update: I've renamed them to 'High-impact long-shots'.]
I disagree on "capacity growth" through: that one actually has descriptive content, which "common-sense global interventions" lacks. (They are interventions to achieve what, exactly?)
I guess I have (i) some different empirical assumptions, and (ii) some different moral assumptions (about what counts as a sufficiently modest revision to still count as "conservative", i.e. within the general spirit of GHD).
To specifically address your three examples:
So I'm not really seeing anything "bad" here.
I didn't say your proposal was "bad", I said it wasn't "conservative".
My point is just that, if GHD were to reorient around "reliable global capacity growth", it would look very different, to the point where I think your proposal is better described as "stop GHD work, and instead do reliable global capacity growth work", rather than the current framing of "let's reconceptualize the existing bucket of work".
To clarify: I'm definitely not recommending "shunning" anyone. I agree it makes perfect sense to continue to refer to particular cause areas (e.g. "global health & development") by their descriptive names, and anyone may choose to support them for whatever reasons.
I'm specifically addressing the question of how Open Philanthropy (or other big funders) should think about "Worldview Diversification" for purposes of having separate funding "buckets" for different clusters of EA cause areas.
This task does require taking some sort of stand on what "worldviews" are sufficiently warranted to be worth funding, with real money that could have otherwise been used elsewhere.
Especially for a dominant funder like OP, I think there is great value in legibly communicating its honest beliefs. Based on what it has been funding in GH&D, at least historically, it places great value on saving lives as ~an end unto itself, not as a means of improving long-term human capacity. My understanding is that its usual evaluation metrics in GH&D have reflected that (and historic heavy dependence on GiveWell is clearly based on that). Coming up with some sort of alternative rationale that isn't the actual rationale doesn't feel honest, t...
Two main thoughts:
(1) If building human capacity has positive long-term ripple effects (e.g. on economic growth), these could be expected to swamp any temporary negative externalities.
(2) It's also not clear that increasing population increases meat-eating in equilibrium. Presumably at some point in our technological development, the harms of factory-farming will be alleviated (e.g. by the development of affordable clean meat). Adding more people to the current generation moves forward both meat eating and economic & technological development. It doesn...
Hi Nick, I'm reacting especially to the influential post, Open Phil Should Allocate Most Neartermist Funding to Animal Welfare, which seems to me to frame the issues in the ways I describe here as "orthodox". (But fair point that many supporters of GHD would reject that framing! I'm with you on that; I'm just suggesting that we need to do a better job of elucidating an alternative framing of the crucial questions.)
I currently think the experience of being human might be many orders of magnitude more valuable than any other animal (I reject hedonism)
Thanks,...
Hi! Yeah, as per footnote 3, I think the "reliable capacity growth" bucket could end up being more expansive than just GHD. (Which is to say: it seems that reasons of principle would favor comparing these various charities against each other, insofar as we're able.) But I don't have a view on precisely where to draw the line for what counts as "reliable" vs "speculative".
Whether causes like FF and SM belong in the "reliable capacity growth" or "pure suffering reduction" buckets depends on whether their beneficiaries can be expected to be more productive. I...
Thanks, this has been a helpful discussion.
I agree that most GHD donors don't consciously conceive of things as I've suggested. But I think the most coherent idealization of their preferences would lead in the direction I'm suggesting. It's even possible that they are subconsciously (and imperfectly) tracking something like my suggestion. It would be interesting to see whether most accept or reject the idea that fetal anesthesia or (say) elder care are "relevantly similar" to saving children. Since metrics like QALYs (esp. for young people) and incom...
I think the comparison group should be based on principle, rather than pragmatic considerations of which bucket you'd rather divert funds from!
If it's true that GHD funds should be diverted to AW funds, then they should be diverted to AW funds, not to a very poor substitute for an AW cause.
I personally think it isn't obvious that GHD funds should be so diverted, precisely because of their greater potential for flow-through effects. But of course if that is the basis for GHD funding having a lower "bar" than AW funding, it cannot justify applying the low (G...
Thanks for this. I find it very strange that fetal anesthesia isn't standard here: unless there's some countervailing medical reason (risk to the mother?) or very significant expense involved, it seems like a clear moral improvement.
...see whether advocacy for fetal anesthesia is cost-effective enough to be competitive with leading global health interventions.
fwiw, I think a better comparison would be leading animal welfare interventions. Those seem more similarly targeted at raw suffering-reduction, whereas most "global health interventions" serve to incr...
I very much agree that it's a clear moral improvement unless there's some strong countervailing consideration. I would guess the greatest practical difficulty would be the intervention's adjacency to politically contentious issues, which might make it intractable.
fwiw, I think a better comparison would be leading animal welfare interventions
I agree that there are many similarities between this proposal and animal welfare interventions. However, since I think the best animal welfare interventions are orders of magnitude more effective than GHD, I'd far rath...
People often feel an obligation not to delay after they've received funding
Thanks for flagging this! As a purely forward-looking matter (not blaming anyone), I'd now like to explicitly push back against any such norm. For comparison: it's standard in academia for grant-funded projects to begin the following academic year after grant funding is received (so, often 6 months or more).
This delay is necessary because it's not feasible for universities to drop a planned class at the last minute, after students have already enrolled in it. But independent contrac...
I think that would be a big step forward- and it might not even be a change in policy, just something that needs to be said more explicitly.
I don't think it solves the entire problem, but at a certain point I just need to write my Why Living On Personal Grants Sucks post.
This is a very unfortunate situation, but as a general piece of life advice for anyone reading this: expressions of interest are not commitments and should not be "interpreted" -- let alone acted upon! -- as such.
For example, within academia, a department might express interest in having Prof X join their department. But there's no guarantee it will work out. And if Prof. X prematurely quit their existing job, before having a new contract in hand, they would be taking a massive career risk!
(I'm not making any comment on the broader issues raised here; I sy...
On one hand, I agree with you that expressions of interest or even intent are different than commitments, and commitments are different from money in hand. I wish we had exact quotes to figure out what interpretations were justified, but it's certainly possible Caleb's communication was precise and Igor read too much into it.
OTOH, there is an embedded problem here. If the grant were approved, it would be unethical to drop patients in favor of EAs. Igor's choices were to behave unethically, stop taking new clients before the grant was approved, or del...
Thanks! But to clarify, what I'm wondering is: why take unrealized probabilities to create ex post complaints at all? On an alternative conception, you have an ex post complaint if something bad actually happens to you, and not otherwise.
(I'm guessing it's because it would mean that we cannot know what ex post complaints people have until literally after the fact, whereas you're wanting a form of "ex post" contractualism that is still capable of being action-guiding -- is that right?)
"50 people wouldn’t actually die if we don’t choose the AI research, instead, 100 million people would face a 0.00005% chance of death."
I'm a bit puzzled by talk of probabilities ex post. Either 100 million people die or zero do. Shouldn't the ex post verdict instead just depend on which outcome actually results?
(I guess the "ex post" view here is really about antecedently predictable ex post outcomes, or something along those lines, but there seems something a bit unstable about this intermediate perspective.)
Alice, Charles and Mike cooperate in this charity. The participation of all is indispensable for the outcome. So they each have a counterfactual impact on 1 animal.
If each of them were to assume to have offset one previous animal product consumption of theirs through this project, that would be triple counting. For this reason counterfactual values of donations shouldn't be used in offsetting calculations.
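To make the triple-counting worry explicit (illustrative arithmetic only):

```latex
% Each donor is indispensable, so each has counterfactual impact 1:
\text{impact}(Alice) = \text{impact}(Charles) = \text{impact}(Mike) = 1
% But summing individual counterfactual impacts overstates the total:
1 + 1 + 1 = 3 \neq 1 \text{ animal actually helped}
% If each donor claims a full offset, one animal is counted thrice.
```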
I'm not sure about this. Suppose that C & M are both committed to offsetting their past consumption, and also that both will count the present co-ope...
Great piece. The reflections on how movements look from the outside vs from the inside seemed very insightful.
I also liked this point about applied moral philosophy: "there are many situations in which utilitarianism guides my thinking, especially as a philanthropist, but uncertainty still leaves me with many situations where it doesn’t have much to offer. In practice, I find that I live my day to day deferring to side constraints using something more like virtue ethics. Similarly, I abide by the law, rather than decide on a case by case basis whether breaking the law would lead to a better outcome. Utilitarianism offers an intellectual North Star, but deontological duties necessarily shape how we walk the path."
If you're worried that a real-life FMF would not be truly symmetrical to AMF in its effects, just mentally replace it with "Minus AMF" in my original comment. (Or imagine stipulating away any such differences.) It doesn't affect the essential point.
Thanks for explaining!
It is a fair comparison. Andreas' relevant claim is that it isn't clear what the sign of the effect from AMF is. If AMF is negative, then its opposite--FMF--would presumably be positive.
Thanks, yeah, I remember liking that paper. Though I'm inclined to think you should assign (precise) higher-order probabilities to the various "admissible probability functions", from which you can derive a kind of higher-order expected value verdict, which helpfully seems to avoid the problems, afaict?
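To illustrate the proposal (a minimal sketch; the weights w_i are my own notation, not from the paper): given admissible credence functions P_1, ..., P_n, assign precise higher-order weights and take

```latex
% Higher-order expected value of act A, mixing the admissible
% credence functions P_i with precise weights w_i (summing to 1):
\mathrm{EV}(A) = \sum_{i=1}^{n} w_i \, \mathbb{E}_{P_i}[U(A)]
```

Ranking acts by EV(A) then recovers an ordinary expectational verdict despite the first-order imprecision.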
General lesson: if we don't have any good way of dealing with imprecise credences, we probably shouldn't regard them as rationally mandatory. Especially since the case for thinking that we must have imprecise credences (i.e., that any kind of precision is necessarily irrational) seems kind of weak.
I'm a bit surprised that this is getting downvoted, rather than just disagree-voted. It's fine to reach a different verdict and all, but y'all really think the methodological point I'm making here shouldn't even be said? Weird.