...if you think welfare is net positive either way, yes. This seems like a tough case to make. I see how one could opt for agnosticism rather than believing it's net negative, but I doubt there is anything remotely close to a good case that wild animal welfare (WAW) currently is net positive (rather than just highly uncertain).
We take the meat-eater problem[3] seriously, but we don't at all think that the conclusion is to avoid donating in the Global Health and Development (GHD) space: the effects might actually even out if e.g. further development reduces the total amount of natural space, potentially counterbalancing increased meat consumption by reducing the number of suffering wild animals.
Is the positive effect on wild animal welfare really your crux for finding GHD net positive? If so, that means you think WAW is more pressing than improving human health. And it feels weird to advocate for improving human health despite the meat-eater problem because of wild animal suffering. If you really think that, it seems like you should just advocate for reducing wild animal suffering instead (unless you think GHD happens to be the best way to do that).
I think if we only do spatiotemporal bracketing, it tells us to ignore the far future and causally inaccessible spacetime locations, because each such location is made neither determinately better off in expectation nor determinately worse off in expectation.
Oh, that's helpful, thanks! This reasoning also works in my sniper case, actually. I am clueful about the "where Emily is right after she potentially shoots" ST location, so I can't bracket out the payoff attached to her shoulder pain: this payoff is contained within that small ST region. However, the payoffs associated with where the bullet ends up aren't neatly contained in small ST regions in the same way! I want the terrorist dead because he's going to keep terrorizing parts of the world otherwise. I want the kid alive to prevent the negative consequences (spread across various ST regions) associated with an innocent kid's death. Because of this, I arguably can't pin down any specific ST location, other than "where Emily is right after she potentially shoots", that is made determinately better or worse off by Emily taking the shot. Hence, ST bracketing would allow C but not A or B.
To the extent that I'm still skeptical of C being warranted, it is because:
And I guess all this also applies to A' vs B' vs C' and whether to bracket out near-term effects. Thanks for helping me identify these cruxes!
I'll take some more time to think about your point about bracketing out possibilities and AGI by date X.
And maybe that's one way to interpret Anthony's first objection to bracketing? I can't actually pin down a specific ST location (or whatever value-bearer) where donating to AMF is determinately bad, but I still know for sure that such locations exist! As I think you alluded to elsewhere while discussing ST bracketing and changes to agriculture/land use, what stops us from acting as if we could pin down such locations?
If you weren't doing [B] with moral weights, though, you would presumably have to worry about things other than effects on soil animals. So, ultimately, [B] remains an important crux for you.
(You could still say you'd prioritize reducing uncertainty about moral weights if you thought there was too much uncertainty to justify doing [B], but the results from such research might never be precise enough to be action-guiding. You might have to endorse [B] despite the ambiguity, or one of the three other options.)
Extinction forecloses all option value — including the option for future agents to course-correct if we've made mistakes. Survival preserves the ability to solve new problems. This isn't a claim about net welfare across cosmic history; it's a claim about preserving agency and problem-solving capacity.
I think it still implicitly is a claim about net welfare across the cosmos. You have to believe that preserving option value will actually, eventually, lead to higher net welfare across the cosmos[1], a belief which I argue relies on judgment calls. (And the option-value argument for x-risk reduction was already somewhat infamous as a bad one in the GPR literature, including among x-risk reducers.)
You might say individuals can act on non-longtermist grounds while remaining longtermist-clueless. But this concedes that something must break the paralysis, and I'd argue that "preserve option value / problem-solving capacity" is a principled way to do so that doesn't require the full judgment-call apparatus you describe.
Nice, that's the crux! Yeah, so I tentatively find something like bracketing out long-term effects more principled (as a paralysis breaker) than option-value preservation. I have no clue whether reducing the agony of the many animals we can robustly help in the near term is overall good once we account for indirect long-term effects, but I find doing it anyway far more justifiable than "reducing x-risks and letting future people decide what they should do". I would prefer the latter if I bought the premises of the option-value argument for x-risk reduction, but then I wouldn't be clueless and wouldn't have a paralysis problem to begin with.
I don't see any good reason to believe enabling our descendants is impartially better than doing the exact opposite (both positions rely on judgment calls that seem arbitrary to me). However, I see good (non-longtermist) reasons to reduce near-term animal suffering rather than increase it.
Unless you intrinsically value the existence of Earth-originated agents or something, and in a way where you're happy to ignore the welfarist considerations that may leave you clueless on their own. In this case, you obviously think reducing P(extinction) is net positive. But then,
Nice, thanks! (I gave examples of charities/work where you're kinda agnostic because of a crux other than AI timelines, but this was just to illustrate.)
Assuming that saving human lives increases welfare, I agree doing it earlier increases welfare more if TAI happens earlier.
I had no doubt you thought this! :) I'm just curious whether you see reasons for someone to optimize assuming long AI timelines, despite low resilience in their high credence in long AI timelines.
(Hey Vasco!) How resilient is your relatively high credence that AI timelines are long?
And would you agree that the less resilient it is, the more you should favor interventions that are also good under short AI timelines? (E.g., the work of GiveWell's top charities over making people consume fewer unhealthy products, since the latter pays off far later, as you and Michael discuss in this thread.)
Nice, thanks for engaging! :)
It sounds like this is actually the core crux of your view, then. If so, it might be worth making that explicit in the post. As it stands, the discussion of WAW could give the impression that it plays a more decisive role in your evaluation than it ultimately does, whereas your judgment seems to rest mainly on the effects on human welfare, given what you say here.
I also think this position of yours (now that it's explicit) invites further scrutiny. Given how many more animals than humans are plausibly affected by GHD, concluding that animal welfare is not the most important factor appears to rely on specific assumptions about moral weights that privilege humans to an extent that would be very controversial if made explicit. It could be helpful to spell those assumptions out, or at least acknowledge that they're doing significant work here.