| This is a Draft Amnesty Week draft. It's short and drafty. My initial draft was partly rewritten with Claude, then manually re-edited again by me. Less than 10% of the sentences are "raw" Claude outputs from my draft. |
Motivations for this post: I've heard multiple animal welfare donors and advocates (including me) say that they have prioritized interventions that pay off quicker now that they believe transformative AI will come quite soon. I think the picture might be somewhat more complicated than the simple case makes it out to be, though, as always, I'm not sure what the bottom line is.

Assumptions: I'm not questioning whether it's possible that AI will transform the world (or whether one should care about animal welfare). If you don't share these premises, the post will be a bit boring. Another assumption I realize I've made is that it isn't 90%+ likely that TAI will change all the elements of the world that are relevant to animal welfare in the next 20 years. This is actually a crux, and if you disagree, then the points in this post are mostly unjustified, save for the last subpart.
How much should uncertainty about the next decade shift animal welfare strategy toward short-term interventions? I think it is quite likely that the future beyond ten years will be hard to affect. This is partly because of AI, but also because of other potential disruptions unrelated to AI, such as a major economic crisis or a great power conflict.
Does that mean we should go all in on theories that, all else equal, pay off in less than five years? I think there is a case for that, which has been made before.
Expected utility maximization doesn't obviously support this
We could be overestimating the chance that the world changes significantly, but I don't think many people put the probability that nothing changes very significantly over ten years at less than 10%. At that floor, a long-term intervention whose payoff, conditional on the world persisting, is more than ten times that of a short-term one should still be preferred in expectation.
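To make the arithmetic concrete, here is a minimal sketch. All numbers are illustrative assumptions, not estimates: a 10% floor on the world persisting, and a hypothetical long-term payoff twelve times the short-term one.

```python
# Illustrative numbers only (assumptions, not estimates).
p_persist = 0.10          # assumed lower bound on P(no transformative change in 10 years)
short_term_ev = 1.0       # EV of the short-term intervention, normalized to 1
long_term_payoff = 12.0   # hypothetical payoff, conditional on the world persisting

long_term_ev = p_persist * long_term_payoff   # 0.1 * 12 = 1.2

# At a 10% persistence floor, the breakeven conditional payoff is 10x the short-term EV.
breakeven_multiplier = short_term_ev / p_persist   # 10.0
```

The point is just that even a modest persistence probability leaves room for long-horizon bets, provided their conditional payoff clears the breakeven multiplier.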
TAI may not be a crux
However, we may still want to go short-term. Interventions like shrimp stunning, or ensuring more pasture land on the margin, don't look massively worse than interventions that pay off in ten or twenty years, even setting aside TAI.
This is because whenever a theory of change relies on a 15-year horizon or longer, as Wild Animal Initiative's does, it faces significant conjunctive risk: every independent condition must hold year after year. Funding flows, political stability, democratic institutions, continued human influence over natural environments, policymaking affected by environmental considerations... Each of these elements is much likelier than not to survive any given year, but the small probability that one or two of these levers break compounds over time, raising the chance that by the time these interventions can pay off, the main levers are broken. In my view, TAI simply increases the discount rate we should apply. Whether this should dominate our consequentialist considerations probably depends on our rough EV estimates.
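A quick sketch of how this compounding works, with made-up numbers (the per-year survival probability and the number of levers are both assumptions for illustration):

```python
# Illustrative only: how "much likelier than not each year" compounds over a long horizon.
p_lever_per_year = 0.97   # assumed chance that any one lever survives a given year
n_levers = 4              # e.g. funding, political stability, institutions, human influence
years = 15

# Probability that every lever holds every year (treating them as independent).
p_all_levers_hold = p_lever_per_year ** (n_levers * years)
# roughly 0.16: a 97%-per-year-per-lever world still breaks most of the time over 15 years
```

The independence assumption is crude (some levers are correlated), but it illustrates why long conjunctions are fragile even when each link looks safe.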
Neglectedness?
Most funders are not currently optimizing hard for the short term, so marginal resources directed there may be particularly valuable. Funders who believe TAI is near should be especially inclined toward near-term interventions.
Is it actually about difference-making risk aversion?
Beyond expected value, difference-making risk aversion (DMRA) — the aversion to supporting interventions that are unlikely to make a positive difference — plays a real role. Very few people think non-transformed-world scenarios are so improbable that medium-term interventions look inevitably bad in EV terms. But they may balk at supporting interventions with, say, a 5% chance of paying off as intended, even if the math is favorable. Why is that?
A pro-nearterm heuristic?
Beyond the fact that uncertainty compounds over time, another heuristic may vindicate a near-term focus: responsiveness to changes in the world. Say, before taking TAI into account, we're picking research on wild animal welfare over shrimp stunning (with large error bars on the expected value of both). But then, we realize that TAI timelines are likely to give a much sharper discount rate to the wild animal welfare intervention. It's reasonable to think that if the discount rate is sharp enough, we should then prefer shrimp stunning over it, even if it's hard to put a precise number on it.
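A toy version of this flip, with every number an assumption chosen for illustration (the payoffs, timings, and both discount rates are hypothetical):

```python
import math

def present_value(payoff, rate, years):
    # Continuous exponential discounting: PV = payoff * e^(-rate * years).
    return payoff * math.exp(-rate * years)

# Hypothetical interventions: (payoff, year realized).
waw_payoff, waw_year = 10.0, 15      # wild animal welfare research, long horizon
shrimp_payoff, shrimp_year = 1.5, 1  # shrimp stunning, near-term

mild_rate, sharp_rate = 0.03, 0.15   # assumed discount rates before/after updating on TAI

waw_mild = present_value(waw_payoff, mild_rate, waw_year)        # ~6.38
shrimp_mild = present_value(shrimp_payoff, mild_rate, shrimp_year)  # ~1.46
waw_sharp = present_value(waw_payoff, sharp_rate, waw_year)      # ~1.05
shrimp_sharp = present_value(shrimp_payoff, sharp_rate, shrimp_year)  # ~1.29
```

Under the mild rate the long-horizon bet wins; under the sharp rate the near-term one does. The exact crossover depends on numbers we can't pin down, but the direction of the update doesn't.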
Option value
Option value may be worth considering for larger funders. If funding to WAI dries up entirely and closes the window on wild animal welfare as a field, that looks costly at even a 10% probability that WAI's levers remain relevant over the next twenty years. A small allocation to preserving optionality seems worthwhile, but it could also constitute double-counting on EV (I haven't thought this through properly).
Is there anything we should prefer?
Short-term interventions offer better feedback loops, faster learning, and greater resilience to disruption. AI's likely impact probably makes them higher-EV relative to medium-term interventions than they were before, though not dramatically so, perhaps by a factor of three or four. Other heuristics reinforce this on the margin. But it is unlikely that all 10+ year interventions are ruled out. However, there are important questions that I haven't covered, and I wonder what role they play.
Maybe no current animal welfare intervention has positive EV
Our uncertainty about animal minds, moral weights, and the sign of many interventions is severe enough that EV calculations might not even be the right metric, compared to, say, bracketing (though bracketing is not unproblematic). It's not clear what it tells us to do, but in my view, it'd probably lead us somewhat away from focusing on near-term interventions, as the possibility of better interventions in the future would now look comparatively more attractive (though the problems with current interventions might get worse in the future, as I mention below).
Maybe most EV lies in small odds of positively shaping post-TAI worlds after the dust settles
The case for influencing post-TAI outcomes before TAI seems weak to me, because our understanding of how to do so is highly sensitive to individual judgment calls and resists robust action-guidance. The implication is less to redirect toward animal welfare interventions of any horizon and more to preserve organizational flexibility and financial runway, staying positioned to act if the picture gets clearer (and if human-influenced decision-makers are still around). I'm not very compelled by this, mainly because acting in robust and evidence-based ways is hard. Thus, if the world changes significantly enough, it'd probably take time to find new levers that do good effectively, and we'd get a lot wrong in the meantime.
Thank you very much for reading! Have a nice day!
