(Hey Vasco!) How resilient is your relatively high credence that AI timelines are long?
And would you agree that the less resilient it is, the more you should favor interventions that are also good under short AI timelines? (E.g., the work of GiveWell's top charities over making people consume fewer unhealthy products, since the latter pays off far later, as you and Michael discuss in this thread.)
it seems pretty likely to me that aquatic noise reduces populations (and unlikely that it increases them), both fish and invertebrates, by increasing mortality and reducing fertility.
What about trophic cascades? Maybe the populations most directly affected and reduced by aquatic noise were essential for keeping overall wild animal populations down?
Do you think aquatic noise is like some specific forms of fishing that determinately reduce overall populations? If so, is it because you think it directly affects/reduces all populations (unlike some other specific forms of fishing), such that trophic cascades can hardly compensate?
if we're clueless whether Emily will feel pain or not then the difference disappears. In this case I don't have the pro-not-shooting bracketing intuition.
Should this difference matter if we're not difference-making risk-averse or something? In both cases, C is better for Emily in expectation (the same way reducing potential termite suffering is better for termites, in expectation, even if it might make no difference because they might not be sentient).
Now, a new thought experiment. Consider whatever intervention you find robustly overall good in the near term (without bracketing out any near-term effect) and replace A, B, and C with the following:
Do you have the pro-C' intuition, then? If yes, what's different from the sniper case?
Interesting! This resembles Michael St. Jules' hedging proposal.
But the animal representatives, even if they aren't thrilled by retaining another omnivore, have more pressing priorities than trying to help animals by eliminating meat-eaters one by one.
Is it that obvious? I find it hard to come up with interventions that reduce farmed animal suffering (and farmed animal suffering only, ignoring all the rest) more robustly than "minimize the number of entities that incidentally perpetuate this farmed animal suffering". I feel like this should actually be pretty high up the list for someone who strictly spotlights farmed animals (like the farmed animal representatives are supposed to in your spotlighting parliament, if I understand correctly).
Why don't the managers of the Animal Welfare Fund support preventing the existence of more potential meat-eaters (via, e.g., increasing x-risks), then? I presume it's because
Curious if you agree.
For the moral trade between the representatives of human victims of malaria and the farmed animal representatives to be fair, in your setup, the preferences of the latter would have to actually spotlight farmed animals the same way the former spotlight human victims of malaria. I.e., the preferences of the farmed animal representatives in your spotlighting parliament should not be those of real farmed animal advocates, who are not spotlighting farmed animals (otherwise, they would obviously be pro-x-risks and stuff despite the downsides for other beings, the same way the representatives of human malaria victims support anti-poverty work despite the meat-eater problem).
I would still say there are actions which are robustly beneficial in expectation, such as donating to SWP. It is possible SWP is harmful, but I still think donating to it is robustly better than killing my family, friends, and myself, even in terms of increasing impartial welfare.
It's kinda funny to reread this 6 months later. Since then, the sign of your precise best guess flipped twice, right? You argued somewhere (I can't find the post) that shrimp welfare was actually slightly net bad, after estimating that it increases soil animal populations. Later, you started weakly believing animal farming actually decreases the number of soil nematodes (which morally dominate in your view), which makes shrimp welfare (weakly) good again.
(Just saying this to check whether that's accurate, because I find it interesting. I'm not trying to lead you into a trap where you'd be forced to buy imprecise credences or retract the main opinion you defend in this comment thread. As I suggest in this comment, let's maybe discuss stuff like this on a better occasion.)
I suspect Vasco is reasoning about the implications of epistemic principles (applied to our evidence) in a way I'd find uncompelling even if I endorsed precise Bayesianism.
Oh, so for the sake of argument, assume the implications he sees are compelling. You are unsure whether your good epistemic principles E imply (a) or (b).[1]
So then, the difference between (a) and (b) is purely empirical, and MNB does not allow me to compare (a) and (b), right? This is what I'd find a bit arbitrary, at first glance. The isolated fact that the difference between (a) and (b) is technically empirical and not normative doesn't feel like a good reason to say that your "bracket in consequentialist bracketing" move is ok but not the "bracket in ex post neartermism" move (with my generous assumptions in favor of ex post neartermism).
I don't mean to argue that this is a reasonable assumption. It's just a useful one for me to understand what moves MNB does and does not allow. If you find this assumption hard to make, imagine that you learn that we are likely in a simulation that will shut down in 100 years and that the simulators aren't watching us (so we don't impact them).
Since I find impartial consequentialism and indeterminate beliefs very well-motivated, and these combined with consequentialist bracketing seem to imply neartermism (as Kollin et al. (2025) argue), I think it's plausible that metanormative bracketing implies neartermism.
Say I find ex post neartermism (Vasco's view that our impact washes out, ex post, after, say, 100 years) more plausible than consequentialist bracketing being both correct and action-guiding.
My favorite normative view (impartial consequentialism + plausible epistemic principles + maximality) gives me two options. Either:
Would you say that what dictates my view on (a) vs. (b) is my uncertainty between different epistemic principles, such that I can dichotomize my favorite normative view based on the epistemic drivers of (a) vs. (b)? (Such that, then, MNB allows me to bracket out the new normative view that implies (a) and bracket in the new normative view that implies (b), assuming no sensitivity to individuation.)
If not, I find it a bit arbitrary that MNB allows your "bracket in consequentialist bracketing" move and not this "bracket in ex post neartermism" move.
Spent some more time thinking about this, and I think I mostly lost my intuition in favor of bracketing in Emily's shoulder pain. I thought I'd share here.
In my contrived sniper setup, I've gotta do something, and my preferred normative view (impartial consequentialism + good epistemic principles + maximality) is silent. Options I feel like I have:
All these options feel arbitrary, but I have to pick something.
Picking D demands accepting the arbitrariness of letting perfect randomness guide our actions. We can't do worse than this.[2] It is the total-arbitrariness baseline we're trying to beat.
Picking A or B demands accepting the arbitrariness of favoring one over the other, while my setup does not give me any good reason to do so (and A and B give opposite recommendations). I could pick A by sorta wagering on, e.g., an unlikely world where the kid dies of Reye's syndrome (a disease that almost exclusively affects children) before the potential bullet hits anything. But I could then also pick B by sorta wagering on the unlikely world where a comrade of the terrorist standing near him turns on him and kills him. And I don't see either of these two wager moves as more warranted than the other.[3]
Picking C, similarly, demands accepting the arbitrariness of favoring it over A (which gives the opposite recommendation), while my setup does not give me any good reason to do so. I could pick C by wagering on, e.g., an unlikely world where time ends between the potential shot hurting Emily's shoulder and the moment the potential bullet hits something. But I could then also pick A by wagering on the unlikely world where the kid dies of Reye's syndrome anyway. And the same problem as above arises.[4] And this is what Anthony's first objection to bracketing gestures at, I guess.
While I have a strong anti-D intuition with this sniper setup, it doesn't favor C over A or B for me, at the very moment of writing.[5]
Should we think that our reasons for C are "more grounded" than our reasons for A, or something like that? I don't see why. Is there a variant of this sniper story where it seems easier to argue that that's the case (while preserving the complex cluelessness assumption)? And is such a variant a relevant analogy to our real-world predicament?
I'm not necessarily assuming persons-based bracketing (for A, B, or C) here, but rather whatever form of bracketing results in ignoring the payoffs associated with one or two of the three relevant actors.
Our judgment calls can very well be worse than random due to systematic biases (and I remember reading somewhere in the forecasting literature that this happens). But if we believe that's our situation, we can just do the exact opposite of what our judgment calls say, and this beats a coin flip.
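To make the arithmetic behind that last sentence explicit (a toy illustration of mine; the probability p is just a placeholder, not something from the thread): if a systematic bias means our judgment call picks the better option with probability p < 0.5, then
$$\Pr(\text{better option} \mid \text{do the opposite}) = 1 - p \;>\; 0.5 \;>\; p = \Pr(\text{better option} \mid \text{follow the judgment call}),$$
while a coin flip picks the better option with probability exactly 0.5.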
It feels like I'm just adding non-decisive, mildly sweet considerations on top of the complex cluelessness pile I already had (after thinking about the different wind layers, the Earth's rotation, etc.). This will not allow me to single out one of these considerations as a tie-breaker.
This is despite an apparent kind of symmetry, existing only between A and B (not between C and A), that @Nicolas Mace recently pointed to in a doc comment. That symmetry may feel normatively relevant, although it feels superficial to me at the very moment of writing.
In fact, given the apparent stakes difference between Emily's shoulder pain and where the bullet ends up, I may be more tempted to act in accordance with A or B, deciding between the two based on what seems to be the least arbitrary tie-breaker. However, I'm not sure whether this temptation is, more precisely, one in favor of endorsing A or B, or in favor of rejecting cluelessness and the need for bracketing to begin with, or something else.
Nice, thanks! (I gave examples of charities/work where you're kinda agnostic because of a crux other than AI timelines, but this was just to illustrate.)
I had no doubt you thought this! :) I'm just curious whether you see reasons for someone to optimize assuming long AI timelines, despite low resilience in their high credence in long AI timelines.