If we had any tractable way of influencing future AI systems, I might think there was something meaningful to talk about for "futures where we survive."
See my post here arguing against that tractability.
"we can make powerful AI agents that determine what happens in the lightcone"
I think you should articulate a view that explains why you think alignment of superintelligent systems is tractable, so that I can understand why you think it is acceptable to allow such systems to be built. That seems like a pretty fundamental disconnect, and it leaves me unable to follow your (in my view, facile and unconsidered) argument about the tractability of doing something that seems deeply unlikely to happen.
There is a huge range of "far future" that different views prioritize differently, and not all of them need to care about the cosmic endowment at all - people can, for example, care about the coming 2-3 centuries on the basis of low but nonzero discount rates without caring much about the longer-term future.
You introduce a dichotomy not present in my post, then conflate the two types of interventions while focusing only on AI risk - so you end up claiming that two different kinds of what most people would call extinction-reduction efforts differ in tractability - and conclude that there's a definitional confusion.
To respond: first, that has little to do with my argument, and if it's correct, your problem is with the entire debate week framing - which you think doesn't present two distinct options - not with my post! And second, look at the other comments that bring up other types of change as quality-increasing, and try to do the same analysis without creating new categories; you'll then understand better what I was saying.
I agree that the examples you list are ones where organizing and protest played a large role, and I agree that it's effectively impossible to know the counterfactual - but I was thinking of the other examples, several of which happened anyway without any organizing or protest - which seems like clear evidence that these are contributory and helpful factors, but not necessary ones. On the other hand, effectiveness is very hard to gauge!
The conclusion is that organizing is likely or even clearly positive - but it's evidently not required if other factors are present, which is why I thought the claim was overstated.
This seems great, but it does something I keep seeing and find hard to defend: assuming that longtermism requires consequentialism.