Sometimes, as a way to get more strategic clarity about what intermediate goals I want my research to accomplish, I try to evaluate whether things that are locally good or bad are also good for the long-term future. For example: technological growth, economic growth, forecasting, IIDM (improving institutional decision-making), democracy, poverty reduction, non-existential catastrophic risks, and so forth.
My standard argument/frame of thinking goes like this:
“Well, you start with a prior of ~50-50 that any given macroscopic change is good for the long-term future, and then you update on the evidence that-”
And if this is done in conversation, my interlocutor often interrupts me with
“50-50 is a crazy prior because-”
And often it’s some argument that locally good things should be expected to be globally good. Sometimes people reference flow-through effects. There are different flavors of this, but the most elegant version I’ve heard is “duh, good things are good.”
And like, I sort of buy this somewhat. I think it’s intuitive that good things are good, and I’ve argued before that we should start with an intuition that first-order effects (for a specific target variable) are larger than second-order effects. While that argument is strongest for local variables, perhaps we should expect that there’s generally a correlation between a thing’s goodness on one metric and its goodness on other metrics (even if the tails come apart, and things that are amazing for one metric aren’t the best for others).
But when it comes to the long-term future, how much should we buy that things considered good by near-term proxies – proxies that don’t take the long-term future into account – are also good for long-term outcomes?
Put another way, what’s our prior that “an arbitrarily chosen intervention that we believe to be highly likely net positive for the experience of sentient beings over the next 0-5 years also increases P(utopia)?”
--
Related post (though I don't exactly agree with the formalisms)
I'm broadly in your camp, i.e. starting with a 50-50 prior.
I think a useful intuition pump is asking oneself whether some candidate good near-term effect/action X is net good for total insect welfare or total nematode welfare over the next 0-5 years (assuming these are a thing, i.e. that insects or nematodes are sentient). I actually suspect the correlation here is even smaller than for the short-term vs long-term impact variables we typically consider, but I think it can be a good intuition pump because it's so tangible.
I agree with "first-order effects are usually bigger than second-order effects", but my model here is roughly that we have heavy-tailed uncertainty over 'leverage', i.e. which variable matters how much for the long-term future (and that usually includes sign uncertainty).
We can imagine some literal levers that are connected to each other through some absurdly complicated Rube Goldberg machine, such that we can't trace the effect that pulling on one lever is going to have on other levers. Then I think our epistemic position is roughly like: "From a bunch of experience with pulling levers, we know it's a reasonable prior that if we pull on one of these levers, the force exerted on all the other levers is much smaller, though sometimes there are weird exceptions. Unfortunately, we don't really know what the levers are doing – for all we know, even a minuscule force exerted (or a failure to exert a minuscule force) on one of the levers destroys the world."
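The heavy-tailed-leverage picture above can be made concrete with a small simulation. This is only an illustrative sketch under assumed parameters (the lognormal magnitude, the sigma value, and the `p_pos` sign probability are all my own stand-ins, not anything from the post): if an intervention's long-term leverage is heavy-tailed and its sign is a coin flip, then even an intervention that is clearly net-positive near-term has roughly 50-50 odds of being net-positive long-term.

```python
import random

random.seed(0)

def long_term_effect(near_term_good: float, p_pos: float = 0.5) -> float:
    """Toy model of a 'lever pull': the long-term leverage of an action
    has a heavy-tailed magnitude (lognormal with large sigma) and an
    uncertain sign. p_pos is the probability that near-term goodness
    carries over to a positive long-term sign; 0.5 means full sign
    uncertainty, i.e. the 50-50 prior discussed above."""
    magnitude = random.lognormvariate(0.0, 3.0)  # heavy-tailed magnitude
    sign = 1.0 if random.random() < p_pos else -1.0
    return sign * magnitude * near_term_good

# Sample many interventions that are all clearly net-positive near-term.
samples = [long_term_effect(near_term_good=1.0) for _ in range(100_000)]
frac_positive = sum(e > 0 for e in samples) / len(samples)
print(f"P(long-term effect > 0) with full sign uncertainty: {frac_positive:.3f}")
```

With `p_pos = 0.5` the fraction of positive long-term outcomes sits near one half, no matter how confidently positive the near-term effect is; and because the magnitudes are heavy-tailed, the sample mean of `samples` is dominated by a handful of extreme draws, which matches the intuition that a few unknowable "levers" carry most of the long-term stakes.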