Near-term focus, robustness, and flow-through effects

by AshwinAcharya · 4th Feb 2019

I recently read Open Phil's 2018 cause prioritization update, which describes their new policy of splitting funding into buckets that make donations according to particular worldviews. They divide worldviews along two axes: animal-inclusive vs. human-centric, and long-term vs. near-term focused. I think these two factors cover a lot of the variance in EA viewpoints, so they're pretty interesting to examine. As someone who's generally pretty focused on the long term, I found this a good jumping-off point for thinking about arguments against that focus, as well as general concerns about robustness and flow-through effects.

The discussion of near-term focus brings up many good points I've heard against naive long-term-EV maximization. It's hard to predict the future; a long-term focus means you can't get feedback on how your actions turn out; acting on such predictions has a mixed or bad track record; and it can involve confusing moral questions, like the value of creating new beings. Aiming simply to preserve civilization runs the risk of carrying bad values into the far future; aiming at improving human values or averting worst cases gives you an even harder target to hit.[1]

On a more intuitive level, it feels uncomfortable to be swayed by arguments that imply you can achieve astronomical amounts of value, especially if you think you're vulnerable to persuasion; if so, a sufficiently silver-tongued person can convince you to do anything. You can also couch this in terms of meta-principles, or take an outside view on the class of people who thought they'd have an outsized impact on future utility if they did something weird. (I'm not sure what the latter would imply, actually; as Zhou Enlai never said, "What's the impact of the French Revolution? Too soon to tell.")

I think these are mostly quite good objections; if other long-termist EAs are like me, they've mostly heard of these arguments, agreed with them, adjusted a bit, and continued to work on long-term projects of some sort.

The part of me that most sympathizes with these points is the one that seeks robustness and confidence in impact. It's hard for me to adapt to cluster thinking, which I suspect underlies strong near-termist positions, so I mostly think of this as a constrained optimization problem: either minimizing maximum badness subject to some constraint on EV, or maximizing EV minus a robustness penalty. If you don't include a heavy time discount, though, I think it's plausible that this still leads you to "long-term-y" interventions, such as reducing international tension or expanding moral circles of concern. This is partly due to the difficulty of accounting for flow-through effects. I confess I haven't thought much about those for short-term, human-focused interventions like global health and poverty, but my sense is that unless you optimize fairly hard for good flow-through, you're likely to have a nontrivial chance of negative effects.
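To make the two framings a bit more concrete, here's a rough sketch in my own notation (the action set $A$, the penalty weight $\lambda$, the threshold $c$, and the $\mathrm{Risk}$ and $\mathrm{MaxBadness}$ functions are illustrative placeholders, not anything Open Phil defines):

$$\max_{a \in A} \; \mathrm{EV}(a) - \lambda \, \mathrm{Risk}(a)
\qquad \text{or} \qquad
\min_{a \in A} \; \mathrm{MaxBadness}(a) \;\; \text{subject to} \;\; \mathrm{EV}(a) \ge c.$$

Here $\lambda$ encodes how much expected value you're willing to trade away for robustness, and $c$ is the minimum expected value you insist on; much of the near-term vs. long-term disagreement can be read as a disagreement about how large those should be.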

Another way of thinking about this is to consider what you should have done as an EA at some point in the past. It seems plausible that, while you couldn't have averted nuclear or AI catastrophe directly in 1500, you could have contributed to meaningful moral growth, or to differential advances in e.g. medicine (though now we're already in the realm of plausible negative flow-through, via earlier bioweapons -> more deaths, offense-favoring dynamics, and a lack of norms against "WMDs" as a category). Or maybe it's more obvious that ministering to whatever poor and sick people you could reach would have been the best thing?

I haven't built up much knowledge or deep consideration about this, so I'm quite curious what you guys think. If you support short-termism, is it mainly out of robustness concerns? How do you deal with flow-through uncertainty in general, and how do you conceptualize it, if naive EV maximization is inadequate? Open Phil's post suggests capping the impact of an argument at 10-100x the number of persons alive today, but choosing benchmarks/thresholds/tradeoffs for this kind of thing seems difficult to do in a principled way.

[1] Another object-level point, due to AGB, is that some reasonable base rate of x-risk means that the expected lifespan of human civilization conditional on solving a particular risk is still hundreds or thousands of years, not the astronomical patrimony that's often used to justify far-future interventions. Of course, this applies much less if you're talking about solving an x-risk in a way that reduces the long-term base rate significantly, as a Friendly AI would.
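To illustrate the arithmetic behind this footnote (the specific number is my own example, not AGB's): if extinction risk were a constant $p$ per year, the year in which civilization ends would follow a geometric distribution, so

$$\mathbb{E}[\text{years remaining}] = \sum_{t=1}^{\infty} t \, p \, (1-p)^{t-1} = \frac{1}{p},$$

which for, say, $p = 0.2\%$ per year comes to only about 500 expected years, far short of astronomical, unless the intervention also drives $p$ itself down.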