Davidmanheim

You're right that Shapley values are the wrong tool - thank you for engaging with me on that, and I have gone back and edited the post to reflect that! 

I'm realizing as I dig into this that the deeper problem is that act-utilitarianism fundamentally fails for cooperation, and there's a large literature on that fact[1] - I need to do much more research.

But "just model counterfactuals better" isn't a useful response. It's just saying "get the correct answer," which completely avoids the problem of how to cooperate and how to avoid the errors I was pointing at.

  1. ^

    Kuflik, A. (1982). Utilitarianism and large-scale cooperation. Australasian Journal of Philosophy, 60(3), 224–237.

    Regan, D. H. (1980). Co-operative utilitarianism introduced. In Utilitarianism and Co-operation. Oxford University Press.

    Williams, E. G. (2017). Introducing recursive consequentialism: A modified version of cooperative utilitarianism. The Philosophical Quarterly, 67(269), 794–812.

In a single-person game, or one where we're fully aligned and cooperating, we get to choose N. We should get to the point where we're actively cooperating, but it's not always that easy. And in a game-theoretic situation, where we only control one party, we need a different approach than pretending we can choose to invest last when we can't - and I agree that the right approach is more complex than Shapley values.

I'm frustrated by your claim that I make strong claims and assumptions, since what you disagree with me about seems to be conclusions you'd draw from skimming rather than engaging, and from reading extremely uncharitably.

First, yes, cooperation is hard, and EAs do it "partially." I admit that fact, and it's certainly not the point of this post, so I don't think we disagree. Second, you're smuggling the entire argument into "correctly assessed counterfactual impact," and again, sure, I agree that if it's correct, it's not hyperopic - but correctness requires a game-theoretic approach, which we don't generally use in practice.

Third, I don't think we should just use Shapley values, which you seem to claim I believe. I said in the conclusion, "I'm unsure if there is a simple solution to this," and I agreed that it's relevant only where we have goals that are amenable to cooperation. Unfortunately, as I pointed out, in exactly those potentially cooperative scenarios, it seems that EA organizations are the ones attempting to eke out marginal attributable impact instead of cooperating to maximize total good done. I've responded to the comment about Toby's claims, and again note that those comments assume we're not in a potentially cooperative scenario, or that we can pretend to ignore the way others respond to our decisions over time. And finally, I don't know where your attack on economists is coming from, but it seems completely unrelated to the post. Yes, we need more practical work on this, but more than that, we need to admit there is a problem and stop using poorly reasoned counterfactuals about other groups' behavior - something you seem to agree with in your comment.

As I said in the post, "I'm unsure if there is a simple solution to this, since Shapley values require understanding not just your own strategy, but the strategy of others, which is information we don't have. I do think that it needs more explicit consideration..."

You're saying that "if you're maximizing this only over your own actions and their consequences, including on others' responses (and possibly acausal influence), it's just maximizing expected utility."

I think we agree, modulo the fact that we're operating in conditions where much of the information we need to "just" maximize utility is unavailable.

In scenarios where we are actually cooperating, maximizing Shapley values is the game-theoretically optimal solution for maximizing the surplus generated by all parties - am I misunderstanding that? And since we're not interested in maximizing our own attributable impact, but in improving the future, that matters more than maximizing what you get credit for.
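For reference, the standard Shapley value formula, with N the set of players and v the function assigning a value to each coalition, is:

$$\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(|N|-|S|-1)!}{|N|!}\,\bigl(v(S \cup \{i\}) - v(S)\bigr)$$

Each player's share is its marginal contribution averaged over all orderings in which the players could act, and the shares sum to v(N), so total credit matches total value - nothing is double-counted or dropped.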

So yes, obviously, if you're pretending you're the only funder, or stipulating that you get to invest last and that no one will react in future years to your decisions, then it "is likely to lead to worse outcomes overall." But we're not in that world, because we're not playing a single-player game.

Let's make the problem as simple as possible: you have an intervention with 3 groups pursuing it. Each has an independent 50% chance of success, per superforecasters with an excellent track record, and talking about it and coordinating doesn't help, because each group is taking a different approach that can't benefit from coordination.
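A minimal sketch of the arithmetic (the group names and the value-1 normalization are mine, purely for illustration): treat the three groups as players in a coalition game whose value is the probability that at least one succeeds. The "I move last" counterfactual credits each group with only 12.5%, while the Shapley shares sum to the full 87.5%.

```python
from itertools import combinations
from math import factorial

GROUPS = frozenset({"A", "B", "C"})
P_SUCCESS = 0.5  # independent chance each group's approach succeeds

def v(coalition):
    """Coalition value: probability that at least one of its programs
    succeeds (a success is normalized to value 1)."""
    return 1 - (1 - P_SUCCESS) ** len(coalition)

def shapley(player, players=GROUPS):
    """Shapley value: marginal contribution averaged over all orderings."""
    n = len(players)
    others = players - {player}
    total = 0.0
    for size in range(n):
        for subset in combinations(others, size):
            s = frozenset(subset)
            weight = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
            total += weight * (v(s | {player}) - v(s))
    return total

# "Everyone else already acted, I move last" counterfactual credit for one group:
print(v(GROUPS) - v(GROUPS - {"A"}))                   # 0.875 - 0.75 = 0.125 -> the 12.5% figure
# Shapley shares are symmetric and sum to the total expected value:
print([round(shapley(g), 4) for g in sorted(GROUPS)])  # [0.2917, 0.2917, 0.2917]
print(v(GROUPS))                                       # 0.875
```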

And I agree with you that there are cases where it's the wrong tool - but, as you said, "EA is now often in situations like these," and I think we're not getting the answer right!

Yes, this is a reason that in practice, applying Shapley values will be very tricky - you need to account for lots of details. That's true of counterfactuals as well, but Shapley values make it even harder. (But given that we're talking about GiveWell and similar orgs allocating tens of millions of dollars, the marginal gain seems obviously worth it to me.)

It's a problem with using expected utility maximization in a game-theoretic setup without paying attention to other players' decisions and responses - that is, using counterfactuals which don't account for other players' actions, instead of Shapley values, which are a game-theoretic solution to the multi-agent dilemma.

Sorry to revive a dead comment just to argue, but I'm going to disagree with the claims made here as they apply to most of what EA as a movement does, even if they're completely right in many narrowly defined cases.

In most cases where we see that other funders have committed their funds before we arrive, you say that we should view it counterfactually. I think this is probably myopic. EA is a large funder, and this is an iterated dilemma - other actors are 'live' in the relevant sense, and will change their strategies based on knowing our decisions. The cooperative and overall better solution, if we can get other actors to participate in this Pareto-improving change in strategy, is to explicitly cooperate, or at least embrace a decision theory that lets us do so.

(See the discussion here that pointed me back to this comment, where I make a similar argument. And in the post, I point to where GiveWell is actively using counterfactual reasoning when the other funders' decisions are most certainly 'live', because again, it's an iterated game, and the other funders have already said they are adjusting their funding levels to account for the funding that EA provides.)

The most important thing about your decision theory is that it shouldn't predictably and in expectation leave you worse off than if you had used a different approach. My claim in the post is that we're using such an approach, and it leaves us predictably worse off in certain specific cases.

For example, I strongly disagree that it's coherent to say both that all three programs would have zero value in hindsight and that the true value is 12.5% each, because it means that in many plausible cases - say, where the return on investment from a single working solution is only 3x the bar for funding - we should fund none of them.
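To spell out that arithmetic (the specific cost and payoff numbers are mine, just for illustration): suppose each program costs 1 unit, a success is worth 3 units, and the funding bar is an expected return of at least 1 per unit spent.

```python
# Illustrative numbers only: each program costs 1, a success is worth 3.
COST, PAYOFF, P = 1.0, 3.0, 0.5

# Any single program, evaluated on its own, clears the bar:
print(P * PAYOFF, P * PAYOFF > COST)               # 1.5, True -> worth funding

# But the "everyone else already funded, I move last" counterfactual
# credits each program with only the 12.5% marginal probability:
marginal = (1 - (1 - P) ** 3) - (1 - (1 - P) ** 2)  # 0.875 - 0.75 = 0.125
print(marginal * PAYOFF, marginal * PAYOFF > COST)  # 0.375, False -> fund none of the three
```

So a rule that evaluates each program against that marginal counterfactual declines all three, even though funding at least one is clearly better than funding none.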

And regarding Toby's comment, I agree with him - the problem I pointed to in the last section is specifically and exactly relevant: we're committing to things on an ongoing and variable basis, along with others. It's a game-theoretic setup, and as he suggests, "Shapley [should be] applied when the other agents' decisions are still 'live'" - which is the case here. When EA was small, this was less problematic. We're big enough to factor into other large players' calculus now, so we can't pretend we move last in the game. (And even when we think we are, in fact, moving last in committing funds after everyone else, it's an iterated game, so we're not actually doing so.)
