Alice and Bob have joint ownership of a pie, and they would like to split it. If they want to do anything other than split it evenly, there are two common approaches:
- Altruism. They talk it out, determine who likes pie more, and implement the socially optimal allocation.
- Economic self-interest. Each gets half of the pie, and if one of them likes pie more they can buy the other out. Each acts in their own self-interest.
I think the best approach is in between:
- Economic altruism. Each says how much they would be willing to pay for the pie. The price of the pie is the average of the two bids, and the high bidder buys out the low bidder (sketched in code below).
[ETA: Jess points out that Bethany already has a nice post about very similar ideas.]
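To make the mechanism concrete, here is a minimal sketch in Python. The function name and the $3/$1 valuations are my own illustration, and I'm assuming the buyout happens at half the whole-pie price, i.e. the price of the low bidder's half-share:

```python
def split_pie(alice_bid: float, bob_bid: float) -> str:
    """Each party states their willingness to pay for the whole pie.
    The pie is priced at the average bid; the high bidder buys the
    low bidder's half-share at half that price."""
    price = (alice_bid + bob_bid) / 2
    if alice_bid == bob_bid:
        return "Tied bids: either allocation is optimal."
    buyer, seller = ("Alice", "Bob") if alice_bid > bob_bid else ("Bob", "Alice")
    return f"{buyer} takes the pie and pays {seller} ${price / 2:.2f}."

# If Alice values the pie at $3 and Bob at $1, the price is $2. Alice pays
# Bob $1 for his half: she gains a half-pie worth $1.50 to her for $1, and
# he gains $1 for a half-pie worth $0.50 to him. Each captures $0.50 of the
# $1.00 surplus, consistent with "split the surplus evenly" below.
print(split_pie(alice_bid=3.0, bob_bid=1.0))
```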
This scheme has three excellent properties:
1. It is optimal, if everyone behaves honestly.
2. It involves only local decisions. Each participant only has to answer: how much do I want X?
3. It is robust: if someone behaves dishonestly, makes mistakes, or communicates incorrectly, the worst they can do is “walk away” with their half of the pie.
Economic self-interest fails on count 1. Whenever there is a surplus to be divided, economically rational agents will fight to capture more of it, and in the process they will typically make things worse. Alice will overstate how much she likes pie to sell it for more, Bob will understate how much he likes pie to buy it for less, and in the end too little pie will be exchanged. Withholding information is another canonical way to capture a larger share of a smaller surplus.
Altruism fails on counts 2 and 3. If you can count on everyone to be honest, then altruism gets you to an optimal outcome — but so does economic altruism. And thinking about what other people want is hard. In practice, I would expect the altruists to just split the pie evenly, which is uninspiring.
The general principle is: “Make good deals, but don’t worry about how you split the surplus.” The part where you split the surplus evenly isn’t really important, though 50–50 is a nice Schelling point.
Economic behavior can help make efficient, robust decisions. You don’t need to give up on altruism to get those benefits. And conversely, altruism can grease the wheels of trade. I think economic altruism is a good attitude towards many interactions between nice people. If you can’t see how it applies to a particular situation, odds are you aren’t looking hard enough.
I almost stopped there and successfully wrote a short post. But I can't resist raising and addressing possible problems. Read on only if you have the stamina for it.
Problems
Issue 1: Quantifying things is hard. Alice might have an easier time thinking qualitatively about what Bob wants than thinking quantitatively about what she herself wants. But based on experience, I think this is a matter of practice — and that it's good practice.
Perhaps the bigger problem is that when people quantify they also feel the need to be precise. When they can’t be precise, they don’t want to quantify, and when they see something quantified, they assume it is precise. This is a trap. It doesn’t matter that much if you guess wrong about which of Alice and Bob most wants the pie. And it doesn’t matter that much if you guess wrong about whether the pie is worth $1 or $3. Just go with your best guess; it’s what you were doing anyway.
Taking time to make a better estimate is useful if it helps you better allocate the pie — but that would have been helpful even if you weren’t quantifying. Taking time to make a better estimate might also help you get exactly what you are owed, rather than a noisy estimate; but just because you are being quantitative doesn't mean you have to start fighting over the surplus. (Though see the discussion of adverse selection below.)
Quantifying willingness to pay also makes our motives more explicit, where we might prefer that they remain implicit. I'm not going to dwell on this point, but I will suggest that transparency is actually an important social good, and we should try to encourage it. It's unrealistic to be transparent about everything, but we can at least be a bit more transparent.
Comparing between “Alice gets a pie” and “Bob gets a pie” doesn’t seem fundamentally easier than comparing between “Alice gets a pie” and “Alice gets $2.” And if Alice can’t make the second kind of decision, she has bigger problems than being unable to efficiently split a pie with Bob.
Issue 2: People don’t like dealing with money. That’s fine — economics isn’t about money. If Alice and Bob often interact, they can trade pie for hamburgers or labor or whatever.
Or they can introduce a new currency only for use in Alice-Bob trades. The hard part is specifying how Alice and Bob should make decisions about this fictional currency, given that it has no intrinsic value. I’ll leave the details to the reader or to a future post. (One short answer is “logarithmic returns to fictional money”.) Also note the analogy with real currency.
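As one possible reading of that short answer (my illustration, not a specification from the post): if an agent's utility is a fixed value for the pie plus the logarithm of their fictional-currency balance, their maximum honest bid falls out of an indifference condition.

```python
import math

def max_bid(balance: float, pie_utils: float) -> float:
    """Willingness to pay in fictional currency, assuming utility equals
    pie_utils (if you win the pie) plus log(balance).
    Indifference: log(balance) == pie_utils + log(balance - bid),
    which solves to bid = balance * (1 - exp(-pie_utils))."""
    return balance * (1 - math.exp(-pie_utils))

# Bids scale with the agent's balance and can never exceed it, so the
# fictional currency retains meaning even though it has no intrinsic value.
print(max_bid(balance=10.0, pie_utils=0.5))  # ~3.93
```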
Or two altruists can trade certificates of impact if they both use that system. You can do the same with donations; this has a number of problems, but might still work better than directly exchanging cash.
Issue 3: When dealing with profit-seekers who know more than you do, you need a bid-ask spread to compensate for adverse selection: if someone is willing to sell you a car, it's more likely to be junk. This breaks property 3: a dishonest participant can now do real harm. So with a profit-seeking counterparty, you should only buy at a strictly lower price than the one at which you would sell, and as a result deals happen less often than would be optimal.
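A minimal sketch of such a spread, with the fair-value estimate and the margin both standing in as my own illustrative parameters:

```python
def quotes(fair_value: float, margin: float) -> tuple[float, float]:
    """Against a profit-seeking counterparty, bid strictly below and ask
    strictly above your value estimate. The margin compensates for the
    counterparty's private information, at the cost of forgoing some
    mutually beneficial trades."""
    return fair_value - margin, fair_value + margin

# Against a trusted altruist, shrink the margin to bare transaction costs;
# against a stranger offering a suspiciously cheap car, widen it.
print(quotes(fair_value=100.0, margin=10.0))  # (90.0, 110.0)
```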
Homo economicus holds back information only to capture more of the surplus. Since altruists don't care who captures the surplus, they share relevant information. And when dealing with an altruist, you don't need a spread beyond transaction costs.
To be very explicit about it, you might even ask "Is there anything else I should know?", behave altruistically if you get a straight answer, and remain vulnerable only to active malice.
If I'm dealing with a stranger from the internet, then I anticipate adverse selection from the outset; being nice doesn't mean being a fool. But in repeat interactions I normally assume innocence and only defect when it becomes necessary: if you and I interact every day, I won't charge you a spread until I see that I have to.
Issue 4: Does this ever come up? Is there really room for improvement? I think so. And if you have it in mind — if you compare the decisions we actually make to the best decisions we could make — I think that you too will see it all around you.
Comments

Have you heard about how Beeminder cofounders Danny and Bethany use exactly this to split up chores between them? http://messymatters.com/autonomy
Thanks for not stopping at the short post. The second half saved me from spending too long exploring the idea's problems on my own.
I think this is a fruitful direction to think in.
Of relevance: my house (four aspiring effective altruists) splits our rent using this fair division calculator.
In both this post and the certificates of impact one, I don't see exactly where you want to go with all this. If the goal is to figure out how to share pies, the mental frame we use for that is not only economic but moral too. The whole book Moral Tribes is an attempt to give us a currency for trade among agents who disagree about the "right way to share pies."
Finding elegant mathematical solutions to altruistic problems may be an interesting topic for a successor to Freakonomics, or for a successor to those complex economic theories that only physicists-turned-economists understand: occasionally such theories win Nobel prizes, but they rarely have any practical impact.
These two posts seem fundamentally different from most of your writing at Ordinary Ideas and Rational Altruist. I would like to know where you are trying to make progress, and what you would expect if progress were made in that area (by you or by others who develop these ideas further).