
Short summary

A moral public good is something many people want to exist for moral reasons—for example, people might value poverty reduction in distant countries or an end to factory farming. 

If future people care somewhat about moral public goods, but care more about idiosyncratic selfish goods, then there may be significant gains from them coordinating to fund moral public goods. Even though it’s in each individual's personal interests to fund selfish goods, everyone is better off if they all switch to funding moral public goods.

Ensuring that this coordination happens seems potentially very important for how well the future goes.

We tentatively think that this argument suggests distributing power relatively widely (so that there are more gains from trade), while improving our ability to coordinate to fund moral public goods. It also suggests encouraging evidential cooperation in large worlds (ECL).

Long summary

Suppose that after the intelligence explosion there's a society of a million people each deciding what to do with a distant galaxy they own. Every person can use their resources to either simulate themselves (“self-sims”) or create something that everyone values, perhaps hedonium or civilizations of happy, flourishing people (“consensium”[1]). Assume for now that they value both goods linearly, but value their own self-sims a thousand times as much as consensium and value others’ self-sims negligibly.

Absent trade, everyone spends all their resources on self-sims. But they could instead agree to spend everything on consensium. Although they value consensium a thousand times less than self-sims, they get a million times as much of it by participating in the trade—a thousand-fold increase in value by each person’s lights!
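The arithmetic behind that thousand-fold claim is simple enough to check directly. The numbers below are the toy example's own; the linear-utility framing is the note's stylized assumption:

```python
# Toy model from the summary: a million people, each valuing their own
# self-sims 1000x as much as consensium (and others' self-sims not at all).
N_PEOPLE = 1_000_000
SELF_SIM_VALUE = 1000   # utility per unit of your own self-sims
CONSENSIUM_VALUE = 1    # utility per unit of consensium, to everyone
RESOURCES_EACH = 1      # each person controls one galaxy's worth

# No trade: everyone funds self-sims, enjoyed only by themselves.
utility_no_trade = SELF_SIM_VALUE * RESOURCES_EACH

# Full trade: everyone funds consensium, which all N people enjoy.
utility_trade = CONSENSIUM_VALUE * RESOURCES_EACH * N_PEOPLE

print(utility_no_trade)                  # 1000
print(utility_trade)                     # 1000000
print(utility_trade / utility_no_trade)  # 1000.0 — the thousand-fold gain
```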

In general terms, rather than each party pursuing idiosyncratic goods (valued only by them), everyone agrees to pursue consensus goods (valued by everyone). This is a form of moral trade, which might have especially large gains from trade when people have linear preferences in both idiosyncratic and consensus goods. We’re excited about this both because we think that linear preferences are reasonably likely and because we think that other methods of moral trade work less well when all participants have linear preferences.[2]

Consensium is a type of public good. Everyone derives value from the existence of consensium, whether or not they contributed to funding it. We call goods like consensium moral public goods.

[Figure: comparison of no trade vs. trade for moral public goods. Without trade, each person funds self-sims (utility = 1). With trade, all fund consensium valued by everyone, raising utility per person to 1,000.]

We’ve presented a stylized trade-off between something totally particular (“self-sims”) and something totally universal (“consensium”). In practice, there's probably a spectrum.[3] Mutually beneficial trades can occur anywhere along this spectrum, whenever people shift resources from more idiosyncratic to more widely valued goods.

Of course, this requires that people have both idiosyncratic and consensus goals. It’s not totally clear that this will be true. Maybe everyone’s values will fully converge, and they’ll spend all their resources pursuing those shared values, without any need for trade. Or maybe everyone’s values will entirely diverge, leaving them with no shared goals at all. In that case, coordinating on moral public goods isn’t possible.

But we think it's reasonably likely that people will continue to have both idiosyncratic and widely shared preferences. If so, these trades could matter a lot for whether the future goes well.

Some strategic implications:

  1. Distribute power widely.[4] The more people who share power, the greater the gains from trade, and the more likely that people switch from funding idiosyncratic goods to consensus goods. So this is a general argument in favour of distributing power as widely as possible, as long as large-scale coordination is possible—which we think is doable via taxation. 
  2. But avoid highly fragmented governance. You only get to capture these large gains from trade if you’re actually able to coordinate. This speaks against highly decentralized approaches—whether libertarian futures where individuals have total control of their own resources, or massively multipolar worlds with millions of independent polities and no mechanism to compel contributions. Funding public goods is hard because everyone has a strong incentive to free-ride: in the toy example, each person prefers that everyone else switch to consensium while they keep funding self-simulations. Historically, the scalable method for funding public goods has been governments that force individuals to contribute.

    Combining this point with the previous point, moral public goods are most likely to be funded if power is broadly distributed but the government can tax people to fund consensus goods that they vote for.[5]

  3. Develop voluntary mechanisms for funding moral public goods. Coordination technology might eventually solve the free-rider problem and allow people to make deals to fund moral public goods without government coercion. We're excited about research in this direction, though we think the free-rider problem is surprisingly hard to escape.
  4. Encourage ECL. Evidential Cooperation in Large Worlds (ECL)[6] combines evidential decision theory[7] with the notion that the multiverse may contain huge numbers of agents with decision procedures correlated with yours.

    ECL plausibly provides a very strong mechanism for funding moral public goods. If you shift $1 from something only you value to something valued by all correlated agents, they do the same. This gets you a large increase in consensus goods for a small sacrifice of idiosyncratic goods—a great deal by your lights. With many correlated agents who have diverse idiosyncratic values but share your consensus goals, the multiplier is potentially huge (e.g., many extra dollars' worth of consensium for each $1 you move away from self-sims).

  5. It might matter less how much people prioritize consensus goods, and more what those consensus goods actually are. In the past, we've worried that even if there’s widespread moral convergence, people might still prioritize other goals like personal consumption, status competitions, or idiosyncratic ideological projects. But the argument above suggests that if enough people care about a goal even a little bit, they'll shift all their spending toward it. The difference between a very “selfish” person (who cares very little about consensus goods) and a very “altruistic” one (who cares a lot) might not matter so much, as long as everyone cares at least a bit.

    What does matter is what those consensus goals actually are. There could be substantial differences in value—by our lights—between different conceptions of pleasure, beauty, well-being, or consciousness. And there are potential consensus goals that would be bad or valueless, like sadism or nothingness.

One important qualification: our toy example assumed that people value both idiosyncratic and consensus goods linearly. We're massively uncertain what the structure of people's preferences will look like in the long run, and so we’re uncertain about our conclusions. We checked whether our results held across various classes of plausible-seeming utility functions and, for most of them, coordination and distribution of power were helpful for increasing spending on consensus goods. 

But there are plausible utility functions where these results don't hold. For example, human behavior today can be modeled by preferences that allocate a fixed fraction of resources to each type of good, regardless of price.[8] Under those preferences, a coordination mechanism that effectively makes consensium cheaper wouldn't actually get people to spend more on it. And for some utility functions, broadening the distribution of resources can actually decrease spending on consensus goods, even when coordination is possible. 

The structure of the rest of the note is as follows:

  • We define moral public goods, and clarify their relationship to moral trade.
  • We first assume a specific model of people’s values (where idiosyncratic and consensus preferences are both linear). We show that, in the context of causal trades, moral public goods get the most funding if resources are widely distributed and coordination is possible. We discuss specific mechanisms to enable coordination on moral public goods, including government taxation, social norms, and voluntary deals.
  • Next, we turn to acausal coordination and argue that evidential cooperation in large worlds (ECL) is very well-suited for funding consensus goods.
  • Then we consider how robust our arguments are to our assumptions that people will have linear preferences.
  • Finally, we assess how valuable spending on moral public goods would actually be.

What are moral public goods?

The consensium example from above illustrates a general dynamic that Paul Christiano calls a “moral public good.” Many people may value some goods for moral reasons. No one values the good enough to fund it themselves, but it’s in everyone’s collective interest to fund it. As far as we’re aware, the dynamic was first identified by Milton Friedman,[9] and developed further by other economists.[10] Moral public goods are different from other public goods in that people don’t personally benefit from the good. Instead, they just care intrinsically about the good existing. 

Examples of moral public goods might include existential risk mitigation, poverty relief, environmental protection, art creation, scientific inquiry, and animal welfare improvements. (Although often these are regular public goods, too, since people derive personal benefit from many of these goods. We acknowledge that the distinction is somewhat fuzzy and many people will derive both a personal and moral benefit from the same good—you might personally value not dying in an extinction event and morally value the existence of future people.[11])

Just like other public goods, moral public goods are liable to be underfunded,[12] because of the free-rider problem: everyone prefers paying their share over not getting the good at all, but they prefer even more to let others fund it while they get to keep their money. We currently solve this coordination problem by governments collecting taxes and spending the proceeds on consensus goods. 

We think that public goods, and whether we coordinate to fund them, might be very important for how good the long-run future is. In the future, people may have the opportunity to allocate resources in distant galaxies that they will never personally visit. For those decisions, most of the benefit a decision-maker can derive is moral or ideological, not personal. Thus, we think coordination on shared moral goals is especially important.

How does this relate to moral trade?

Trade over moral public goods is an example of moral trade

Classic cases of moral trade often focus on people trading over idiosyncratic moral preferences. For example, consider two people who each control a galaxy’s resources. One person cares about hedonic pleasure while the other cares about freedom. Left to their own devices, the freedom lover would create a society where everyone is perfectly free, while the hedonic utilitarian would create one where everyone is maximally blissful. But there's an opportunity for trade. The hedonic utilitarian could tweak their society to increase freedom at low cost to pleasure, while the freedom lover could look for ways to increase pleasure without significantly compromising freedom. Both get more of what they want.

This is nice, but the gains seem fairly limited when both parties are trading idiosyncratic goods that they both value linearly. With just two trading partners, even in the most optimistic case—where each party achieves 99.999% of their possible value in both galaxies—trade only gives you a 2x multiplier on value. If you wanted 100x gains from trade, you would need to find a hybrid good that was simultaneously nearly optimal for 100 different value systems. We wouldn't expect one to exist in most cases.
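As a quick sanity check on that bound, assume two partners with linear values and a near-perfect hybrid good:

```python
# The note's bound on hybrid-good moral trade with linear values: with two
# partners, even a near-perfect hybrid only roughly doubles each party's value.
partners = 2
efficiency = 0.99999   # fraction of each party's optimum the hybrid achieves

value_no_trade = 1.0                       # your own galaxy, optimized for you
value_with_trade = partners * efficiency   # both galaxies, near-optimal for you

print(value_with_trade / value_no_trade)   # ≈ 2x multiplier, no more
```

Getting a 100x multiplier this way would require a single good that is simultaneously near-optimal for 100 value systems, which is why the public-goods route scales better.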

The moral public goods case, in contrast, is a moral trade where people agree to shift resources from idiosyncratic preferences that they individually value highly to consensus preferences that everyone values a little.

Coordinating on moral public goods works especially well when everyone has preferences that are linear in resources (see below)—exactly the case where the gains from coordinating on hybrid goods seem especially limited. It's also easier to scale to huge numbers of trading partners, since everyone just produces whatever best satisfies their shared values rather than needing to find hybrid goods that satisfy many value systems. This scalability matters because gains from trade grow with the number of participants: in our toy example in the summary, a million people coordinating on something they all valued a tiny bit yielded 1000x gains from trade.

The downside of coordinating on moral public goods is that it does require a large number of people to share some consensus preferences. This might not always be true (see below). But when such shared preferences do exist, we expect coordination on moral public goods to yield larger gains from trade than coordination on hybrid goods, at least when there are many participants with linear preferences.

Scenario 1: causal coordination 

For now, we’ll assume that beings with decision-making power have quasilinear preferences over three types of goods. First, there are some goods that they value for self-interested reasons, like food, shelter, and luxuries for their biological self, which exhibit steeply diminishing returns. We’ll call these goods basics. Second, there are some goods that they value for idiosyncratic reasons, which have linear utility. These could include simulations of themselves or people living according to their own culture. We’ll call these goods self-sims. Finally, there are some goods that everyone values linearly. This could be new civilizations crammed with flourishing, joy, adventure, connection, beauty, and so on. We’ll call these goods consensium. Everyone values consensium, but no one values anyone else’s basics or self-sims.

To help us illustrate more concretely, we’ll assume a particular utility function, with b and s representing each person’s spending on basics and self-sims, respectively, and m representing total spending (by everyone) on consensium:

u(b, s, m) = 40√b + s + m / (5 × 10⁹)

That is: people care a lot about basic goods but get diminishing utility from them; they care quite a lot about self-sims; and they care only a tiny bit about consensium.

Given this utility function, how do people spend their wealth? Consider three different scenarios. In each scenario, we’ll assume the price of each good is $1, total wealth of $100T, and there are 10B people. (The precise numbers don’t matter; this is just to illustrate.)

| Scenario | Basics | Self-sims | Consensium |
| --- | --- | --- | --- |
| Single decision-maker controls all resources[13] | $400 | $100T – $400 | $0 |
| Resources divided evenly among 10B people, no coordination[14] | $4T | $96T | $0 |
| Resources divided evenly among 10B people and they coordinate[15] | $1T | $0 | $99T |

The key qualitative upshot is this: with good coordination and widely distributed resources, the effective price of the consensus goods drops dramatically. Every $1 you spend on consensium results in $10B going towards it—a 99.99999999% discount.[16] On this model, people buy vastly more consensium, both absolutely and as a share of their budget, than in either the dictatorial or uncoordinated scenario.
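For concreteness, the table's three rows can be reproduced with a short optimization sketch. The coefficients here are our reconstruction—u(b, s, m) = 40√b + s + m/(5 × 10⁹)—chosen so the optima match the table's dollar figures:

```python
# Reproduce the three-scenario table under a reconstructed quasilinear
# utility: u(b, s, m) = 40*sqrt(b) + s + m / 5e9, with every good priced
# at $1, $100T of total wealth, and 10B people.
WEALTH = 100e12
PEOPLE = 10e9
CONSENSIUM_WEIGHT = 1 / 5e9   # value of $1 of consensium vs $1 of self-sims

def allocate(budget, match):
    """One person's optimal ($basics, $self-sims, $consensium) spend.

    match: dollars of consensium produced per dollar this person spends
    ($10B under full coordination, $1 otherwise).
    """
    # Utility per dollar of the best linear good (self-sims give 1).
    best_linear = max(1.0, match * CONSENSIUM_WEIGHT)
    # Buy basics until their marginal utility 20/sqrt(b) falls to best_linear.
    basics = min(budget, (20 / best_linear) ** 2)
    rest = budget - basics
    if match * CONSENSIUM_WEIGHT > 1.0:
        return basics, 0.0, rest
    return basics, rest, 0.0

# Scenario 1: single decision-maker with everything.
print(allocate(WEALTH, 1))                 # ≈ ($400, $100T - $400, $0)

# Scenario 2: 10B uncoordinated people (reporting totals across everyone).
b, s, m = allocate(WEALTH / PEOPLE, 1)
print(b * PEOPLE, s * PEOPLE, m * PEOPLE)  # ≈ ($4T, $96T, $0)

# Scenario 3: 10B coordinated people; each $1 is matched into $10B.
b, s, m = allocate(WEALTH / PEOPLE, PEOPLE)
print(b * PEOPLE, s * PEOPLE, m * PEOPLE)  # ≈ ($1T, $0, $99T)
```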

This argument suggests we should try to ensure both widely distributed power and good coordination mechanisms for funding public goods.

How widely does power need to be distributed? This depends on how much you expect people to value idiosyncratic goods relative to consensus goods. In our example above, each person valued self-sims 5 billion times as much as they valued consensium, so we needed at least 5 billion people for consensium to get funded at all.

We’re quite uncertain about how much people will value idiosyncratic goods relative to consensus goods. We tentatively think that ratios of a few thousand or a few million seem quite plausible and ratios as high as a few billion are somewhat plausible, so distributing power across thousands, millions, or even billions of people could be valuable.[17]

How to coordinate causally

There are three approaches to funding public goods that might work for moral public goods after the singularity: governments, social norms, and voluntary contracts.

Today, public goods are funded primarily by governments. Governments force everyone to contribute to public goods, regardless of whether they actually value the good. Even in a democracy, a minority’s preferred public goods might go unfunded, while their taxes pay for goods they're indifferent to. It would be better if there were a way to allow arbitrary combinations of individuals to coordinate and fund the goods they collectively value, without forcing contributions from those who do not value the good. 

We were initially optimistic that this would be possible through voluntary contracts. After all, it's in everyone's collective interest to get these goods funded, and we expect that artificial superintelligence (ASI) will be able to resolve some barriers to coordination that prevent mutually beneficial deals today, like transaction costs or difficulties making credible commitments. But it seems surprisingly difficult to get around the free-rider problem. Advanced technology might even open up new ways to free-ride, like self-modifying so that you no longer value the moral public good (see Appendix B for more details on funding moral public goods via voluntary contracts). 

Another approach to funding public goods is social norms. Individuals contribute to public goods to avoid social sanctions, win praise from their peers, or just to live up to their own self-conception as cooperative and norm-abiding. We’re relatively pessimistic about this approach because it seems less scalable and less flexible than either governments or voluntary contracts. Social pressure is probably most effective within social communities, which might cap out at hundreds or thousands of people—too few to include everyone you’d want to coordinate with. Also, social norms may end up targeting arbitrary goals rather than funding moral public goods. Lastly, social norms emerge organically, making their terms hard to renegotiate if they prescribe excessively harsh punishments or the wrong level of contributions from individuals.

Some other historical mechanisms for funding public goods exploit the goods being (partially) excludable.[18] But moral public goods are entirely non-excludable: once the good exists, everyone who wanted it benefits.

Scenario 2: ECL

We might also be able to fund moral public goods through acausal coordination. This section presents one proposal for such coordination, drawing on the idea of evidential cooperation in large worlds (ECL). A core premise of ECL is that there are likely many causally disconnected agents—in civilizations inside our universe but outside our lightcone, civilizations in different Everett branches, or civilizations in other parts of the Tegmark IV multiverse. Each of these agents faces a choice about how to allocate their resources: toward idiosyncratic goods valued only by them, or toward consensus goods that many beings throughout the multiverse would value. We can't causally affect their decisions, but our own choice—whether to fund consensus goods over idiosyncratic ones—provides evidence about what other agents with sufficiently similar decision procedures will choose.

To illustrate, let's return to our toy example where each agent cares about one idiosyncratic good (self-sims) and one consensus good (consensium):

  1. If an agent spends $1 on self-sims, they get evidence that huge numbers of other agents spend on self-sims. But they only value another agent's self-sims if that agent is an exact copy of them.[19] There are some agents who are exact copies—it's a big multiverse—but most of the agents correlated with them aren't exact copies, so those self-sims are worthless to the original agent. Their dollar is matched only by their copies.
  2. If an agent spends $1 on consensium, they get evidence that all those correlated agents shift $1 to consensium too. Unlike self-sims, they care about consensium created by any of those agents. Their dollar is thus matched across the multiverse by anyone whose decision is sufficiently correlated with theirs.

Whether this trade is worthwhile from an agent's perspective depends on the following ratio:

multiplier = (number of agents whose decisions correlate with yours) / (number of your exact copies)

This ratio determines the multiplier they get from coordinating with everyone funding consensium. If the multiplier is large enough to overcome the lower value they place on consensium relative to self-sims, the trade is worthwhile.

(Actually, you should weight each agent by the degree of correlation, but the above formula ignores that for simplicity.[20])
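A minimal sketch of this decision rule, ignoring correlation weighting as noted above; the population figures plugged in at the end are placeholders, not estimates:

```python
# Sketch of the ECL multiplier: your consensium dollars are matched by all
# correlated agents, while your self-sim dollars are matched only by exact
# copies (the only agents whose self-sims you value).
def ecl_multiplier(correlated_agents, exact_copies):
    """Consensium dollars matched per $1, relative to self-sim matching."""
    return correlated_agents / exact_copies

def trade_worthwhile(correlated_agents, exact_copies, selfsim_vs_consensium):
    # Worth shifting $1 to consensium iff the matching multiplier exceeds
    # how much more you value self-sims than consensium.
    return ecl_multiplier(correlated_agents, exact_copies) > selfsim_vs_consensium

# Placeholder example: 1e12 correlated agents, 100 exact copies,
# self-sims valued 1000x as much as consensium.
print(ecl_multiplier(1e12, 100))          # a 10-billion-fold multiplier
print(trade_worthwhile(1e12, 100, 1000))  # True
```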

There are many possible trading partners. There are astronomical numbers of possible human genomes and even humans with the same genome might diverge due to different life histories. And there are many other possible minds that we could cooperate with—alien intelligences, AIs, and whatever else might exist. 

If your idiosyncratic values are indexical—you only care about your personal consumption—then you’ll share those values with none of your possible trading partners. But your decision gives you some evidence about what those others decide. The evidence doesn’t even need to be that strong to be significant. Even a 1% correlation could matter a lot when multiplied across huge numbers of potential trading partners. 

Even if your idiosyncratic values aren't indexical—even if they could in principle be shared by agents outside your lightcone—the multipliers might still be large. The space of possible idiosyncratic values is vast. Some agents will share your decision procedure but have different idiosyncratic values. (The authors of this piece disagree about how tightly linked these are in practice, and therefore disagree about the magnitude of the multiplier.)

The ECL case differs from the causal case in several important ways.

First, ECL removes the incentive to free-ride. In the causal story, each agent wants everyone else to fund consensus goods while they buy idiosyncratic goods. Under ECL, this isn't an option. If an agent buys idiosyncratic goods, so does everyone else correlated with them. Thus, the agent is incentivized to pay for consensus goods even without central enforcement.

And with ECL, funding for consensus goods is much less sensitive to the distribution of power on Earth. In the causal case, we only got large "discounts" on consensus goods if power was widely distributed; a single dictator preferred to just fund idiosyncratic goods. But with ECL, even a world dictator gets massive "discounts" on consensus goods from coordinating with others in the multiverse.

Of course, unlike the causal case, whether consensus goods get funded depends on whether agents want to do acausal cooperation at all—which depends on their decision theories and their beliefs about their degree of correlation with others.

Robustness to different structures of preferences

So far we have mostly assumed that people value consensus and idiosyncratic goods linearly. We think that this is plausible. After ASI, people will be extremely wealthy. If they have any linear preferences at all, their spending will mostly be determined by those preferences, since they'll quickly saturate their sublinear ones. And there are theoretical arguments for having linear preferences.[21] Meanwhile, people with sublinear preferences may end up controlling few resources—they'd be less willing to adopt riskier but higher-reward strategies, like trading away guaranteed resources near Earth for resources further out in space that might already be occupied. As such, we expect them to trade away most of their resources to people with linear preferences.  

With linear utility functions, we found that many coordinated people fund more public goods than either a single decision-maker or many uncoordinated people, which suggested that both coordination and wider resource distribution increased funding for public goods. 

We’re quite uncertain about what preference structures humans will have after the singularity. But we checked whether these conclusions held for a few other utility functions that seemed plausible to us. Among the preference structures we checked, enabling coordination was always helpful (or at least not harmful) for increasing spending on consensus goods. However, broadening the distribution of power was sometimes actively counterproductive.

It’s also very possible that we’re missing a common form that future preferences will take, so we remain pretty unsure about the generality of our conclusions.

[Chart: funding for consensus goods under the dictator, many-uncoordinated, and many-coordinated scenarios, across different utility assumptions.]

With that caveat in mind, here are the other preference structures we checked:

  1. Preferences with diminishing marginal returns in idiosyncratic and consensus goods. Someone might value many goods—idiosyncratic and consensus—each with its own rate of diminishing marginal returns (DMR). They'll shift marginal spending from idiosyncratic to consensus goods based on the relative marginal returns. Coordination essentially increases the marginal returns on consensus goods by a constant factor (the number of people coordinating), which can shift more spending into consensus goods. So, as in the linear case, coordination is pretty robustly good: it increases, or at least doesn't decrease, spending on public goods. 

    However, in the absence of coordination, widely distributing resources can actually reduce spending on consensus goods. Compare a dictator holding all the resources to N uncoordinated people, each with 1/N of the resources. The dictator will be able to spend more in absolute terms on idiosyncratic consumption, so they experience much lower marginal returns on that consumption and are correspondingly more willing to shift funding toward consensus spending. Intuitively, a single person's idiosyncratic desires saturate faster than N people's combined desires, freeing up more resources for consensus goods.

    So more public goods get funded both in a world with a single decision-maker and in a world with many coordinated decision-makers, compared to a world with many uncoordinated decision-makers. How do the coordinated multipolar world and the single decision-maker world compare? 

    It depends on the precise shape of the utility function. For some DMR functions—like √x or x^0.9, where x is the amount of resources spent on idiosyncratic goods—many coordinated people fund more public goods than single dictators. Here the boost from coordination matters more than the hit from having to fund many people’s idiosyncratic goods. For other DMR utility functions—e.g., min(x, T) for some constant threshold T—dictators may fund more consensus goods. See Appendix A for more details.

    (These same conclusions largely apply if someone values consensus goods linearly and has DMR in idiosyncratic goods (or vice versa).)

  2. Preferences to spend fixed fractions of resources on consensus and idiosyncratic goods, regardless of price. This matches how people today typically allocate resources. Even when people learn that certain charities achieve huge amounts of good per dollar, they very rarely reallocate spending between idiosyncratic and consensus goods. This suggests they are not price-sensitive, but rather spend a fixed fraction of their resources on consensus goods regardless of how effectively those resources can be deployed.

    (You can also get this spending pattern if you model a human as containing two sub-agents (one that cares only about idiosyncratic goods, one that cares only about consensus goods) and these sub-agents bargain to determine the human’s actions.[22])

    With this utility function, cheaper public goods make no difference to allocation and coordination doesn’t help. Resource distribution also doesn’t matter—each individual spends the same share of resources on consensus and idiosyncratic goods regardless of how many resources they control.
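The DMR claims above can be checked numerically. Here is a sketch under two assumed DMR shapes—square-root utility and a hard saturation threshold—with all parameters chosen purely for illustration:

```python
# Numeric sketch of the DMR comparisons. Each person gets f(x) from
# idiosyncratic spending x, plus a small linear value C per dollar of
# total consensus spending (theirs or anyone else's). All numbers here
# are illustrative assumptions, not figures from the note.
W = 1e12   # total resources
N = 1000   # number of people when power is distributed
C = 1e-6   # marginal value each person places on $1 of consensus goods

# Case 1: square-root DMR, f(x) = sqrt(x). Optimal idiosyncratic spend:
# stop when f'(x) = 1/(2*sqrt(x)) equals the marginal value of the
# alternative use of a dollar.
def sqrt_idio_spend(budget, alt_marginal):
    return min(budget, (1 / (2 * alt_marginal)) ** 2)

dictator_idio = sqrt_idio_spend(W, C)            # alternative: consensus at C
coord_idio = N * sqrt_idio_spend(W / N, N * C)   # coordination matches N-fold
uncoord_idio = N * sqrt_idio_spend(W / N, C)     # no matching

consensus_dict = W - dictator_idio
consensus_coord = W - coord_idio
consensus_uncoord = W - uncoord_idio
# Coordinated > dictator > uncoordinated consensus funding for sqrt DMR.
print(consensus_coord > consensus_dict > consensus_uncoord)   # True

# Case 2: threshold DMR, f(x) = min(x, T): desires saturate sharply at T.
T = 1e9
consensus_dict_thresh = W - T        # the dictator saturates once
consensus_coord_thresh = W - N * T   # N people each saturate at T
print(consensus_dict_thresh > consensus_coord_thresh)         # True
```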

Convergence and moral public goods funding

Coordination to fund moral public goods isn’t possible if there's full convergence or full divergence. If everyone's values fully converge, they'll spend all their resources pursuing shared goals without any need for trade. If everyone's values fully diverge, there are no shared goals to coordinate on in the first place.

But if a group shares some consensus preferences while retaining different idiosyncratic ones, coordination to shift funding from idiosyncratic goods to consensus goods is possible. Gains from trade are largest if there's widespread convergence on consensus goals. But even with limited convergence, any subset of people with shared consensus goals can still benefit by trading among themselves.

How valuable is it to fund moral public goods?

This depends on how valuable the consensus goods are. 

On subjectivism, if there's widespread convergence, most people will end up valuing those consensus goods—so unless you expect your values to substantially diverge from most people's on reflection, this should be great by your lights. Things are less clear if you expect low convergence, or if you expect to be in the minority. You'll still benefit from coordinating with others who share some consensus goals with you, but other coalitions might fund goods you dislike. 

For example, people might coordinate on excessively punishing wrongdoers (negative value) or leaving large swathes of space as nature preserves (zero value), when we would have preferred that they hadn’t coordinated at all and instead funded personal consumption (weak positive value). But we don’t expect that this effect dominates because in general most people’s values aren’t directly opposed.

Another issue is threats. Just as coordination lets a group do more with a fixed budget by funding shared goals rather than idiosyncratic ones, it might also make it easier to threaten that group with something they all dislike. We don't think this will leave the threatened parties worse off on net by their own lights, but it might be bad for more downside-focused agents. They bear the risk of threats against their values without as much of the corresponding upside. 

Thus far we’ve argued that coordination will improve the value of the future by most people’s lights. But if moral realism is correct, then we should ask whether coordination will lead to the objectively best use of resources. There’s some reason for optimism here: under moral realism, lots of people might place at least some value on the impartially best use of resources, making that a very broadly appealing good.

But it’s unclear that people will coordinate to fund the most broadly appealing goods. People have a range of preferences that vary in how particular or universal they are. Moral public goods mechanisms can shift funding from satisfying more idiosyncratic preferences to more widespread ones—but they don’t necessarily fund the most universal preferences. For some people, the largest gains from trade might come from coordinating with a smaller group with especially similar preferences. If a nationalist values national benefit 100x more than consensium, then they’d rather coordinate with 1 billion fellow nationalists than 10 billion people globally.[23]

And even if the most broadly appealing goods are funded, they might not be the objectively best use of resources. For example, humans might especially value the wellbeing of human-like minds. If coordination is only among humans, then public goods funding might flow toward creating societies of happy humans, even if non-human minds could experience more joy, freedom, or fulfillment per unit resource.

This last concern seems more serious for causal than for acausal coordination. Causal coordination will be limited to humans and AIs originating from Earth. Acausal coordination could involve a much wider variety of minds—aliens with very different biologies and civilizational histories. If we're correlated with them, then we're more likely to end up funding goods that are broadly appealing to all these types of minds, which are more likely to be the morally correct use of resources. But it’s possible that civilizations capable of ECL will tend to share similar values—maybe preferences for stuff that’s instrumentally useful like survival, growth, and knowledge—even if those aren’t objectively valuable.

Conclusion

If large numbers of agents can coordinate to fund goods they all value, this can produce substantial gains from trade. These gains are potentially large enough that even quite selfish actors would devote significant resources to consensus goods. We're excited about this type of trade because it could enable a near-best future by channeling substantial resources toward widely valued goods, even without any single agent heavily prioritizing those goods. This conclusion is most clear-cut when agents have linear utility functions, but it probably extends to some other plausible utility functions, including some with diminishing returns.

These benefits depend on there being a sufficient number of agents who share some consensus goals and are able to coordinate. In the causal case, we’re most optimistic about coordination to fund consensus goods if power is widely distributed and there are governments that can collect taxes to fund public goods. We’re excited about further research on voluntary coordination methods, but these will have to deal with incentives to free-ride and/or to strategically modify one’s own preferences. In the acausal case, ECL enables large trading coalitions even if there’s extreme power concentration on Earth, and it eliminates free-rider problems.

This article was created by Forethought. Read the original version including appendices on our website.

  1. ^

    We call the good that best satisfies the people’s shared values “consensium,” after hedonium, the good that best satisfies hedonic utilitarianism.

  2. ^

    See below for a comparison with another type of moral trade where people fund “hybrid” goods that simultaneously satisfy multiple value systems.

  3. ^

    From most idiosyncratic to most broadly appealing, this spectrum could include: copies of yourself; societies of humans who share your nationality, culture, or ideology; societies of human-like minds; experiences that maximize value according to a widely shared (but not universal) ethical system; and activities that maximize value according to the objectively true ethical system (if there is one).

  4. ^

    Of course, this argument in favour of power distribution should be balanced with the many other considerations about the optimal distribution of power.

  5. ^

    This minimal government structure could also help with other public goods for spacefaring societies, like preventing vacuum decay.

  6. ^

    The concept originates from this paper, where it's called "multiverse-wide superrationality." This blog post offers an accessible explanation.

  7. ^

    The principle that you should act as you'd want all agents with sufficiently similar decision procedures to act, since your choices are evidence about theirs.

  8. ^

    For example, people today rarely massively increase the percentage of their income donated to charity after learning that charities are much more effective than they previously believed.

  9. ^

    Chapter 12 of “Capitalism and Freedom” (1962): “It can be argued that private charity is insufficient because the benefits from it accrue to people other than those who make the gifts—again, a neighborhood effect. I am distressed by the sight of poverty; I am benefited by its alleviation; but I am benefited equally whether I or someone else pays for its alleviation; the benefits of other people's charity therefore partly accrue to me. To put it differently, we might all of us be willing to contribute to the relief of poverty, provided everyone else did. We might not be willing to contribute the same amount without such assurance. In small communities, public pressure can suffice to realize the proviso even with private charity. In the large impersonal communities that are increasingly coming to dominate our society, it is much more difficult for it to do so.”

    It’s ironic that the target of Christiano’s argument, who overlooks this dynamic, is David Friedman, Milton Friedman’s son.

  10. ^

    E.g. Hochman & Rodgers (1969), “Pareto Optimal Redistribution”.

  11. ^

    You might also experience a warm glow from having helped prevent extinction. We classify this as a private good, as it’s excludable—only the people who contributed the funding get to enjoy the satisfaction of having helped out.

  12. ^

    That is, funded below the socially optimal amount: the level where the marginal benefit (summed across everyone who benefits) equals the marginal cost.

  13. ^

    The marginal return on self-sims (0.025) is always higher than the marginal return on consensium (5 × 10⁻¹²), so no money gets spent on consensium. The marginal return on self-sims is higher than the marginal return on basics (0.5/√x, where x is the amount spent on basics) when x > $400. So the dictator spends $400 on basics and then the rest is spent on self-sims.

  14. ^

    Each decision-maker has a budget of $100T/10B = $10,000. By the same reasoning as the previous footnote, each person spends $400 on basics and the rest of their budget ($9,600) on self-sims. So across 10B people, $4T is spent on basics and $96T is spent on self-sims.

  15. ^

    Once everyone is coordinating, a person who spends an extra dollar effectively causes 10B dollars to be spent on consensium. The value of spending a dollar on consensium is thus 10B × 5 × 10⁻¹² = 0.05. Since this exceeds the marginal return on self-sims (0.025), no money gets spent on self-sims. And since 0.05 exceeds the marginal return on basics (0.5/√x) when x > $100, each person spends $100 on basics and the rest on consensium.
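The allocations in footnotes 13–15 can be reproduced numerically. This is a sketch under assumptions inferred from the figures above: self-sims return a constant 0.025 utils per dollar, coordinated consensium returns 0.05, and basics have marginal return 0.5/√x (square-root utility), the schedule consistent with the $400 and $100 thresholds.

```python
BUDGET = 10_000        # per-person budget: $100T / 10B people (footnote 14)
MR_SELF_SIMS = 0.025   # constant marginal return on self-sims
MR_CONSENSIUM = 0.05   # effective return per dollar on consensium once all 10B coordinate

def basics_spend(best_linear_mr):
    # Spend on basics until their marginal return, 0.5 / sqrt(x), falls to the
    # best available linear return: 0.5 / sqrt(x) = mr  =>  x = (0.5 / mr)**2.
    return (0.5 / best_linear_mr) ** 2

# Without coordination, the best linear good is self-sims (0.025 beats 5e-12).
basics_alone = basics_spend(MR_SELF_SIMS)   # 400.0
# With coordination, consensium dominates (0.05 > 0.025).
basics_coord = basics_spend(MR_CONSENSIUM)  # 100.0

print(basics_alone, BUDGET - basics_alone)  # 400.0 on basics, 9600.0 on self-sims
print(basics_coord, BUDGET - basics_coord)  # 100.0 on basics, 9900.0 on consensium
```

Multiplying the per-person figures by 10B people recovers footnote 14's totals: $4T on basics and $96T on self-sims in the uncoordinated case.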

  16. ^

    Thanks to Toby Ord for this framing.

  17. ^

    There might be benefits to increasing the number of powerholders even beyond what’s needed to make consensium worth funding. More people means larger gains from trade, which could make coordination more attractive. For example, in Appendix C, we investigate an assurance contract for funding public goods and find that—holding fixed the ratio of value assigned to idiosyncratic goods and consensium—public goods are more likely to be funded with larger numbers of people, due to the greater gains from trade. Of course, larger groups also have a harder time coordinating. In our analysis of the assurance contract, we found that the larger gains from trade outweighed the difficulties in coordinating, but this might not hold for other mechanisms.

  18. ^

    For example, lighthouses may have been historically funded by harbor fees. This made them partially excludable, since only ships that came into the harbor and paid the fee would get the full benefit of a nearby lighthouse.

  19. ^

    Or they might not even value that—maybe they only value self-sims causally downstream of themselves.

  20. ^

    The degree of correlation between you and another agent A is the extent to which you update on A's decision after observing your own. In this case, it is Pr(A funds consensium | you fund consensium) - Pr(A funds consensium | you do not fund consensium).

  21. ^

    First, among views of population ethics that satisfy some standard technical axioms, only those that are linear with respect to population size (at a given level of wellbeing) are separable in space and time—that is, the value of doing good today doesn't depend on the amount of good in distant galaxies or in the distant past. See Blackorby, Bossert, and Donaldson’s Population Issues in Social Choice Theory.

    Second, even if you think that maximum attainable value is a concave function of resources devoted to promoting the good, if the total amount of goodness in the universe is much larger than the amount you can affect, then you will value the differences you can make approximately linearly (because concave functions are locally approximately linear). And, plausibly, the total amount of goodness in the universe is much larger than the amount you can affect. See No Easy Eutopia for more discussion.

  22. ^

    Let’s model someone as containing two sub-agents with equal weight, one that cares about idiosyncratic goods with utility function u(i) = i and one that cares about consensus goods with utility function u(c) = c (where i and c are respectively the amounts spent on idiosyncratic and consensus goods). Then the result of Nash bargaining will be to maximize: i × c. This is a Cobb-Douglas utility function and a person with that utility function will split their resources between idiosyncratic goods and consensus goods at a ratio of 1:1, regardless of their total level of resources.

    (This relies on the idiosyncratic goods and consensus goods having the same functional form. If instead that person’s consensus-good-valuing sub-agent valued resources linearly and their idiosyncratic sub-agent valued resources logarithmically, the result of Nash bargaining would be to maximize log(i) × c. For this utility function, as resources grow, more resources are spent on the consensus goods.)

    Note that the utility function produced by the Nash bargain is based on resource expenditure relative to the disagreement point (where the individual spends no resources on consensus or idiosyncratic goods). So in the utility functions above, c is not the total societal spending on the consensus good but rather the individual's spending on the consensus good. That’s not really a public good anymore, but rather a particular type of idiosyncratic good.
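As a numerical illustration of the two bargaining objectives in this footnote (a sketch using grid search rather than the closed-form solution, under the assumption that both objectives take the forms described above): maximizing i × c splits the budget evenly at any scale, while maximizing log(i) × c devotes a growing share to consensus goods as the budget grows.

```python
import math

def best_split(objective, budget, steps=100_000):
    # Grid search over divisions of the budget; returns the fraction that
    # goes to idiosyncratic goods at the objective's maximum.
    best_i, best_val = None, -math.inf
    for k in range(1, steps):
        i = budget * k / steps
        val = objective(i, budget - i)
        if val > best_val:
            best_i, best_val = i, val
    return best_i / budget

def cobb_douglas(i, c):
    return i * c            # both sub-agents linear: Nash product i * c

def mixed(i, c):
    return math.log(i) * c  # log idiosyncratic sub-agent, linear consensus sub-agent

for budget in (100, 10_000, 1_000_000):
    print(budget, best_split(cobb_douglas, budget), round(best_split(mixed, budget), 3))
```

Under the Cobb-Douglas objective the idiosyncratic share stays at exactly 0.5 for every budget; under the mixed objective it falls from roughly a quarter at a budget of 100 toward zero as the budget grows.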

  23. ^

    Consider a nationalist choosing between: (a) self-sims, valued at 1 util/resource unit; (b) national benefit, valued at 0.01 util/unit; and (c) consensium, valued at 0.0001 util/unit. With 10 billion people total, 10% of whom are nationalists for the same nation, the nationalist funds (b): coordinating with 1 billion co-nationalists yields an effective multiplier of 1B × 0.01 = 10M, while coordinating with all 10 billion on consensium yields only 10B × 0.0001 = 1M. More generally, an agent prefers coordinating with a smaller group of size n on a good valued at u over a larger group of size N on a good valued at U iff n × u > N × U.
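The comparison in this footnote can be spelled out in a few lines (a sketch: the "effective multiplier" simply treats one unit of resources contributed by a coordinating group of size n, on a good valued at u per unit, as worth n × u).

```python
def effective_multiplier(group_size, value_per_unit):
    # Value per unit of resources when a group of this size coordinates:
    # each member's unit effectively buys group_size units of the good.
    return group_size * value_per_unit

self_sims  = effective_multiplier(1, 1.0)              # no coordination: 1.0
national   = effective_multiplier(10**9, 0.01)         # 1B co-nationalists
consensium = effective_multiplier(10 * 10**9, 0.0001)  # all 10B people

print(self_sims, national, consensium)
# The nationalist prefers the smaller coalition: n * u = 10M exceeds N * U = 1M.
```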
