Our actions and decisions clearly affect future generations. Climate change is the canonical example, but this is also true for social norms, values, levels of economic growth, and many other factors. Indeed, if we give equal weight to future individuals, it is likely that the effect of our actions on the long-term future far outstrips any short-term impacts.
However, future generations do not hold any power – as they do not yet exist – so their interests are often not taken into account to a sufficient degree. To fix this problem, we could introduce some form of representation of future generations in our political system (see e.g. [1], [2], [3] for previous discussion). In this post, I will consider different ways to empower future generations and discuss key challenges that arise.

Hi Tobias,

I'm glad to see CRS take something of an interest in this topic, and I'm particularly happy to see some meta-level discussion of representing the interests of future generations, which has been sorely missing from the longtermism space.

We are in full agreement that most extant proposals to represent future generations involve very weak institutions and often rely on tenuous political commitments. In fact, it's because political commitments are so tenuous that political institutions to represent future generations must at first be weak. Strong institutions for future generations have historically been repealed very rapidly, as Jones, O'Brien, and Ryan (2018) have argued from a couple of case studies.

We are also in full agreement that there are problems of predicting the interests of future generations, and that getting more objective information about their interests is a key problem. This problem grows more severe over longer timescales. This is why many of the solutions I personally favor most are information interventions, such as creating research bodies like the now-defunct Office of Technology Assessment, which can distill and package extant expertise for legislative bodies, as well as posterity impact assessments, which can create strong incentives to gather more information about the future.

I find much less compelling the idea that "if there is the political will to seriously consider future generations, it’s unnecessary to set up additional institutions to do so," and "if people do not care about the long-term future," they would not agree to such measures. The main reason I find this uncompelling is just that it overgenerates in very implausible ways. Why should women have the vote? Why should discrimination be illegal?

The main long-term function that I see longtermist institutional reform, or any other kind of institutional reform, playing is an institutional signalling role. There is compelling evidence that legal and political reform significantly shifts the norms and attitudes that people come to see as acceptable (Berkowitz and Walker 1967, Bilz and Nadler 2009, Flores and Barclay 2015, Tankard and Paluck 2016, 2017, Walker and Argyle 1964). Shifting laws and institutional norms credibly signals information about group attitudes to anyone who has access to information about those laws and norms. In this case, it signals that good, sensible, right-thinking people think that future generations are of great importance and that our political systems must be responsive to their interests. For this reason, there is a chicken-and-egg problem for institutional reform, but this chicken-and-egg problem is very friendly to supporters of institutional reform. Reforming institutions changes attitudes, which in turn creates the political will necessary to reform institutions further. Reformed institutions in turn create stable Schelling points that prevent value drift away from core values.

For this reason, longtermist institutional reform is quite beneficial for information-gathering purposes. Representing future generations creates greater political and cultural will to gather objective information about the interests of future generations. It's an exercise in movement-building.

I don't know if you meant to narrow in on only those reforms I mention which attempt to create literal representation of future generations or if you meant to bring into focus all attempts to ameliorate political short-termism. In the latter case, it's worth noting that there are a large variety of likely causes of short-termism. Some of them are epistemic (we don't know what to do) and motivational (we lack the political will), but others are merely institutional. In these latter cases, the problem is not that we don't have enough information or will, but rather that the right information is not getting to the right people or that institutional mechanisms are preventing appropriately-motivated and informed actors from acting for the long term. These sorts of problems sometimes require different fixes, and they can sometimes be fixed simply by creating designated stakeholders who create relevant coordination points in government and have time allocated explicitly to considering the long term. Political problems are often a problem of institutional incentives rather than of political will, and there are currently very strong incentives to focus on the short term. I canvass many of the various causes of political short-termism in my (now rather lengthy) review on longtermist institutional design and policy.

As a classical utilitarian, I'm also not particularly bothered by the philosophical problems you set out above, but some of these problems are the subject of my dissertation and I hope to have some solutions for you soon.

In short, I think there is reason for more optimism about longtermist institutional reform than you express here, but I am happy to have some further discussion of the problem and to see a call to consider more seriously the epistemic problems that plague such reform along with some possible solutions.

Hi Tyler,

thanks for the detailed and thoughtful comment!

I find much less compelling the idea that "if there is the political will to seriously consider future generations, it’s unnecessary to set up additional institutions to do so," and "if people do not care about the long-term future," they would not agree to such measures. The main reason I find this uncompelling is just that it overgenerates in very implausible ways. Why should women have the vote? Why should discrimination be illegal?

Yeah, I agree that there are plenty of reasons why institutional reform could be valuable. I didn't mean to endorse that objection (at least not in a strong form). I like your point about how longtermist institutions may shift norms and attitudes.

I don't know if you meant to narrow in on only those reforms I mention which attempt to create literal representation of future generations or if you meant to bring into focus all attempts to ameliorate political short-termism.

I mostly had the former in mind when writing the post, though other attempts to ameliorate short-termism are also plausibly very important.

I'm glad to see CLR take something of an interest in this topic

Might just be a typo but this post is by CRS (Center for Reducing Suffering), not CLR (Center on Long-Term Risk). (It's easy to mix up because CRS is new, CLR recently re-branded, and both focus on s-risks.)

As a classical utilitarian, I'm also not particularly bothered by the philosophical problems you set out above, but some of these problems are the subject of my dissertation and I hope to have some solutions for you soon.

Looking forward to reading it!

Ah, it looks like I read your post to be a bit more committal than you meant it to be! Thanks for your reply! And sorry for the misnomer, I'll correct that in the top-level comment.

Nice post!

Realistically, we can only represent future moral agents, who may not adequately consider the interests of future moral patients (such as nonhuman animals or nonbiological beings).

Could you expand on what you mean by the first part of that sentence, and what makes you say that?

It seems true that only moral agents can "vote" in the sort of meaningful sense we typically associate with "voting". But it also seems like, in representing future beings, we're primarily representing their preferences, or something like that. And it seems like this doesn't really require them "voting", and thus could be done for future moral patients in ways that are analogous to how we could do it for future moral agents.

For example, you quote Paul Christiano's suggestion that we could:

Subsidize liquid prediction markets about the results of these surveys in all future years. For example, we can bet about people in 2045’s answers to “Did we do too much or too little about climate change in 2015-2025?”
We will get to see market odds on what people in 10, 20, or 30 years will say about our current policy decisions. For example, people arguing against a policy can cite facts like "The market expects that in 20 years we will consider this policy to have been a mistake."

It seems we could analogously subsidize liquid prediction markets for things like the results in 2045, conditional on passing X or Y policy, of whatever our best metrics are for the welfare or preference-satisfaction of animals, or of AIs whose experiences matter but who aren't moral agents. And then people could say things like "The market expects that [proxy] will indicate that [group of moral patients] will be better off in 2045 if we pass [policy X] than if we pass [policy Y]."
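To make that a bit more concrete, here's a toy sketch of what a pair of subsidized conditional markets could look like. It uses a logarithmic market scoring rule, which is one standard way to implement the "subsidize liquid prediction markets" part: the sponsor's worst-case loss is capped at b·ln(number of outcomes), so a larger b is effectively a larger liquidity subsidy. The policy names, the welfare proxy, and all the numbers below are invented purely for illustration.

```python
import math

class LMSRMarket:
    """Toy market maker using a logarithmic market scoring rule (LMSR).

    The sponsor's worst-case loss is bounded by b * ln(num_outcomes),
    so choosing a larger b is effectively a larger liquidity subsidy.
    """

    def __init__(self, outcomes, b=100.0):
        self.b = b
        self.q = {o: 0.0 for o in outcomes}  # net shares sold per outcome

    def _cost(self):
        return self.b * math.log(sum(math.exp(v / self.b) for v in self.q.values()))

    def price(self, outcome):
        """Current market-implied probability of `outcome`."""
        total = sum(math.exp(v / self.b) for v in self.q.values())
        return math.exp(self.q[outcome] / self.b) / total

    def buy(self, outcome, shares):
        """Buy `shares` of `outcome`; returns the trader's cost."""
        before = self._cost()
        self.q[outcome] += shares
        return self._cost() - before

# Hypothetical conditional markets, one per policy. Each pays out on
# whether some agreed welfare proxy for the moral patients in question
# has improved by 2045 (defining that proxy is the hard part).
market_x = LMSRMarket(["improves", "does_not_improve"])  # given policy X
market_y = LMSRMarket(["improves", "does_not_improve"])  # given policy Y

# A trader who believes policy X helps bids the "improves" price up:
cost = market_x.buy("improves", 80.0)
print(f"Trader pays {cost:.1f} for 80 shares")  # ~47.8

p_x = market_x.price("improves")  # ~0.69 after the trade above
p_y = market_y.price("improves")  # still 0.50, no trades yet
print(f"P(proxy improves | policy X) = {p_x:.2f}")
print(f"P(proxy improves | policy Y) = {p_y:.2f}")
```

If the "improves" price in the policy-X market settles above the one in the policy-Y market, that's the market "saying" the relevant moral patients are expected to be better off under X.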

Of course, coming up with such metrics is hard, but that seems like a problem we'll want to fix anyway.

And perhaps, at the least, we could use a metric along the lines of "the views in 2045 of experts or the general public on the preference-satisfaction or welfare of those moral patients". Even if this still boils down to asking for the views of future moral agents, it's at least asking about their beliefs about this other thing that matters, rather than just what they want, so it might give additional and useful information. (I'd imagine this being done in addition to asking what those moral agents want, not instead of that.)

I should mention that I hadn't thought about this issue at all till I read your post, so those statements should all be taken as quite tentative. Relatedly, I don't really have a view on whether we should do anything like that; I'm just suggesting that it seems like we could do it.

Hi Michael,

thanks for the comment!

Could you expand on what you mean by the first part of that sentence, and what makes you say that?

I just meant that proposals to represent future non-human animals will likely gain less traction than the idea of representing future humans. But I agree that it would be perfectly possible to do it (as you say). And of course I'd be strongly in favour of having a Parliamentary Committee for all Future Sentient Beings or something like that, but again, that's not politically feasible anytime soon. So we have to find a sweet spot where a proposal is both realistic and would be a significant improvement from our perspective.

It seems we could analogously subsidize liquid prediction markets for things like the results in 2045, conditional on passing X or Y policy, of whatever our best metrics are for the welfare or preference-satisfaction of animals, or of AIs whose experiences matter but who aren't moral agents. And then people could say things like "The market expects that [proxy] will indicate that [group of moral patients] will be better off in 2045 if we pass [policy X] than if we pass [policy Y]."
Of course, coming up with such metrics is hard, but that seems like a problem we'll want to fix anyway.

I agree, and I'd be really excited about such prediction markets! However, perhaps the case of nonhuman animals differs in that it is often quite clear what policies would be better for animals (e.g. better welfare standards), whether it's current or future animals, and the bottleneck is just the lack of political will to do it. (But it would be valuable to know more about which policies would be most important - e.g. perhaps such markets would say that funding cultivated meat research is 10x as important as other reforms.)

By contrast, it seems less clear what we could do now to benefit future moral agents (seeing as they'll be able to decide for themselves what to do), so perhaps there is more of a need for prediction markets.

I just meant that proposals to represent future non-human animals will likely gain less traction than the idea of representing future humans.

Ah, that makes sense, then.

However, perhaps the case of nonhuman animals differs in that it is often quite clear what policies would be better for animals (e.g. better welfare standards), whether it's current or future animals, and the bottleneck is just the lack of political will to do it. [...]
By contrast, it seems less clear what we could do now to benefit future moral agents (seeing as they'll be able to decide for themselves what to do), so perhaps there is more of a need for prediction markets.

This is an interesting point, and I think there's something to it. But I also tentatively think that the distinction might be less sharp than you suggest. (The following is again just quick thoughts.)

Firstly, it seems to me that we should currently have a lot of uncertainties about what would be better for animals. And it also seems that, in any case, much of the public probably is uncertain about a lot of relevant things (even if sufficient evidence to resolve those uncertainties does exist somewhere).

There are indeed some relatively obvious low-hanging fruit, but my guess would be that, for all the really big changes (e.g., phasing out factory farming, improving conditions for wild animals), it would be hard to say for sure what would be net-positive. For example, perhaps factory-farmed animals have net positive lives, or could have net positive lives given some changes in conditions, in which case developing clean meat, increasing rates of veganism, etc. could be net negative (from a non-suffering-focused perspective), as they would remove wellbeing from the world.

Of course, even if facing such uncertainties, expected value reasoning might strongly support one course of action. Relatedly, in reality, I'm quite strongly in favour of phasing out factory farming, and I'm personally a vegetarian-going-on-vegan. But I do think there's room for some uncertainty there. And even if there are already arguments and evidence that should resolve that uncertainty for people, it's possible that those arguments and bits of evidence would be more complex or less convincing than something like "In 2045, people/experts/some metric will be really really sure that animals would've been better off if we'd done X than if we'd done Y." (But that's just a hypothesis; I don't know how convincing people would find such judgements-from-the-future.)
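To illustrate that expected-value point with a toy calculation (every number here is invented, and the real inputs are exactly what's contested):

```python
# All numbers are made up for illustration.
p_net_negative = 0.8     # credence that factory-farmed lives are net negative
gain_if_negative = 10.0  # welfare gain from a phase-out if those lives are net negative
loss_if_positive = -3.0  # welfare loss from a phase-out if those lives are net positive

ev_phase_out = (p_net_negative * gain_if_negative
                + (1 - p_net_negative) * loss_if_positive)
print(ev_phase_out)  # 7.4 > 0: the phase-out wins in expectation here
```

On these inputs the phase-out looks clearly net-positive in expectation, but modest shifts in the credence or the stakes could flip the sign, which is the sense in which there's still room for uncertainty.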

Secondly, it seems that there are several key things where it's quite clear what policies would be better for future moral agents, and the bottleneck is just the lack of political will to do it. (Or at least, where what would be better is about as clear as it is for many animal-related things.) E.g., reducing emissions; doing more technical AI safety research; more pandemic preparedness (though I would've said that last year; maybe now things are more where they should be). Perhaps the reason is that these policies relate to issues where future moral agents won't "be able to decide for themselves what to do", or at least where it'd be much harder for them to do X than it is for us to do X.

Perhaps the summary of these ideas is that:

  • This sort of prediction market might be useful both for generating information and for building political will / changing motivations.
  • That might apply somewhat similarly both for what future moral agents would want and for what future moral patients would want.
  • But that relies on getting the necessary support to set up such prediction markets and have people start paying attention, which might be harder in the case of future moral patients, as you note.