Our actions and decisions clearly affect future generations. Climate change is the canonical example, but this is also true for social norms, values, levels of economic growth, and many other factors. Indeed, if we give equal weight to future individuals, it is likely that the effects of our actions on the long-term future far outstrip any short-term impacts.
However, future generations do not hold any power – as they do not yet exist – so their interests are often not taken into account to a sufficient degree. To address this problem, we could introduce some form of representation of future generations in our political system. (See e.g. 1, 2, 3 for previous discussion.) In this post, I will consider different ways to empower future generations and discuss key challenges that arise.
Nice post!
Could you expand on what you mean by the first part of that sentence, and what makes you say that?
It seems true that only moral agents can "vote" in the sort of meaningful sense we typically associate with "voting". But it also seems like, in representing future beings, we're primarily representing their preferences, or something like that. And it seems like this doesn't really require them "voting", and thus could be done for future moral patients in ways that are analogous to how we could do it for future moral agents.
For example, you quote Paul Christiano's suggestion that we could:
It seems we could analogously subsidize liquid prediction markets on things like the 2045 values, conditional on passing policy X or Y, of whatever our best metrics are for the welfare or preference-satisfaction of animals, or of AIs whose experiences matter but who aren't moral agents. People could then say things like "The market expects that [proxy] will indicate that [group of moral patients] will be better off in 2045 if we pass [policy X] than if we pass [policy Y]."
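To make the mechanics a bit more concrete, here is a minimal sketch (in Python) of what "reading off" such conditional markets might look like. Everything in it – the market names, the proxy, and the numbers – is invented purely for illustration, not a proposal for an actual market design.

```python
# Hypothetical sketch: comparing market-implied expectations of a welfare
# proxy for some group of moral patients in 2045, conditional on which
# policy is passed. All values and names below are made up for illustration.
conditional_markets = {
    "policy_X": 0.62,  # market-implied E[proxy in 2045 | policy X passes]
    "policy_Y": 0.48,  # market-implied E[proxy in 2045 | policy Y passes]
}

def compare_policies(markets: dict) -> str:
    """Return the policy under which the markets expect the proxy to be highest."""
    return max(markets, key=markets.get)

if __name__ == "__main__":
    best = compare_policies(conditional_markets)
    print(f"Markets expect {best} to leave this group of moral patients "
          f"better off in 2045 (implied proxy value "
          f"{conditional_markets[best]:.2f}).")
```

The hard part, of course, is not this comparison step but choosing a proxy that the markets can actually be settled against, which is what the next point is about.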
Of course, coming up with such metrics is hard, but that seems like a problem we'll want to fix anyway.
And perhaps, at the least, we could use a metric along the lines of "the views in 2045 of experts or the general public on the preference-satisfaction or welfare of those moral patients". Even if this still boils down to asking for the views of future moral agents, it's at least asking about their beliefs about this other thing that matters, rather than just what they want, so it might give additional and useful information. (I'd imagine this being done in addition to asking what those moral agents want, not instead of that.)
I should mention that I hadn't thought about this issue at all till I read your post, so those statements should all be taken as quite tentative. Relatedly, I don't really have a view on whether we should do anything like that; I'm just suggesting that it seems like we could do it.
Ah, that makes sense, then.