For several years I have considered myself a longtermist, and I find the ITN case for improving how the far future goes compelling. However, I believe I have encountered a structural problem with how longtermism is typically applied, one that may significantly undermine the good longtermists are trying to do. In short, I fear that even a well-meaning and effective attempt to improve the far future, one that achieves the object-level good it was intended to, can be unjust at a meta-level. This post explains the problem and discusses my initial thoughts on how to solve or mitigate it, though it doesn’t represent a serious research effort, and I have no training in philosophy. After a quick search, I don’t believe this subject has been covered elsewhere, but feel free to point me to other sources in the comments if the discussion is less novel than I think. It is also worth flagging that my claim might, at a distance, seem similar to those made by Torres, but it is entirely distinct.
Almost everyone cares about making the future go well. For the sake of discussion, let’s operationalise “care” as a deliberate effort to affect the course of future events. For most people, the graph of caring against future event date would tail off sharply well before the end of their grandchildren’s expected lifetimes. My argument does not rest on this being strictly true or on there being no exceptions. I recognise, for example, that environmental concerns motivate many people, and those concerns extend far into the future. Nor do I require the assumption that most people care in selfish ways. I simply need to establish that longtermists have chosen to care about events much further into the future than the wider population does.
This basic model needs to be expanded to include the possibility of locking in a change to the future trajectory of whatever you care about. If this were not possible, it would be irrational to care about anything other than the imminent; you would have as little control over the future as over the past. For the moment, let’s assume that the capacity to lock in trajectory changes is evenly shared by all people; this is a conservative assumption, since an even distribution would undermine my argument more than the uneven one we observe in reality. Similarly, let’s not bring considerations of altruistic impact into the discussion just yet. We would expect the typical person to have a greater effect than a longtermist on events within one or two lifetimes, but much, much less effect on the far future. Eventually, that future will become the present. Assuming there are humans around to see it, the events which today’s longtermists hope to affect will fall within the typical person’s care horizon. The neartermist general public of the future might find that the trajectory changes locked in by our contemporaries are incompatible with what they care about. I’m sure the reader can think of well-meaning technological or political changes locked in centuries ago that many people today lament. This presents an asymmetry in choice and consent: longtermists can choose to accept having less sway over the near future, but the people whose lives they affect have no choice about inheriting the consequences of longtermist efforts.
Constraining this to within a single lifetime of the present day, a similar framing might apply to the advent of transformatively powerful AI. I don’t mean to duplicate the discussion of different AI threat/takeover models here, but there is an appreciable chance that human agency will matter much less after this capability level is reached. There might be a window, measured in years or a few decades, in which trajectory changes can be locked in as readily as they can now. If so, it won’t be future generations judging our actions in hindsight. Present friends, relatives, and countrymen will judge each other for the part they played in the transformation. The electorates of democracies will demand that their governments prove how the population’s interests were represented. Furthermore, this is precisely the sort of event that a longtermist might care about affecting; I feel implicated in this. Regrettably, a majority of the population does not yet realise the full scope of the game we’re playing in preparing for that transformation. There’s a reason “AGI-pilled” has entered the vernacular: it’s a genuinely useful term. Again, all else being equal, longtermists may have a disproportionately greater effect on the lives of the general population in 2070 than vice versa.
[Of course, all else is not equal. It’s not central to this thread of reasoning, but I must note that essentially every card-carrying longtermist is far, far more empowered in their ability to affect future events than Joe Public.]
When I noticed this framing, a shadow of colonialism came with it. Noble though I think my intentions are, I have power over others through our shared future and plan to use it. Despite those noble intentions, some (though not all) of those standing to be affected would vehemently object. Here, the future feels analogous to how European colonists viewed North America in centuries past:
- It is a vast, desirable resource.
- Other people hope to use that resource, but for different ends. They’re not playing the same game.
- Their game is less dominant than mine. If I claim the resource, they won’t be able to stop me.
- [Without intending to unpack this fully here,] I believe that playing my game would be better for them, even though playing it would sacrifice much of what they care about.
A knee-jerk reaction to this frame might be to set about trying to AGI-pill the population. Perhaps this is a good idea for other reasons, but it does not alleviate the philosophical tension. It won’t grant them sovereignty over their share of the future; it can’t protect their freedom to pursue what is important to them. Conquering America wouldn’t have been okay if Christopher Columbus had met with the native chieftains and told them his people would seek to subjugate theirs. Nor would it have been okay if he had offered them a few rifles to emulate, tipped them off about smallpox, and given them a 200-year grace period. Fair warning doesn’t excuse colonisation.
Another reaction, for someone who accepts this frame, is to forsake longtermism entirely. This doesn’t seem wise either. Firstly, to wipe all colonial undertones from your planning, you would need to plan on a timescale so short that nobody affected is making their own plans on a shorter horizon. This seems tantamount to giving up on planning entirely. More pragmatically, if everyone morally motivated enough to care about the colonial implications of longtermism chose to stop practising it, the future would be handed on a platter to those who aren’t so motivated. I expect the outcome would be broadly dreadful in the eyes of an altruist, whatever kind of altruism you subscribe to.
What should one do with this? My work-in-progress conclusion is that longtermist theories of impact should focus on preserving the option value available to our future selves and to coming generations. This is implicit in how many longtermists approach their work, but I think it should be explicit, and it should override theories aimed at bringing about improvements according to a specific worldview. At the very least, the risk of colonising future trajectories should be taken seriously. I’m not confident in this conclusion, as it seems to preclude anyone from ever locking in how the world should be. It feels obvious that there are some bad things we should want to lock out forever: tuberculosis, battery farming, Morris Dancing. How should we trade off optionality against moral burdens like these? We can all agree that humanity has committed many atrocities in the past that were considered acceptable at the time. How confident can we be that, by preserving options, we aren’t also preserving the conditions for that pattern to repeat? When would the drive to preserve optionality end? Without a prescribed end, this philosophy requires the Long Reflection to go on forever, and implies that MacAskill’s Viatopia is actually the final destination after all. I cannot currently convince myself otherwise without endorsing temporal colonisation, but this has the air of the Repugnant Conclusion, and I’m uncomfortable with it.
