It seems to me that you use “intergenerational justice” and longtermism in a somewhat synonymous fashion. I would disagree with this usage: longtermism is a specific set of positions, whereas I see intergenerational justice as a more open concept that can be defined and discussed from different positions.
I also think that there are reasonable critiques of longtermism. In the spirit of your post, I hope you stay open to considering those views.
I have only read the SummaryBot comment, but based on that I wanted to leave a literature suggestion that could be interesting to people who liked this post and want to think more about putting a pragmatic approach to ethics into practice.
Ulrich, W. (2006). Critical Pragmatism: A New Approach to Professional and Business Ethics. In Interdisciplinary Yearbook for Business Ethics (Vol. 1). Peter Lang.
Abstract: Major contemporary conceptions of ethics such as discourse ethics and neocontractarian ethics are not grounded in a sufficiently pragmatic notion of practice. They suffer from serious problems of application and thus can hardly be said to respond to the needs of professionals and decision makers. A main reason lies in the tendency of these approaches to focus more on the requirements of ethical universalization than on those of doing justice to particular contexts of action – at the expense of practicality and relevance to practitioners. If this diagnosis is not entirely mistaken, a major methodological challenge for professional and business ethics consists in finding a new balance between ethical universalism and ethical contextualism. A reformulation of the pragmatic maxim (the methodological core principle of American pragmatism) in terms of systematic boundary critique (the methodological core principle of the author’s work on critical systems thinking and reflective professional practice) may provide a framework to this end: critical pragmatism.
I am wondering if you could say something about how the political developments in the US (i.e., Trump 2.0) are affecting your thinking on AGI race dynamics? The default assumption communicated publicly still seems to be that the US is "the good guys" and a "western liberal democracy" that can be counted on, even as its actual actions on the world stage cast at least some doubt on this position. In some sense, one could even argue that we are already playing out a high-stakes alignment crisis at this very moment.
Any reactions or comments on this issue? I understand that openness around this topic is difficult at the moment, but I don't think that complete silence is all that wise either.
I don’t agree with this sentiment. At least for me, I really do not see any real cost associated with being vegan that would keep me from earning more or being a better person in any meaningful way.
For example, I am pretty sure I wouldn’t work more if I ate more meat; why would I? There really doesn’t seem to be a causal pathway here. Maybe if you really crave beef and can’t help thinking about it all the time, that could be distracting and reduce your performance, but I am not sure that something like this occurs all that often. It has never happened to me, at least.
I would argue it’s actually quite the opposite: being vegan is generally a healthy lifestyle with all-around positive effects. And don’t underestimate the impact of having to live with the cognitive dissonance of being directly responsible for the unnecessary suffering of harmless animals.
But I guess there are different preferences, and maybe you see things differently. I just wanted to flag that you are not really presenting knock-down arguments here. To me it seems more like a self-justificatory move to somehow “absolve” yourself from doing the right thing.
Maybe I am naive, but what is the cost associated with not eating meat? Not having the taste of it? What motivates you to donate money to reduce animal suffering if you believe that your taste experience is more valuable than the life of the animal in the first place? Or are you at a point where you believe that animals matter enough to warrant some small amount of donations, but not enough to forgo their taste?
I mean, of course it’s good to donate, but I don’t see why this means you should continue the very practice you want to offset if you can help it. Or am I missing something?
Similarly, if I offset pollution, I do not turn around and pollute more, because that would defeat the purpose.
Reading your comments, I think we come from different perspectives when reading such a post.
I read the post as an attempt to highlight a blind spot in "orthodox" EA thinking: it simply tries to make a case for revisiting some deeply ingrained assumptions in light of alternative viewpoints. This tends to make me curious about the alternative viewpoints offered, and if I find them at least somewhat plausible and compelling, I try to see what I can do with them based on their own assumptions. I do not necessarily see it as the job of the post to anticipate every question that a person coming from the "orthodox" perspective may come up with. Certainly, it's nice if it is well written and can anticipate some objections, but this forum is not a philosophical journal (far from it).
So, what concerns me about your reaction is the impression that you may be holding people who question the viability of the "orthodox" understanding that "only sentient beings count" to the same standards as people who share it. You seem to take the "orthodox" understanding as given and demand that the other person make arguments that are convincing from this "orthodox" perspective. This can be very difficult if the other side questions the very fundamental assumptions of your position. There is a huge gap between noticing inconsistencies and problems in an "orthodox" framework and being able to offer viable alternatives that make sense to people looking at the issue through the lens of that framework. A seminal reading for appreciating the nature of this situation is Thomas Kuhn's The Structure of Scientific Revolutions (50th Anniversary Edition, 2012).
The whole reason I commented in the first place is that I am sometimes disappointed by people downvoting critical posts that challenge "orthodoxy" and in the next breath triumphantly declaring how open-minded EA is and how curiosity and critique are at the heart of the movement. "EA is an open-ended question", they say, and then go downvote the post that questions some of their core assumptions (not saying this is you, but there must be some cases of this, given what I have seen happen here in the forum). Isn't it in this community's best interest and stated self-understanding to be a welcoming place for people who are well-meaning and able to articulate their questions or critiques in a coherent manner, even if they go against prevailing orthodoxy? Isn't this where EA itself came from?
Moving out of slight rant mode and trying to reply to your substantive question about practical differences: I think my previous comment and this one provide some initial directions. If your fundamental assumptions change, it does not necessarily make sense to keep everything else as is. In this way, it's a starting point for the development of a new "paradigm", and that can take time. For example, EA arguably still has a mostly modern understanding of "progress", which may need to be revisited within a more systemic paradigm. There are some ongoing efforts in this direction, for example under the label of "metamodernism".
I personally also find the work of Daniel Schmachtenberger and the Civilization Research Institute quite interesting. They have a new article on this very topic that may be an interesting read: https://consilienceproject.org/development-in-progress/.
However, there are many more people active in this space. The "Great Simplification" podcast by Nate Hagens has some interesting episodes with quite a few of them. Disclaimer: I am not naively endorsing all of the content on the podcast (e.g., I don't really listen to the "Frankly" episodes), but I think it provides an interesting, useful, and often inspiring window onto this emerging systemic perspective. If you are not too familiar with the planetary boundaries framework, there is a recent episode with Johan Rockström that discusses it in broad strokes.
I think the post already acknowledged the difference in perspective and tried to make the case that the perspective you are advocating seems shortsighted from theirs.
The key point here seems to be the consideration given to interconnectedness. Whereas “traditional” EA assumes stability in the Earth system and focuses “only” on marginal improvements ceteris paribus, the ecological perspective highlights the interconnectedness of “everything” and the need for a systemic focus on sustaining the entire Earth system rather than simply assuming its continued functioning in the face of ongoing disruption and destruction.
I think the argument is sound and does reveal a pretty big blind spot in “traditional” EA thinking. The post itself probably could have made the point in a way that is easier to digest for people with opposing prior beliefs, but the level of downvoting seems pretty harsh and ultimately self-defeating to me.
In terms of practical consequences, I would first of all expect more recognition of systemic perspectives in EA discourse and more openness to considering the value of ecosystems and Earth systems in general. This seems worthwhile even on purely instrumental grounds.
I have never said that how we treat nonhuman animals is “solely” due to differences in power. The point I have made is that AIs are not humans, and I have tried to illustrate that differences between species tend to matter in culture and social systems.
But we don’t even have to go to species differences; ethnic differences are already enough to create quite a bit of friction in our societies (e.g., racism, caste systems, etc.). Why don’t we all engage in mutually beneficial trade and cooperate to live happily ever after?
Because while we have mostly converging needs in a biological sense, we have different values and beliefs. It still roughly works out in the grand scheme of things because cultural checks and balances have evolved in environments where we had strongly overlapping values and interests. So most humans have comparable degrees of power or are kept in check by those checks and balances. That was basically our societal process of getting to value alignment, but as you can probably tell by looking at the news, this process has not yet reached a satisfactory state. We have come far, but it’s still a shit show out there. The powerful take what they can get and often only give a sh*t to the degree that they actually feel consequences from it.
So, my point is that your “loose” definition of value alignment is an illusion when you are talking about super-powerful actors that have divergent needs and don’t share your values. They will play along as long as it suits them, but will stop as soon as an alternative more aligned with their needs and values becomes more convenient. And the key point here is that AIs are not humans and that they have very different needs from us. If they become much more powerful than us, only their values can keep them in check in the long run.
But what makes you think that this can be a long-term solution if the needs and capabilities of the involved parties are strongly divergent, as in human vs. AI scenarios?
I agree that trading can probably work for a couple of years, maybe decades, but if the AIs want something different from us in the long term, what should stop them from getting it?
I don’t see a way around value alignment in the strict sense (ironically, this could also involve AIs aligning our values to theirs, similar to how we have aligned dogs).
I think it would be helpful not to use longtermism in this synonymous way, because it is prone to lead to misunderstandings and unproductive conflict.
For example, there is a school of thought called the person-affecting view, which denies that future, non-existing people have moral patienthood but whose proponents could still have reasonable discussions about intergenerational justice (e.g., in the sense that existing children might want to have children of their own, etc.).
In general, I wouldn’t characterize those views as any more or less extreme or flat-footed than weak forms of longtermism. I think these are difficult topics that are contentious by nature.
For me, the key is to stay open-minded and seek some form of discursive resolution that allows us to move forward in a constructive and ideally mutually acceptable way. (That’s a critical pragmatist stance inspired by discourse ethics.)
This is why I appreciate your curiosity and willingness to engage with different perspectives, even if it’s sometimes hard to understand opposing viewpoints. Keep at it! :)