The question I'd have about "human enhancement" with technology is this: given that we have very little such technology at present, what is one's hard limit to moral goodness (and thus one's "fatedness to the evilness of relative privation of goodness as compared to another"), and how can one reliably determine it?
I have been thinking about this idea of "effective altruism" for a while, but I have a couple of more fundamental questions about it.
The first is purely practical: why must a cause be neglected - one that not a lot of people are working on - for contributions to it to count as doing a lot of good? Ultimately, we need everyone doing good, because evil is an intolerable path for a human to live by, and one could argue that the absence of good is at least "half of evil". But if we take that seriously, then we will necessarily have lots of people working on lots of issues.
But the second is more philosophical, and related to that "we need everyone doing good" and "evil is intolerable": does "effective altruism" constitute not merely a moral decision-making method, but also a moral judgment to pass on other people? That is, if you don't help as many people as someone else because of what you lack (finances, talent, circumstances, etc.), are you a more evil or less good person, even if you are still making the best choices with what you do have? If so, is that sort of "relative evil" tolerable? And if it is tolerable, why call it "evil" at all, since for the label to be morally meaningful - that is, to have relevance to how we should and should not act - it must imply a certain level of intolerance?
The reason I ask these is that for a while now I have been dogged by the feeling that I am an evil person - and am not being recognized and judged accordingly - because my mind naturally operates on a framework broadly along these lines, one that invites comparisons based on total utility generation, with attendant self-flagellation.
To me this also suggests the need for a more robust international order that can effectively regulate and limit the development of potentially destructive military technologies. Consider, for example, how much pressure has been put on Iran and North Korea to prevent them from acquiring nuclear weapons. Should we treat countries pursuing AI for clearly military aims in the same way?
Regarding the "long-term stagnation": you seem to be treating the current epoch of history as showcasing the inevitable. Yet stagnation in this sense was the norm for the 200,000+ years that modern Homo sapiens has existed on Earth. There is thus a real question whether this period represents a continued given, a blip, a last hurrah before the end, or perhaps the start of a much more complex historical trajectory - one involving multiple periods of rapid technological flourishing interleaved with periods of stagnation or even decline, in various patterns, and varying geographically as well.
One thing to note about history and culture is that there are no inherent drivers toward "greater complexity" - indeed, from an anthropological point of view one can question what that even means. In this regard it is much like biological evolution outside the human realm: in both biology and anthropology there is, and should be, strong skepticism toward any claim of teleology or a linear narrative.
That said, I would still maintain that there is a distinction between long-term stagnation and extinction, even if the former is definitely not something one should rule out: in the latter case, there is absolutely no recovery. While it's possible another intelligent, toolmaking species could evolve, the future of geological history is potentially much more regular, and the gradual heating of the Sun suggests that we could be Earth's only shot. It's like the difference between life imprisonment and the death penalty: the former is not fun at all, but there's a reason there is so much resistance to the latter, and it's that key point of irreversibility.
I am also curious about another thing. Over my 31 years of life, spent mostly behind a computer, I have identified the three biggest challenges facing humankind so far as: an unhealthy relationship with nature; the lack of a socio-cultural-political milieu that provides a solid guarantee of global peace (just look at Russia now!); and the lack of the same for the ethical development and deployment of technology.
What do you think?
Moreover, given that I am hopefully at a point where I can make the transition from mental-health recovery and college to a "proper" career at last, and break free of the shackles of the computer screen: what should I be aiming at if I want to maximize utility on all these fronts? Why should I accept that answer, why should I accept the evidence for it, and where can I find countervailing arguments as to those whys?