All of Shimmy Shai's Comments + Replies

I am also curious about another thing: over my 31 years of life, spent mostly behind a computer, I have come to identify the three biggest challenges facing humankind so far as an unhealthy relationship with nature, the lack of a socio-cultural-political milieu that provides a solid guarantee of global peace (just look at Russia now!), and finally the lack of a similar guarantee regarding the ethical development and deployment of technology.

What do you think?

Moreover, given that I am hopefully at a point where I can make the transition from mental health... (read more)

The question I'd have about "human enhancement" with technology is this: given that we have very little such technology at present, what is one's hard limit to moral goodness, and thus one's "fatedness to the evilness of relative privation of goodness as compared to another"? And how can one reliably determine it?

I have been thinking about this idea of "effective altruism" for a while, but I have a couple of more fundamental questions about it.

The first is purely practical: why is it that, for a contribution to do a lot of good, it must specifically be to something that not many people are already working on? Ultimately, we need everyone doing good, because evil is an intolerable path for a human to live by, and one could argue that the absence of good is at least "half of evil"; but that means that, if we are to approach that seriously, t... (read more)

1
Agrippa
2y
Two big things. One: replaceability often nukes the utility of doing something. Let's say I'm going to get a job at Redwood. There is some expected value from my outputs, but the real calculation is [expected value of my outputs] - [expected value of the outputs of whoever would have been hired instead of me]. Of course, I'm also freeing up their time by taking the job, so there is a sort of cascade, but in many cases their alternative is between getting hired and not doing much. Two: the vast majority of people aren't trying at all to do a lot of good, so naturally, if you are, you will do things that few others are trying to do.
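As a minimal illustration of the replaceability adjustment described above, with purely hypothetical numbers (none of these figures come from the thread): suppose my work in the role would produce about 100 units of good per year, and the next-best candidate who would otherwise have been hired would produce about 80. Then the naive estimate of my impact (100) overstates the counterfactual one:

\[
\text{counterfactual impact} \approx \underbrace{100}_{\text{my output}} - \underbrace{80}_{\text{displaced candidate's output}} = 20 \ \text{units per year.}
\]

The "cascade" mentioned above (the displaced candidate going on to do something else useful with their time) would adjust this figure further, but the basic point stands: replaceability can shrink the estimated value of taking the job by a large factor.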

To me this also suggests the need to develop a more robust international order that can effectively regulate and limit the development of potentially destructive technologies for military application. For example, consider how much pressure has been put on Iran and North Korea to prevent them from gaining nuclear weapons. Should we treat countries pursuing AI for clearly military aims in the same way?

Regarding the "long-term stagnation" - to me this suggests you are thinking of the current epoch of history as showcasing the inevitable. Yet stagnation in this sense was the norm for the 200,000+ years that modern Homo sapiens has existed on Earth. Hence, there is a real question whether this period represents a continued given, a blip, the last hurrah before the end, or perhaps the start of a much more complex trajectory of history - perhaps one involving multiple periods of rapid technological flourishing, then periods of stagnation or even decline, in various ... (read more)

Or even more so, that we should be aiming to build a broad framework that addresses weapons and military technology as a general class of things needing regulation, of which nuclear weapons are seen as just the beginning.