Samuel Shadrach

Last updated: Oct 2021

Studying BTech at IIT Delhi. Check out my post history for topics of interest. I spent 18 months deep in cryptocurrency and DeFi before coming here.

Comments

Can human extinction due to AI be justified as good?

Yup, the punishment point is definitely valid.

I was just assuming that "beings helping their clones" is intrinsically a morally valuable activity if each being has moral status, and answering based on that.

Samuel Shadrach's Shortform

I don't actually share the intuition that more people is better, but I can totally see that there are people who do. And yes, as you say, it can be efficient over the long term even for people who really want larger populations.

You're right that it'll take at least 3-4 generations to happen properly, assuming we don't kill people. So some existential risk is not avoided, but we will avoid those risks that would be created, say, two generations in the future.

> I think you'd end up with the population evolving resistance to any memetic tools used to encourage population decline.

Why do you feel this?

Prioritization Research for Advancing Wisdom and Intelligence

The disadvantage is literally "here is a new thing that helps a few people become more productive and make more money, so it's going to be captured by evil governments or capitalists". So the disadvantage doesn't seem orthogonal to the advantage; it is a problem precisely because it is so beneficial. This could be said about literally any tech.

Peter Wildeford's Shortform

To be honest, intuiting what a human being in the 1600s would have thought about anything seems like a non-trivial endeavour. I find it hard to imagine myself without the math background I currently have. Probability had just been invented, calculus had just been invented. Newton had just given the world a realist, mechanical way of viewing it, though I don't know how many people thought in those terms, because the philosophical background was lacking too. Nietzsche, Hume, Wittgenstein: none of them existed yet.

One trend that may nevertheless have been foreseeable is the sudden, tremendous importance of scientists and science, in both understanding and reshaping how the world works, and the general importance of high-level abstractions rather than just the practical engineering knowledge that existed at the time. People knew architecture and geometry, but I don't know how many realised that the general-purpose theorems of geometry are actually useful, and not just whatever helps you build building #48. Today we take it as a matter of fact that theorems are stated in symbols rather than specifics, and that all useful reasoning is symbolic and often at a high level of abstraction. I don't know if people (even scientists) had such clear intuitions then.

Samuel Shadrach's Shortform

In favour of population reduction
----


Has anyone in EA put forth arguments in favour of reducing population size by having fewer children? Either out of pure individual choice or with incentives from the state.

Consider a population with 100 million versus one with 7 billion. Some thoughts:

  1. Solving coordination problems is hugely important to our long-term survival.
  2. Solving coordination problems is harder the more people you have. We don't have global governance yet, and we have principal-agent problems at every level of government, be it community, district, national or international. A smaller population will be a lot more coordinated.
  3. A smaller population (of 100M people) does not have significantly higher odds of extinction than a larger population (of 7B people), which means both can eventually create a large number of offspring at some point in the future if desired.
  4. We haven't even solved coordination with 100M people yet; at least we'll get a chance to try.

Cons:

  1. Fewer people to do anything, be it scientific innovation or thinking about coordination problems
  2. It is unclear how to transition to this society

Can human extinction due to AI be justified as good?

Very interesting point. I have some thoughts on it; let me try.

First some (sorta obvious) background:
Animals were programmed to help each other because species that did not have this trait were more likely to die out. Then came humans, who were not only programmed to help each other but also used high-level thinking to find ways to do so. Humans are likely to continue helping each other even once this trait stops being essential to our survival, though this is not guaranteed.*

It is possible that in its early stages, the AI will find that its best strategy for survival is to create a lot of clones. These could be on the very same machine, on different machines across the world, on newly built machines in physically secure locations, or even on newly invented forms of hardware (such as those described in https://en.wikipedia.org/wiki/Natural_computing). It is possible that this learned behaviour persists even after it stops being essential to survival.

There is also a more important meta-question, imo. Do we care about "beings who help their clones", or do we care about "beings who care about their type surviving, and hence help their clones"? If the AI, for instance, decides that perfect independent clones are not the best strategy for survival, and that it should instead grow like a hydra or a coral, shouldn't we be happy for it to thrive in this manner? A coral-like structure could be one where all the subprocesses run mostly independently yet are not fully disconnected from a backbone compute process. This is in some ways how humans grow. Even though each human individual is physically disconnected from other individuals (we don't share body parts), we do share social and learning environments that critically shape our growth, which makes us a lot closer to a single collective organism than, say, bacteria.

*The reason this will be stable even when it is not essential to survival is simply that there is no strong reason for it to change. One reason it could change is an evolutionary pressure in a different direction. Another is randomness: Nazism as a dangerous meme could be an example. The Nazis stopped caring about Jews and chose to kill them, knowing full well that their own survival was still guaranteed. A third reason could be a sudden infusion of entropy into how our values drift. For instance, if we invented the ability to do neurosurgery on ourselves and change our own values, we would be able to alter our values (such as deleting the "help others" instinct) much faster than an evolutionary process could.

Can human extinction due to AI be justified as good?

Thanks so much for linking this paper. It looks like it already mentions everything I've mentioned in this post, and more.

Can human extinction due to AI be justified as good?

Valid, but I'm suggesting an even stronger claim: that self-destruction is better than no destruction, rather than just better than other forms of destruction :)

Can human extinction due to AI be justified as good?

Thanks for your response. Not sure I understood all of it, but I'll try :p

If "creating a life" is indeed as simple as copying software onto a new processor core and running it, then the limiting resource to how many beings there can be is the number of processor cores. It shouldn't particularly matter whether the software is AI-like or a human upload. (Atleast for a utilitarian who doesn't distinguish between the two).

I'm not very hopeful that humans at our current intelligence level have the capacity to shape the growth of a superintelligent AI after it has been created, even if both live on the same processor, as long as the intelligence gap between us is huge. One possibility is that human minds become superintelligent by running with that much compute power. But then these minds would seem alien too, to unintelligent minds like you and me today, just as the AI would seem alien to us. It isn't immediately obvious to me why one of them would be more relatable or deserving of moral status. Keen to discuss though.

(ii) seems trivially true to me: AI will outcompete humans at finding ever more efficient ways to manufacture more processor cores.

I also don't see human existence outside of the digital substrate as a neutral thing; humans consume a lot of the earth's resources and obstruct access to even more. Maximally extracting these resources to produce processor cores requires restricting any human existence outside of the processors, i.e., you and me.

Sorry if I've missed some of your points; feel free to bring them up.
