Holden Karnofsky

Digital People FAQ

If we have advanced AI that is capable of constructing a digital human simulation, wouldn't it, by extension, also be advanced enough to be conscious on its own, without the need for anything approximating human beings? I can imagine humans wanting to create copies of themselves for various purposes, but isn't it much more likely for completely artificial, silicon-first entities to take over the galaxy? Those entities wouldn't have the need for any human pleasures and could thus conquer the universe much more efficiently than any "digital humans" ever could.

It does seem likely to me that advanced AI would have the capabilities needed to spread through the galaxy on its own. Where digital people might come in is that - if advanced AI systems remain "aligned" / under human control - they may be important for steering the construction of a galaxy-wide civilization according to human-like (or descended-from-human-like) values. It may therefore be important for digital people to remain "in charge" and to do a lot of work on things like reflecting on values, negotiating with each other, designing and supervising AI systems, etc.

If we get to a point where "digital people" are possible, can we expect to be able to tweak the underlying circuitry to eliminate the concept of pain and suffering altogether, creating "humans" incapable of experiencing anything but joy, no matter what happens to them? It's really hard to imagine from a biological human perspective, but anything is possible in a digital world, and this wouldn't necessarily make these "humans" any less productive.

"Tweaking the underlying circuitry" wouldn't automatically be possible just as a consequence of being able to simulate human minds. But I'd guess the ability to do this sort of tweak would follow pretty quickly.

As a corollary, do we have a reason to believe that "digital humans" will want to experience anything in their "down time" other than 24/7 heroin-like euphoria, such as complex experiences like zero-g? Real-life humans cannot do that, as our bodies quickly break down from heroin exposure, but digital ones won't have such arbitrary limitations.

I think a number of people (including myself) would hesitate to experience "24/7 heroin-like euphoria" and might opt for something else.

New blog: Cold Takes

Thanks, I agree it's not ideal, but I haven't found a way to change the color of that button between light and dark mode.

New blog: Cold Takes

No need to follow any unusual commenting norms! The "cold" nature of the blog is due to my style and schedule, not a request for others.

All Possible Views About Humanity's Future Are Wild

I'm not sure I follow this. I think if there were extraterrestrials who were going to stop us from spreading, we'd likely see signs of them (e.g., mining the stars for energy, setting up settlements), regardless of what speed they traveled while moving between stars.

All Possible Views About Humanity's Future Are Wild

I think your last comment is the key point for me - what's wild is how early we are, compared to the full galaxy population across time.
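To put rough, purely illustrative numbers on "early": if a galaxy-scale civilization could persist for on the order of $10^{10}$ years, then living within its first $10^{5}$ years would place us in about the earliest

$$\frac{10^{5}\ \text{years}}{10^{10}\ \text{years}} = 10^{-5}$$

of its history - an extraordinarily early slice of the total timeline.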

All Possible Views About Humanity's Future Are Wild

I think it's wild if we're living in the century (or even the 100,000 years) that will produce a misaligned AI whose values come to fill the galaxy for billions of years. That would just be quite a remarkable, high-leverage (due to the opportunity to avoid misalignment, or at least have some impact on what the values end up being) time period to be living in.

All Possible Views About Humanity's Future Are Wild

I'm not sure I can totally spell it out - a lot of this piece is about the raw intuition that "something is weird here."

One Bayesian-ish interpretation is given in the post: "The odds that we could live in such a significant time seem infinitesimal; the odds that Holden is having delusions of grandeur (on behalf of all of Earth, but still) seem far higher." In other words, there is something "suspicious" about a view that implies that we are in an unusually important position - it's the kind of view that seems (by default) more likely to be generated by wishful thinking, ego, etc. than by dispassionate consideration of the facts.
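To make that Bayesian-ish reading concrete with purely illustrative numbers: let $H$ be "we really are living at an unusually important time" and $E$ be "our reasoning seems to say so." If the prior $P(H)$ were around $10^{-6}$, and wishful thinking produced $E$ even without $H$ with probability $P(E \mid \neg H) \approx 10^{-2}$, then even taking $P(E \mid H) \approx 1$,

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)} \approx \frac{10^{-6}}{10^{-6} + 10^{-2}} \approx 10^{-4}.$$

On numbers like these, "delusions of grandeur" remains the far more probable explanation, which is what gives the suspicion its force.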

There's also an intuition along the lines of "If we're really in such a special position, I'd think it would be remarked upon more; I'm suspicious of claims that something really important is going on that isn't generally getting much attention."

I ultimately think we should bite these bullets (that we actually are in the kind of special position that wishful thinking might falsely conclude we're in, and that there actually is something very important going on that isn't getting commensurate attention). I think some people imagine they can avoid biting these bullets by e.g. asserting long timelines to transformative AI; this piece aims to argue that that doesn't work.

All Possible Views About Humanity's Future Are Wild

Ben, that sounds right to me. I also agree with what Paul said. And my intent was to talk about what you call temporal wildness, not what you call structural wildness.

I agree with both you and Arden that there is a certain sense in which the "conservative" view seems significantly less "wild" than my view, and that a reasonable person could find the "conservative" view significantly more attractive for this reason. But I still want to highlight that it's an extremely "wild" view in the scheme of things, and I think we shouldn't impose an inordinate burden of proof on updating from that view to mine.

The Duplicator: Instant Cloning Would Make the World Economy Explode

(Response to both AppliedDivinityStudies and branperr)

My aim was to argue that a particular extreme sort of duplication technology would have extreme consequences, which is important because I think technologies that are "extreme" in the relevant way could be developed this century. I don't think the arguments in this piece point to any particular conclusions about biological cloning (which is not "instant"), natalism, etc., which have less extreme consequences.

Digital People Would Be An Even Bigger Deal

It seems very non-obvious to me whether we should think bad outcomes are more likely than good ones. You asked about arguments for why things might go well; a couple that occur to me are (a) as long as large numbers of digital people are committed to protecting human rights and other important values, it seems like there is a good chance they will broadly succeed (even if they don't manage to stop every case of abuse); (b) increased wealth and improved social science might cause human rights and other important values to be prioritized more highly, and might help people coordinate more effectively.
