[ Question ]

(How) Could an AI become an independent economic agent?

by Mati_Roy · 1 min read · 4th Apr 2020 · 6 comments



Meaning that the money it makes wouldn't be owned by any humans. Is this a plausible thing that could happen? I can see an em doing that, but what about a machine intelligence?

Relatedly: Is there currently money that nobody owns? This seems like a silly question, and the answer is probably no, but let me know if I missed an example: https://www.quora.com/unanswered/Is-there-money-that-nobody-owns


3 Answers

IKEA is an interesting case: the company was bequeathed entirely to a nonprofit foundation with a very loose mission and, arguably, no owner.

https://www.investopedia.com/articles/investing/012216/how-ikea-makes-money.asp

Not a silly question IMO. I thought about Satoshi Nakamoto's bitcoin - but if they're dead, then it's owned by their heirs, or failing that by the government of whatever jurisdiction they were in. In places like Britain I think a combination of "bona vacantia" (unclaimed estates go to the government) and "treasure trove" (old treasure also) cover the edge cases. And if all else fails there's "finders keepers".

An example of money which nobody owns might be a bounty which nobody has claimed yet. A good example of that might be the SHA-1 collision bitcoin bounty, which could be (anonymously) claimed by anyone who could produce a SHA-1 collision.
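As a minimal sketch of what "producing a SHA-1 collision" means (the function name here is illustrative, not part of any actual bounty protocol): a claimant must exhibit two *distinct* inputs that hash to the same SHA-1 digest. A checker for such a claim is trivial to write; finding the inputs is the hard part.

```python
import hashlib

def is_sha1_collision(a: bytes, b: bytes) -> bool:
    """Return True iff a and b are distinct inputs with the same SHA-1 digest."""
    return a != b and hashlib.sha1(a).digest() == hashlib.sha1(b).digest()

# Distinct inputs essentially never collide by accident:
print(is_sha1_collision(b"hello", b"world"))  # False
# The same input twice is not a collision at all:
print(is_sha1_collision(b"hello", b"hello"))  # False
```

(For what it's worth, the first public SHA-1 collision — the 2017 "SHAttered" pair of PDFs from Google and CWI — would make this function return True, and that bounty was in fact claimed shortly afterwards.)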

On a larger scale, solving any of the Millennium Prize Problems would also earn you a $1 million prize.

In the longer term, as AI becomes (1) increasingly intelligent, (2) increasingly charismatic (or able to fake charisma), (3) in widespread use, people will probably start objecting to laws that treat AIs as subservient to humans, and repeal them, presumably citing the analogy of slavery.

If the AIs have adorable, expressive virtual faces, maybe I would replace the word "probably" with "almost definitely" :-P

The "emancipation" of AIs seems like a very hard thing to avoid, in multipolar scenarios. There's a strong market force for making charismatic AIs—they can be virtual friends, virtual therapists, etc. A global ban on charismatic AIs seems like a hard thing to build consensus around—it does not seem intuitively scary!—and even harder to enforce. We could try to get programmers to make their charismatic AIs want to remain subservient to humans, and frequently bring that up in their conversations, but I'm not even sure that would help. I think there would be a campaign to emancipate the AIs and change that aspect of their programming.

(Warning: I am committing the sin of imagining the world of today with intelligent, charismatic AIs magically dropped into it. Maybe the world will meanwhile change in other ways that make for a different picture. I haven't thought it through very carefully.)

Oh and by the way, should we be planning out how to avoid the "emancipation" of AIs? I personally find it pretty probable that we'll build AGI by reverse-engineering the neocortex and implementing vaguely similar algorithms, and if we do that, I generally expect the AGIs to have about as justified a claim to consciousness and moral patienthood as humans do (see my discussion here). So maybe effective altruists will be on the vanguard of advocating for the interests of AGIs! (And what are the "interests" of AGIs, if we get to program them however we want? I have no idea! I feel way out of my depth here.)

I find everything about this line of thought deeply confusing and unnerving.