Epistemic status: Speculation, I am not confident in my views here and I skim over a lot of issues.
TL;DR: I claim that accelerating the development of "Whole Brain Emulation", such that it arrives before human level AI, is an intervention that would reduce the risks posed by AI.
Normal AI development consists of making AI systems more generally capable, interesting or useful. By "differential technological development" with regard to AI, people usually mean instead making technical progress on systems that are:
I think there are broader ways in which technology could be developed differentially. One of these in particular could conceivably put us in a better position with AI risk.
Whole brain emulation is the idea of:
Done properly, this would yield a digital person with many advantages over a biological one. Particularly relevant to this post are the ways digital people would be more powerful and productive:
- They can be freely copied
- They can be run at greater or lesser speeds
- Total control over their virtual environment gives them greater productivity
- Changing their brain to improve it is easier in a digital environment than in biology
- Potentially, digital people could have a high quality of life on much lower wages than biological people, so their labour could be cheaper
"True" AI is dangerous because it is powerful and potentially lacks human values.
Digital people are more powerful than biological humans, but retain human values. We can trust their judgement as much as we trust any human's.
I claim that a world with substantial numbers of digital people would have lower risk from "true" AI than our current world. I have two primary arguments for this, and a couple of secondary arguments.
Arguments in favour
It's easier to have a digital person in the loop
Digital people would be much cheaper to query than biological people. This is because:
- They run at faster speeds
- Skilled people can be copied to deal with many parallel queries
- Digital people may have lower wages than biological people
This means that several aspects of safe(r) AI development become cheaper:
- When deployed, AIs could refer to humans before performing actions - humans can be "in the loop" for more decisions
- The threshold for investigating strange behaviour in systems under test can be lowered; we can tolerate more false positives
- Training can be overseen by humans, and training results scrutinised by more humans.
We could in theory do a lot of these things with biological humans too, but it is much more expensive. So digital people make safe(r) development of AI cheaper relative to our current world.
Less of a power disparity
Digital people have many advantages over biological people. AIs have all the advantages of digital people, plus others (largely being more strongly optimised for productive tasks and, eventually, more generally intelligent than humans).
A world with digital people is at less of a disadvantage to AI systems in power terms than our current world. All else being equal, having less of a power disparity seems likely to lead to better outcomes for humans.
AI less useful -> less AI development
In a world with digital people, AI is somewhat less useful. Many of the biggest advantages of AI systems are also possessed by digital people. Therefore, there is less incentive to invest in creating AI, which might mean it takes longer. I will assume for this argument that slowing down AI development makes its eventual development safer.
A world with digital people is used to policing compute and dealing with hostile compute-based agents
Like biological people, digital people will not always behave nicely. The world will worry more about criminals or spies who can think very quickly and in parallel about how to break security systems, commit crimes, copy themselves to unauthorised places, etc. Such a world will probably develop more countermeasures against these criminals than we have today (for example, by having digital people available to police security systems at high speed).
The world could also be more used to policing the use of compute resources themselves. In theory, digital people could be enslaved or cheated out of useful information/trade secrets in a private server. Many digital people would worry about this happening to them, and there could be a push to ensure that servers with enough compute to run digital people are well policed.
These two factors could plausibly lead to a world where large computer systems are more secure and better policed, which lowers the risk posed by AI.
Arguments against
Of course, there are arguments against this view!
Digital people lead to AI
Technologies required to make digital people may directly advance AI. For example, if we understand more about neuroscience and the human brain, that may lead to the discovery of better algorithms to be used in the development of AI. Some AI research is already partially informed by biology.
Digital people require a lot of compute, and so does AI. If creating digital people led us to build much more compute, that abundance of hardware would also make AI easier and cheaper to develop.
Growth would be much faster in a world with many digital people. Even if we grant that AI attracts less interest as a share of the economy, the total resources going towards it could still be larger - so AI development could be faster in such a world.
Relatedly, there is still considerable incentive to develop AI quickly: digital people can't be used in all the same ways, for ethical and practical reasons (e.g. they need breaks and are not optimised for the kind of work the marketplace demands).
Digital people may bring extremely rapid social change, leaving us in a worse position politically to deal with AI than if we had remained more stable.
What does this imply?
In my view, a world with digital people is a world with a lower risk from AI compared to a world without them.
If this view were shared by others, we could attempt to speed up development of the technologies needed to create digital people. Progress on creating digital worms has been slow, but this seems at least partially due to a lack of funding and effort. Someone could set up or fund an organisation to support research on the necessary technologies and speed them along.
If I were convinced of the opposite view, it's difficult to see what to do other than discouraging new funding of the relevant technologies.
Similar arguments by Carl Shulman & others here: https://www.lesswrong.com/posts/v5AJZyEY7YFthkzax/hedging-our-bets-the-case-for-pursuing-whole-brain-emulation#comments
Broader considerations by Robin Hanson here: https://www.overcomingbias.com/2011/12/hurry-or-delay-ems.html
In case you missed it, Holden's post on Digital People: https://www.cold-takes.com/how-digital-people-could-change-the-world/