
Introduction: Economics, the curse of dimensionality and reinforcement learning

Theoretical economics is the science of interaction among optimizing agents. Before the Cambrian explosion in reinforcement learning (RL), optimization was a hard mathematical problem: typically, when a dynamic optimization problem included more than five or six free variables, it became intractable. This is the "curse of dimensionality". For reasons I have never grasped, quantitative economists almost never give up the optimizing behavior of economic agents, and consequently they often mutilate and twist the economic realism of their models to keep optimal behavior tractable at all costs.
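To see the scale of the problem, here is a minimal sketch (toy numbers of my own, not taken from any particular model): with a modest grid of 20 points per state variable, the number of states a value-function iteration has to visit explodes with the dimension of the problem.

```python
# A minimal illustration of the curse of dimensionality in grid-based
# dynamic programming: the state count grows exponentially in the number
# of state variables. The 20-point grid is an arbitrary assumption.

points_per_dim = 20

for n_dims in range(1, 9):
    n_states = points_per_dim ** n_dims
    print(f"{n_dims} state variable(s): {n_states:,} grid points per Bellman iteration")
```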

Fortunately, optimization is no longer hard or expensive. RL lets us populate any economic world we can imagine with superhuman optimizers. The painful algebra of Benveniste-Scheinkman conditions, envelope theorems and perturbation methods is now a barbaric relic. A Brave New World of modelling freedom opens before our eager eyes.
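As a hedged illustration of what "cheap optimization" means in practice, here is a toy tabular Q-learning agent solving a discretized consumption-savings problem. Every number and name below is a made-up example, not a serious calibration; the point is only that the Euler-equation algebra is replaced by brute-force learning.

```python
# Toy consumption-savings problem solved by tabular Q-learning (illustrative only).
import random
import math

WEALTH_LEVELS = 21          # wealth lives on a grid 0..20
ACTIONS = 5                 # consume 0%, 25%, 50%, 75% or 100% of wealth
GAMMA = 0.95                # discount factor
ALPHA = 0.1                 # learning rate
EPS = 0.1                   # exploration rate
INCOME = 2                  # income received each period

Q = [[0.0] * ACTIONS for _ in range(WEALTH_LEVELS)]

def step(wealth, action):
    """Consume a fraction of wealth, earn income, return (reward, next wealth)."""
    consumption = int(round(wealth * action / (ACTIONS - 1)))
    reward = math.log(1 + consumption)
    next_wealth = min(wealth - consumption + INCOME, WEALTH_LEVELS - 1)
    return reward, next_wealth

wealth = 10
for _ in range(200_000):
    # Epsilon-greedy action choice, then a standard Q-learning update.
    action = random.randrange(ACTIONS) if random.random() < EPS else max(
        range(ACTIONS), key=lambda a: Q[wealth][a])
    reward, next_wealth = step(wealth, action)
    target = reward + GAMMA * max(Q[next_wealth])
    Q[wealth][action] += ALPHA * (target - Q[wealth][action])
    wealth = next_wealth

# The greedy consumption rule by wealth level: this plays the role the
# analytical first-order conditions used to play.
policy = [max(range(ACTIONS), key=lambda a: Q[w][a]) for w in range(WEALTH_LEVELS)]
print(policy)
```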

Now, cheap optimization does not mean that economics is no longer interesting. Economists still have to develop interactive settings (let's call them "games") and compare the equilibrium (i.e., ergodic distribution) properties of the game under consideration with some real-world phenomenon of interest.
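Operationally, "equilibrium = ergodic distribution" just means the long-run distribution over the states of the game induced by the agents' behavior. A minimal sketch, with a made-up three-state transition matrix:

```python
# Toy example: the ergodic (stationary) distribution of a Markov chain
# induced by the agents' learned policies. The transition matrix is invented.
import numpy as np

P = np.array([[0.8, 0.15, 0.05],
              [0.2, 0.70, 0.10],
              [0.1, 0.30, 0.60]])

# The ergodic distribution is the left eigenvector of P with eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
stationary = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
stationary /= stationary.sum()
print(stationary)   # long-run share of time spent in each state of the game
```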

This introduction has a single goal: to direct the reader's attention to a critical detail in the development of Artificial Intelligence: intelligence is not only a property of a system's optimizing kernel; it emerges in the interaction between the optimizing agent and the world it inhabits.

The Moravec hypothesis and virtual reality

This has been well known since the 1990s. In "Mind Children", the AI pioneer Hans Moravec provided both an explanation for the failure of symbolist AI and a road map for AI development. Moravec suggested that you cannot make AI in a symbolic world: to develop recognizably intelligent behavior you need complete immersion in reality. Consequently, Moravec suggested that robotics had to be developed before AI could be created.

Now, robots are too expensive for the kind of massive training that underlies RL. But virtual worlds are cheap to make and populate. Virtual reality and artificial intelligence are dual technologies that have to be developed concurrently. If I had to value the assets relevant for AI development, the OpenAI or DeepMind Alpha routines and datasets would not be more important than the virtual world development tools owned by (e.g.) Epic Games.

 

Review of canonical AI risk research from the Moravec hypothesis perspective

After my previous note on the marginal contribution of AI risk to total existential risk, I decided to review the canonical literature on AI risk. I read the Eliezer Yudkowsky FAQ and two additional papers ("The Alignment Problem from a Deep Learning Perspective" and "Is Power-Seeking AI an Existential Risk?") that were kindly suggested by Jackson Wagner.

I had not read about AI risk before because I expected a very technical literature, only suitable for AI developers. This is not the case: if you know dynamic macroeconomics (in particular dynamic programming) and you have been a heavy science fiction reader, the main arguments in the papers above are not only accessible but also familiar.

And those arguments are simple and persuasive. First, intelligence is what gives us our ecological supremacy, and by growing an artificial intelligence we risk losing it. Second, we "grow" AI rather than "design" it, and we do not really understand what we are building. Finally, any truly intelligent agent will tend to accumulate power as an intermediate goal, in order to deploy that power for its final goals.

I find those arguments persuasive, but unspecific. AI is by far the most uncertain technology ever produced: currently we do not even know whether AGI is feasible (a new AI winter is perfectly possible), and if it is feasible, it could be heaven or it could be hell. In my view, long chains of reasoning about superintelligence are mostly unreliable, given the massive uncertainty involved.

But the main problem with catastrophist AI arguments is that current AI tools are far away from AGI level by construction. Let's take the crown jewel: ChatGPT. ChatGPT does not "understand" what it says, because it has been trained mainly on texts. The universe ChatGPT inhabits is made of words, and for ChatGPT words are not related to their external-world referents (because ChatGPT does not live in the physical world). Words, for ChatGPT, are linked and clustered with other words. For ChatGPT, language is a self-referential reality, and it is an incredible predator in that ocean of interlinked words, not in the bio-physical reality made of flows of energy, material cycles and gene-pool competition.

ChatGPT is not as alien as a giant squid; it is far more alien: it has not even been trained for self-preservation. Its universe and goals are totally orthogonal to ours. All the AI systems developed so far are extremely specific, and no matter how powerful the underlying optimizing/agentic technology is, they live in very constrained realities with goals idiosyncratic to those realities. Current AI models are not like animals, but only like specific brain tissues.

Until AIs are immersed in the real world or in a virtual world designed for realism, and their goals are substantially based on that realistic virtual world, AGI is not close (no matter how powerful the core pseudo-neural technology is), and existential AI risk is too low to measure.

The relative paucity of the current alignment literature derives from the fact that we are too far away from AGI and its real technical challenges. We cannot "solve" the alignment problem on a blackboard; AI development and the risk-control measures to deal with it will have to be developed in parallel. In fact, there is no "alignment problem" to be "solved". There is an AGI development problem, in which (among other challenges) AI existential risk must be monitored and addressed.

Any "pause" at the current stage of AI development would be obviously useless, at least if you are interested in the "controlled development" of AGI.

All together now

Now, I want to summarize the results of my little trip into AI risk. 

First of all, if feasible, AI is an extremely hard technology to ban. There is nothing like the "enrichment" bottleneck that makes nuclear proliferation a difficult industrial challenge. A consequential ban on AI development would imply worldwide draconian restrictions on the research and use of IT. There should be an extremely strong reason for such a massive decision. But the AI alignment literature is not truly technical: it is mainly based on high-level visions, philosophical positions and other non-operational, non-technical arguments.

If your assessment of the risk (even informed by that kind of argument) is extremely high, you can ask for a complete ban of the technology. But in my view a total ban is impossible, and were it possible, it would imply a massive slowdown of technological progress in general. And without progress we are left in an age of acute nuclear war risk with nothing but our primitive social systems and some environmental and social crises to deal with.

Nuclear war is not existential in the "one-off" sense: even a full NATO-Russia exchange in the worst nuclear-winter case would not kill everybody. But what kind of societies would be left after the first major nuclear war? Military aristocracies, North Korea-like totalitarian regimes, large tracts of Somalian anarchy waiting to be invaded by their imperialist neighbors, etc. Nothing else can keep political coherence after such a shock.

To simplify, suppose one thousand years are needed to recover from a major nuclear war, and (given the apparent intractability of the "human alignment" problem) a major nuclear war happens every 150 years. Then Humanity returns to a new kind of Malthusian trap (more specifically, a nuclear-fueled Hobbesian trap). In reality I don't expect a post-nuclear-war world to be one of a thousand years of recovery followed by another major nuclear war (the typical "Canticle for Leibowitz" story), but rather a world of totalitarian militarism with frequent nuclear exchanges and the whole of society oriented toward war. At some point, if AGI is possible, some country will develop it, with the kind of purpose that guarantees it will be Skynet. We have been lucky enough to chain 77 years in a row without a nuclear war. In my view, pre-nuclear-war Mankind (more specifically, the democratic countries) is best suited to develop a beneficial AGI.
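To make the back-of-the-envelope numbers above explicit (taking the 1/150 annual hazard and the 1000-year recovery as the purely illustrative assumptions they are):

```python
# Back-of-the-envelope arithmetic for the illustrative assumptions in the text.
annual_hazard = 1 / 150      # one major nuclear war every 150 years on average
recovery_years = 1000        # years needed to recover after each war

# Chance of getting 77 consecutive war-free years under this hazard rate.
p_77_years = (1 - annual_hazard) ** 77
print(f"P(77 war-free years) ~ {p_77_years:.2f}")

# If recovered spells last ~150 years on average and each collapse lasts
# ~1000 years, the long-run share of time spent in a recovered state is small.
share_recovered = 150 / (150 + recovery_years)
print(f"Long-run share of time in a recovered state ~ {share_recovered:.2f}")
```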

Consequently, an AI ban is probably impossible and would be counterproductive. A pause on AI research would be too premature to be useful: we are still far away from AGI, and AI research and AI alignment efforts should run in parallel. AI alignment should not be an independent effort, but a part of AI development. As long as AI alignment researchers do not have more substantial results, they are not legitimized to regulate (far less pause) an infant industry that is still far away from being risky and that can bring either extinction or salvation to Mankind.


Comments

Hi Arturo. Thank you for the thoughtful and detailed assessment of the AI risk literature. Here are a few other sources you might be interested in reading:

  • AI Timelines: Where the arguments and the "experts" stand summarizes key sources of evidence on AI timelines. Namely, it finds that AI researchers believe AGI will likely arrive within the next few decades, that the human brain uses more computational power than today's largest AI models but that future models will soon surpass human levels of compute, and that economic history suggests transformative changes to growth regimes are absolutely possible. 
  • Jacob Cannell provides more details on the amount of computational power used by various biological and artificial systems. "The Table" is quite jarring to me. 
  • Economic Growth Under Transformative AI by Phil Trammell and Anton Korinek reviews the growth theory literature in economics, finding that mainstream theories of economic growth admit the possibility of a "singularity" driven by artificial intelligence. 
  • Tom Davidson's model uses growth theory to specifically model AI progress. He assumes that AI will be able to perform 100% of economically relevant tasks once it uses the same amount of computation as the human brain. The model shows that this would lead to "fast takeoff": the world will look very normal, yet in a matter of only a few years could see >30% GDP growth and the advent of superintelligent AI systems. 
  • Natural Selection Favors AIs over Humans makes an argument that doesn't depend on how far we are away from AGI -- it will apply whenever advanced AI comes around. 

To respond to your specific argument that:

ChatGPT does not "understand" what it says, because it has been trained mainly on texts. The universe ChatGPT inhabits is made of words, and for ChatGPT words are not related to their external-world referents (because ChatGPT does not live in the physical world). Words, for ChatGPT, are linked and clustered with other words. For ChatGPT, language is a self-referential reality, and it is an incredible predator in that ocean of interlinked words, not in the bio-physical reality made of flows of energy, material cycles and gene-pool competition.

Until AIs are immersed in the real world or in a virtual world designed for realism, and their goals are substantially based on that realistic virtual world, AGI is not close (no matter how powerful the core pseudo-neural technology is), and existential AI risk is too low to measure.

To make an affirmative case: there has been a lot of work using language models like ChatGPT to operate in the physical world. Google's SayCan found that PaLM (a language model trained just like GPT) was successfully able to operate a robot in a physical environment. The PIQA benchmark shows that language models perform worse than humans but far better than random chance in answering commonsense questions about the physical world.

Moreover, recent work has given language models additional sensory modalities so they might transcend the world of text. ChatGPT plugins allow a language model to interact with any digital software interface that can be accessed via the web or code. GPT-4 is trained on both images and text. GATO is a single network trained on text, images, robotic control, and game playing. Personally, I believe that AI could pose a threat without physical embodiment, but the possibility of physical embodiment is far from distant and has seen important progress over the past several years.

Historically, people like Gary Marcus and Emily Bender have been making that argument for years, but their predictions have largely turned out to be incorrect. Bender and Koller's famous paper argues that language models trained on text will never be able to understand the physical world. They support their argument with a prompt in Appendix A on which GPT-2 performs terribly, but if you plug their prompt (or any similar one) into ChatGPT, you'll find that it clearly perceives the physical world. Many have doubted the language model paradigm, and so far their predictions don't hold up well.

First, a comment on my specific argument.

The link about SayCan is interesting, but the environment looks very controlled and idiosyncratic, and the paper is quite unspecific about the link between the detailed instructions and their execution. It is clear that the LLM is a layer between unspecific human instructions and detailed verbal instructions. The relation between those detailed verbal instructions and the final execution is not well described in the paper. The most interesting thing, the robot-LLM feedback (whether the robot modifies the chain of instructions as a consequence of execution failure or success), is unclear. I find it quite frustrating how descriptive, high-level and "results"-focused all these corporate research papers are. You cannot grasp what they have really done (remember the original AlphaZero white paper!).

"Personally I believe that AI could pose a threat without physical embodiment"

Perhaps, but to be interested in defeating us, it needs to have "real world" interests. The state space ChatGPT inhabits is massively made of text chains; her interests lie mainly in being an engaging chatterer (she is the perfect embodiment of the Anglo chattering classes!). In fact, my anecdotal experience with ChatGPT is that it is an incredible poet, but very dull in reasoning. The old joke about Keynes (too good a writer to trust his economics), but on a massive scale.

Now, if you train an AI in a physics-like virtual world, and her training begins with physical recognition, and only after that do you move on to linguistic training, the emergence of AGI would be at least possible. Currently, we have disparate successes in "navigation", "object recognition", "game playing" and language processing, but AIs have neither an executive brain nor a realistic internal world representation.

Regarding the Bender and Koller paper: in March 2023 she was still quite sceptical of the semantic abilities of ChatGPT. And GPT-4 is still easily fooled when you keep in mind that it does not understand… On the other hand, in my view it is a human-level poet (in fact, far beyond the average person, almost in the top 0.1%). Its human or even superhuman verbal abilities and its reasoning shortcomings are what can be expected of any (very good) text-trained model.

Regarding the links, I really find the first two quite interesting. The timelines are reasonable (15 years is only a 10% probability). What I find unreasonable is to regulate while we are still working on brain tissue. We need more integrative and volitional AI before there is anything to regulate.

I am very skeptical of any use of development, growth, and other historical and classical economics tools for AI. In the end, the classic Popper argument (in "The Poverty of Historicism") that science cannot be predicted is strong.

Economics is mainly about "equilibrium" results given preferences and technology. Economics is sound at taking (preferences, technology) as input and providing "goods allocations" as output. The evolution of preferences and technologies is exogenous to economics. The landscape of still-unknown production possibility frontiers is radically unknown.

On the other hand, I find economics (i.e., applied game theory) an extremely useful tool to think about and create the training worlds for artificial intelligence. As an economist, I find that the environment is the most legible part of AI programming. Building interesting games and tasks to train AIs is a main part of their development. Mechanism design (incidentally my current main interest), algorithmic game theory, and agent-based economics are directly related to AI in a way no other branch of "classical economics" is.
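As a hedged sketch of what "the economist programs the environment" could look like in practice (a hypothetical, loosely Gym-style interface; none of the names below belong to an existing library):

```python
# Toy two-good exchange game: the economist specifies the state, the rules and
# the payoffs; any RL machinery can then supply the optimizing agent.
import random

class BarterEconomy:
    """Each step the agent offers units of good A in exchange for one unit of
    good B; a fixed rule-based counterparty accepts or rejects the offer."""

    def __init__(self, price=1.0):
        self.endowment = [10, 0]           # (good A, good B) held by the agent
        self.price = price                 # counterparty's reservation price

    def reset(self):
        self.endowment = [10, 0]
        return tuple(self.endowment)

    def step(self, offer):
        accepted = offer >= self.price and self.endowment[0] >= offer
        if accepted:
            self.endowment[0] -= offer
            self.endowment[1] += 1
        # Cobb-Douglas-style reward over the resulting bundle.
        reward = (1 + self.endowment[0]) ** 0.5 * (1 + self.endowment[1]) ** 0.5
        done = self.endowment[0] < self.price
        return tuple(self.endowment), reward, done

# Usage: any agent (here, a random one) can be dropped into the game.
env = BarterEconomy()
state = env.reset()
for _ in range(50):                        # episode with a step cap
    state, reward, done = env.step(offer=random.choice([1, 2, 3]))
    if done:
        break
print(state, round(reward, 2))
```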
