When I was first introduced to AI Safety, coming from a background studying psychology, I kept getting frustrated about the way people defined and used the word "intelligence". They weren't able to address my questions about cultural intelligence, social evolution, and general intelligence in a way I found rigorous enough to be convincing. I felt like professionals couldn't answer what I considered to be basic and relevant questions about a general intelligence, which meant that I took a lot longer to take AI Safety seriously than I otherwise would have. It feels possible to me that other people have run into AI Safety pitches and been turned off because of something similar -- a communication issue because both parties approached the conversation with very different background information. I'd love to try to minimize these occurrences, so if you've had anything similar happen, could you please share:

What is something that you feel AI Safety pitches usually don't seem to understand about your field/background? What's a common place where you've found yourself stuck in a conversation with someone pitching AI Safety? What question or piece of information makes/made the conversation stop progressing and start circling?


7 Answers

From an economics perspective, I think claims of double-digit GDP growth are dubious and undermine the credibility of the short AI timelines crowd. Here is a good summary of why it seems so implausible to me. To be clear, I think AI risk is a serious problem and I'm open to short timelines. But we shouldn't be forecasting GDP growth, we should be forecasting the thing we actually care about: the possibility of catastrophic risk. 

(This is a point of active disagreement where I'd expect e.g. some people at OpenPhil to believe double-digit GDP growth is plausible. So it's more of a disagreement than a communication problem, but one that I think will particularly push away people with backgrounds in economics.)

I join you in strongly disagreeing with people who say that we should expect unprecedented GDP growth from AI which is very much like AI today but better. OTOH, at some point we'll have AI that is like a new intelligent species arriving on our planet, and then I think all bets are off.


The misleading human-chimp analogy: AI will stand in relation to us the same way we stand in relation to chimps. I think this analogy basically ignores how humans have actually developed knowledge and power--not by rapid individual brain changes, but by slow, cumulative cultural changes. In turn, the analogy may lead us to make incorrect predictions about AI scenarios.

Well, human brains are about three times the mass of chimp brains, diverged from our most recent common ancestor with chimps about 6 million years ago, and have evolved a lot of distinctive new adaptations such as language, pedagogy, virtue signaling, art, music, humor, etc. So we might not want to put too much emphasis on cumulative cultural change as the key explanation for human/chimp differences.

Oh totally (and you probably know much more about this than me). I guess the key thing I'm challenging is the idea that there was something like a very fast transfer of power resulting just from upgraded computing power moving from chimp-ancestor brain -> human brain (a natural FOOM), which the discussion sometimes suggests. My understanding is that it's more like the new adaptations allowed for cumulative cultural change, which allowed for more power.

Political feasibility/nonstarters

Philosophy: Agency

While agency is often invoked as a crucial step in an AI or AGI becoming dangerous, I often find pitches for AI safety oscillate between a very deflationary sense of agency that does not ground worries well (e.g. "Able to represent some model of the world, plan and execute plans") and more substantive accounts of agency (e.g. "Able to act upon a wide variety of objects, including other agents, in a way that can be flexibly adjusted as it unfolds based on goal-representations").

I'm generally unsure if agency is a useful term for the debate at least when engaging with philosophers, as it comes with a lot of baggage that is not relevant to AI safety.

Aris -- great question. 

I'm also in psychology research, and I echo your frustrations about a lot of AI research having a very vague, misguided, and outdated notion of what human intelligence is. 

Specifically, psychologists use 'intelligence' in at least two ways: (1) it can refer (e.g. in cognitive psychology or evolutionary psychology) to universal cognitive abilities shared across humans, but (2) it can also refer (in IQ research and psychometrics) to individual differences in cognitive abilities. Notably 'general intelligence' (aka the g factor, as indexed by IQ scores) is a psychometric concept, not a description of a cognitive ability. 

The idea that humans have a 'general intelligence' as a distinctive mental faculty is a serious misunderstanding of the last 120 years of intelligence research, and makes it pretty confusing when AI researchers talk about 'Artificial General Intelligence'.

(I've written about these issues in my books 'The Mating Mind' and 'Mating Intelligence', and in lots of papers available here, under the headings 'Cognitive evolution' and 'Intelligence'.)

Seems like the problem is that the field of AI uses a different definition of intelligence? Chapter 4 of Human Compatible:

Before we can understand how to create intelligence, it helps to understand what it is. The answer is not to be found in IQ tests or even in Turing tests, but in a simple relationship between what we perceive, what we want, and what we do. Roughly speaking, an entity is intelligent to the extent that what it does is likely to achieve what it wants, given what it has perceived.
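For what it's worth, the quoted definition is fairly easy to operationalize in a toy simulation. The sketch below is my own illustration, not anything from the book: the target-reaching task, function names, and parameters are all invented for this example. It scores two policies purely by how often their actions achieve their goal, given their percepts:

```python
import random

# Toy sketch (my own illustration, not from Human Compatible) of the
# quoted definition: an entity is intelligent to the extent that what
# it does is likely to achieve what it wants, given what it has
# perceived. Here "wants" = reaching a target position on a line, and
# "perceives" = its current position.

def success_rate(policy, target=4, steps=20, trials=1000, seed=0):
    """Estimate how often a policy achieves its goal."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(trials):
        pos = 0
        for _ in range(steps):
            pos += policy(pos, target, rng)  # action chosen from percept
        successes += (pos == target)
    return successes / trials

def random_walk(pos, target, rng):
    # Ignores its percepts entirely.
    return rng.choice([-1, 1])

def goal_directed(pos, target, rng):
    # Moves toward the target, then stays put once there.
    return 0 if pos == target else (1 if pos < target else -1)

# Under this definition, goal_directed counts as "more intelligent"
# than random_walk -- no IQ test or g factor involved.
```

Under Russell's definition, the ranking comes entirely from expected goal achievement, which is exactly why it reads as broader (and more behavioral) than the psychometric notion.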

To me, this definition seems much broader than g factor. As a... (read more)

What were/are your basic and relevant questions? What were AIS folks missing?

It's been a while since, but from what I remember, my questions were generally in the same range as the framing highlighted by user seanrson above!
I've also heard objections from people who've felt that predictions about AGI from biological anchors don't understand the biology of a brain well enough to be making calculations. Ajeya herself even caveats "Technical advisor Paul Christiano originally proposed this way of thinking about brain computation; neither he nor I have a background in neuroscience and I have not attempted to talk to neuros... (read more)

My work (for a startup called Kebotix) aims to use and refine existing ML methods to accelerate scientific and technological progress, focused specifically on discovery of new chemicals and materials.

Most descriptions of TAI in AIS pitches route through essentially the same approach, claiming that smarter AI will be dramatically more successful than our current efforts, bringing about rapid economic growth and societal transformation, usually en route to claiming that the incentives to deploy quickly and unsafely will be astronomical.

However, this step often gets very little detailed attention in that story. Little thought is given to explicating how that would actually work in practice, and, crucially, whether intelligence is even the limiting factor in scientific and technological progress. My personal, limited experience is that better algorithms are rarely the bottleneck.

whether intelligence is even the limiting factor in scientific and technological progress. 

My personal, limited experience is that better algorithms are rarely the bottleneck.


Yeah, in some sense everything else you said might be true or correct.

But I suspect that by "better algorithms" you're thinking along the lines of "What's going to work as a classifier, is this gradient booster with these parameters going to work robustly for this dataset?", "More layers to reduce false negatives has huge diminishing returns, we need better coverage and id... (read more)