It's pretty much generally agreed upon in the EA community that the development of unaligned AGI is the most pressing problem, with some saying we could have AGI within the next 30 years or so. In The Precipice, Toby Ord estimates the existential risk from unaligned AGI at 1 in 10 over the next century. On 80,000 Hours, 'positively shaping the development of artificial intelligence' sits at the top of its list of highest-priority areas.

Yet, outside of EA basically no one is worried about AI. If you talk to strangers about other potential existential risks like pandemics, nuclear war, or climate change, it makes sense to them. If you speak to a stranger about your worries of unaligned AI, they'll think you're insane (and that you watch too many sci-fi films).

On a quick scan of some mainstream news sites, it's hard to find much about existential risk and AI. There are bits here and there about how AI could be discriminatory, but mostly the focus is on useful things AI can do, e.g. 'How rangers are using AI to help protect India's tigers'. In fact (and this is after about 5 mins of searching, so not a full-blown analysis) it seems that overall the sentiment is generally positive. Which is totally at odds with what you see in the EA community (I know there is acknowledgement of how AI could be really positive, but mainly the discourse is about how bad it could be). By contrast, if you search for nuclear war, pretty much every mainstream news site is talking about it. It's true we're at a slightly riskier time at the moment, but I reckon most EAs would still say the risk of unaligned AGI is higher than the risk of nuclear war, even given the current tensions.

So if it's such a big risk, why is no one talking about it?

Why is it not on the agenda of governments? 

Learning about AI, I feel like I should be terrified, but when I speak to people who aren't in EA, I feel like my fears are overblown. 

I genuinely want to hear people's perspectives on why it's not talked about, because without mainstream support of the idea that AI is a risk, I feel like it's going to be a lot harder to get to where we want to be.


pretty much generally agreed upon in the EA community that the development of unaligned AGI is the most pressing problem

While there is significant support for "AI as cause area #1", I know plenty of EAs who do not agree with this. Therefore, "generally agreed upon" feels like too strong a wording to me. See also my post on why EAs are skeptical about AI safety.

For viewpoints from professional AI researchers, see Vael Gates interviews with AI researchers on AGI risk.

I mention those pieces not to argue that AI risk is overblown, but rather to shed more light on your question.

Thanks for linking these posts; it's useful to see a different perspective from the one I feel gets the most exposure.

Not only is Lukas right to point out that many EAs are skeptical of AI risk, but AI isn't even the top priority as selected by EAs; Global Poverty continues to be: https://rethinkpriorities.org/publications/eas2020-cause-prioritization

This is not just a general public/"uninformed masses" phenomenon. It's worth noting that even among AI/ML researchers, AGI concerns and AI safety are niche. Among the people with literally the most exposure to and expertise in AI/ML systems, only a small fraction are focusing on AGI and AI safety. A comparatively larger fraction of them work on "nearterm AI ethics" (i.e. fairness, discrimination and privacy concerns for current ML systems): there is a pretty large conference on this topic area (FAccT), and I do not know if AI safety has a comparably-sized conference.

Why is this? My anecdotal experience with ML researcher friends who don't work on AI safety is that they basically see AGI as very unlikely. I am in no position to adjudicate the plausibility of these arguments, but that's the little I have seen.

they basically see AGI as very unlikely

Certainly some people you talk to in the fairness/bias crowd think AGI is very unlikely, but that's definitely not a consensus view among AI researchers. E.g. see this survey of AI researchers (at top conferences in 2015, not selecting for AI safety folk), which finds that:

Researchers believe there is a 50% chance of AI outperforming humans in all tasks in 45 years and of automating all human jobs in 120 years

That's fascinating! But I don't know if that is the same notion of AGI and AI risk that we talk about in EA. It's very possible to believe that AI will automate most jobs and still not believe that AI will become agentic/misaligned. That's the notion of AGI that I was referring to.

Right, I just wanted to point out that the average AI researcher who dismisses AI x-risk doesn't do so because they think AGI is very unlikely. But I admit to often being confused about why they do dismiss AI x-risk.

The same survey asked AI researchers about the outcome they expect from AGI:

The median probability was 25% for a “good” outcome and 20% for an “extremely good” outcome. By contrast, the probability was 10% for a bad outcome and 5% for an outcome described as “Extremely Bad (e.g., human extinction).”

If I learned that there was some scientific field where the median researcher assigned a 5% probability that we all die due to advances in their field, I'd be incredibly worried. Going off this data alone, it seems hard to make a case that x-risk from AI is some niche thing that almost no AI researchers think is real.

The median researcher does think it's somewhat unlikely, but 5% extinction risk is more than enough to take it very seriously and motivate a huge research and policy effort.

I don't think the answers are illuminating if the question is "conditional on AGI happening, would it be good or bad" - that doesn't yield super meaningful answers from people who believe that AGI in the agentic sense is vanishingly unlikely. Or rather, it is a meaningful question, but to those people AGI occurs with near-zero probability, so even if it were very bad it might not be a priority.

The question was:

Assume for the purpose of this question that HLMI* will at some point exist. How positive or negative do you expect the overall impact of this to be on humanity, in the long run?

So it doesn't presuppose some agentic form of AGI—but rather asks about the same type of technology that the median respondent gave a 50% chance of arriving within 45 years.

*HLMI was defined in the survey as:

“High-level machine intelligence” (HLMI) is achieved when unaided machines can accomplish every task better and more cheaply than human workers.

This is a really useful (and kind of scary) perspective. Thanks for sharing.

I think for other prominent x-risks like pandemics, climate change and nuclear war, there are “smaller scale versions” which are well known and have caused lots of harm (Ebola / COVID, extreme weather events, Hiroshima and Nagasaki). I think these make the risk from more extreme versions of these events feel realistic and scary.

For AI, I can’t think of what smaller scale versions of uncontrollable AI would look like, and I don’t think other faults with AI have caused severe enough / well known enough harm yet. So AI x-risk feels more like a fiction thing.

So based on my thinking on this, raising awareness of immediate harms from AI (e.g. surveillance uses, discrimination) could make the x-risk feel realistic too (at least the x-risk from AI controlled by malicious actors) - this could be tested in a study. But I can't think of a smaller scale version of uncontrollable AI per se.

This is a really great point, and it helps explain why AI risk is a lot less spoken about than other risks.

I mostly buy the story in this post.

without mainstream support of the idea that AI is a risk, I feel like it's going to be a lot harder to get to where we want to be.

The consequences of everyone freaking out about AI are not obviously or automatically good. (I'm personally optimistic that they could be good, but people waking up to AI by default entails more racing toward powerful AI.)

Definitely a fair point!


A lot of it, I would guess, comes down to lack of exposure to the reasonable version of the argument.

A key bottleneck would therefore be media, as it was for climate change.

Trying to put myself in a "media brain", here are some reasons why you might not want to run a story on this topic (apologies if this seems dismissive; that is not the intent. I broadly sympathize with the thesis that AGI is at least a potential major cause area, but it may be helpful to channel where the dismissiveness comes from on a deeper level)...

- The thesis is associated with science-fiction. Taking science-fiction seriously as an adult is associated with escapism and a refusal to accept reality.

- It is seen as something only computer science "nerds", if you'll pardon the term, worry about. This makes it easy to dismiss the underlying concept as reflecting a blinkered vision of the world. The idea is that if only those people went outside and "touched grass", they would develop a different worldview - that the worldview is a product of being stuck inside a room looking at a computer screen all day. 

- There is something inherently suspicious about CS people looking at the problem of human suffering and concluding that, rather than helping people in tangible ways in the physical world, the single most important thing other people should be doing is to be more like them and do more of the things they are already doing. Of course the paper clip optimizer is going to say that increasing the supply of paper clips is the most pressing problem in the universe. In fact, that's one reason why media people love to run stories about the media itself!

- In any case, there appears to be no theory of change - "if only we did this, that would take care of that". At least with climate change there was the basic idea of reducing carbon emissions. What is the 'reducing carbon emissions' of AI x-risk? If there is nothing we can do that would make a difference, we might as well forget about it.

What might help overcome these issues?

To start with you'd need some kind of translator, an idea launderer.  Someone 

  1. well-known
  2. credible (so not a Hollywood actor)
  3. who isn't coded as a "nerd" (someone you would not expect, on priors, to care about this topic)
  4. who isn't too polarizing (no eccentric billionaire or polarizing politician).

Basically, a human equivalent of the BBC. Then have them produce documentaries on the topic - some might take off.

Thanks for this comment, it's exactly the type of thing I was looking for. The person who comes to mind for me would be Louis Theroux... although he mainly makes documentaries about people rather than 'things', some of his documentaries nonetheless grapple with issues around ethics or morality. I think it might be a bit of a stretch to imagine this actually happening, but I would say he does meet the criteria.


Babbling without any pruning:

- It is very difficult to imagine such an AI
- Currently, computers and AI in video games are stupid
- Collective representations such as Ex Machina or Terminator reinforce the idea that it is only fiction
- Understanding the orthogonality thesis requires a fine-grained epistemology to dissociate two concepts that are often linked in practice in everyday life
- Loss of status for academics
- It is possible that designing a nanoconductor factory really is too complicated for an AI only slightly above human level. To understand the risks, we have to take a step back and look at history, understand what an exponential curve is, and tell ourselves that a superintelligence will arrive later on

These are useful points, thank you!

The stuff in the news does not equal the stuff that's actually most important to learn and talk about. Instead, it's some combination of (a) stuff that sells clicks/eyeballs/retweets, and (b) the hobbyhorses of journalists and editors. Note that journalists usually have an extremely shallow understanding of what they write about, much more shallow than you'd like to believe. (See: Gell-Mann amnesia.)

(Think about the incentives. If there were an organization dedicated to sifting through all the world's problems and sorting them by their all-things-considered importance to the world, and then writing primarily about the problems judged to be maximally important... are you imagining that this organization's front page would be wildly popular, would become the new New York Times or Fox News or whatever and have hundreds of millions of viewers? No, it would be a niche thing. People would disagree with its conclusions about what's important, often without even having given them more than two seconds' thought, and even the people who agree with its conclusions might be bored and go read other publications instead.)

The stuff on the agendas of governments, unfortunately, also does not equal the stuff that's actually most important. Politicians have arguably less of an understanding of most things than journalists.

 

If you speak to a stranger about your worries of unaligned AI, they'll think you're insane (and watch too many sci-fi films).

Have you actually tried this? (How many times?)

Talking to lots of normies about AI risk is on my to-do list, but in the meantime there are some relevant surveys, and while it's true that people don't often bring up AI as a thing-they're-concerned-about, if you ask them about AI a lot of people seem pretty concerned that we won't be able to control it (e.g., the third of these surveys finds that 67% of respondents are concerned about "artificial intelligence becoming uncontrollable").

(Kelsey Piper has written up the second survey linked above.)

That's definitely unexpected. I wonder if, when people are questioned about it, they say they're worried about it... but generally it's not on their minds as a big worry. I could be wrong though.

If you speak to a stranger about your worries of unaligned AI, they'll think you're insane (and watch too many sci-fi films).

I'm not so sure this is true. In my own experience, a correct explanation of the problem with unaligned AI makes sense to almost everyone with some minimum of reasoning skill. Although this is anecdotal, I would not be surprised if an actual survey among "strangers" would show this too.

Commenting on your general point, I think the reason is that most people's sense of when AGI could plausibly happen is "in the far future", which makes it psychologically unremarkable at this point.

Something like extinction (even if literal extinction is unlikely) from climate change, although possibly further off in time, might feel closer to a lot of people because climate change is already killing people.

Similarly, I find that almost everyone in my social circle agrees that "extremely advanced AI could be extremely dangerous." Then I mention that way more AI researchers focus on AI capabilities than long-term safety (based on the AI section in the "official" EA introduction), and people continue to nod. They certainly don't think I'm insane.

Nitpicking here, but I do not believe that AI is the most pressing x-risk problem, as opposed to a pressing one:

It's pretty much generally agreed upon in the EA community that the development of unaligned AGI is the most pressing problem

Added 2022-08-09: The original claim was that AGI is the most pressing problem from a longtermist point of view, so I've edited this comment to clarify that I mean problem, not x-risk. To prove that AGI is the most pressing problem, one needs to prove that it's more cost-effective to work on AGI safety than to work on any other x-risk and any broad intervention to improve the future. (For clarity, a "pressing" problem is one that's cost-effective to allocate resources to at current margins.)

It's far from obvious to me that this is a dominant view: in 2021, Ben Todd said that broad interventions like improving institutional decision-making and reducing great power conflict were the largest resource gap in the EA cause portfolio.

IIRC, Toby Ord's estimates of the risk of human extinction in The Precipice basically come entirely from AI, and everything else is a rounding error. Since then, AI has only become more pressing. I think it is probably fair to say that "AI is the most pressing x-risk" is a dominant view.

No, you're probably thinking of anthropogenic risk. AI is 1/10, whereas the total estimated x-risk is 1/6.

I don't think we should defer too much to Ord's x-risk estimates, but since we're talking about them here goes:

  • Ord's estimate of total natural risk is 1 in 10,000, which is roughly 1,700 times smaller than the total anthropogenic risk (1 in 6), since (1/6) ÷ (1/10,000) ≈ 1,700.
  • Risk from engineered pandemics (1 in 30) is within an order of magnitude of risk from misaligned AI (1 in 10), so it's hardly a rounding error (although simeon_c recently argued that Ord "vastly overestimates" biorisk).

Ah yes, that's right. Still, AI contributes the majority of the x-risk.

You think there's an x-risk more urgent than AI? What could it be? Nanotech isn't going to be invented within 20 years, there aren't any asteroids about to hit the earth, climate tail risks only come into effect next century, and deadly pandemics or supervolcanic eruptions are inevitable on long timescales but aren't common enough to be the top source of risk in the time until AGI is invented. The only way anything is more risky than AI within 50 years is if you expect something like a major war leading to the use of enough nuclear or biological weapons that everyone dies, and I really doubt that's more than 10% likely in the next half century.

Okay, fine. I agree that it's hard to come up with an x-risk more urgent than AGI. (Though here's one: digital people being instantiated and made to suffer in large numbers would be an s-risk, and could potentially outweigh the risk of damage done by misaligned AGI over the long term.)

Thank you for sharing this! Really great post - it answers lots of the questions that I had.

I don't know what a mass movement to align AGI would look like (so how could I build one?), and I'd expect polarization dynamics to undermine it and make the situation worse (so I wouldn't want to). It's possible this explains why it hasn't happened. It's also inherently somewhat incompatible with polarization (it affects everyone equally), which will reduce the amount of air it will be given.

My experience is that a pretty large proportion of people actually do find the alignment problem immediately concerning when it's explained well. But then they can't see anything they can do about it so they rightly don't think about it very much. This is probably for the best.

I'd discourage trying to promote the issue (beyond AI research communities) until someone can present a detailed, realistic story as to how making it political could possibly help. (~ I've written up a big part of that story... but there's an unrealized technical dependency...)

I recently found a Swiss AI survey that indicates that many people do care about AI.
[This is only very weak evidence against your thesis, but might still interest you 🙂.]

Sample size:
Population – 1245 people
Opinion Leaders – 327 people [from the economy, public administration, science and education]

The question: 
"Do you fear the emergence of an "artificial super-intelligence", and that robots will take power over humans?"

From the general population, 11% responded "Yes, very" and 37% responded "Yes, a bit".
So roughly half of the respondents who expressed any sentiment were at least somewhat worried.

The 'opinion leaders' however are much less concerned. Only 2% have a lot of fear and 23% have a bit of fear.

These are interesting findings! It would be interesting to see whether these kinds of results hold elsewhere.

But the same study also found that only 41% of respondents from the general population placed AI becoming more intelligent than humans among their top 3 risks of concern, out of a choice of 5 risks.
Only for 12% of respondents was it the biggest concern. 'Opinion leaders' were again more optimistic – only 5% of them thought AI intelligence surpassing human intelligence was the biggest concern.

Question: "Which of the potential risks of the development of artificial intelligence concerns you the most? And the second most? And the third most?"
Option 1: The risks related to personal security and data protection.
Option 2: The risk of misinterpretation by machines.
Option 3: Loss of jobs.
Option 4: Artificial intelligence that surpasses human intelligence.
Option 5: Others

A few random thoughts I have on this:

I've tried speaking to a few non-EA people (very few, countable on one hand), and I kind of agree that they think you've watched way too much sci-fi when you talk about AI safety, but they don't think it's too far-fetched. A specific conversation I remember made me realize that one reason might be that a lot of people simply think they cannot do much about it. 'Leave it to the experts' or 'I don't know anything about AI and ML' seem to be thoughts that non-EA people have on the issue, which prevents them from actively trying to reduce the risk, if it finds a way into their list of important problems at all. There's also the fact that AI safety is not a major field, which leads to misconceptions like needing a compsci PhD and a lot of technical math/CS knowledge to work in AI safety, when there actually exist roles that do not require such expertise. This quite obviously prevents people from changing their careers to work in AI safety, but, even more so, it discourages them from reading about it at all (this might also be why distillation of AI alignment work is in high demand), even though people read about international conflicts, nuclear risk, and climate change more frequently (I'm not sure of the difference in scale, but I can personally vouch for this since I had never heard of AI alignment before joining the EA community).

I hadn't thought of the fact that people may think they have no power, so they just kind of... don't think about it. I suppose more work needs to be done to show that people can work on it.

It's like the same problem with VR: no one cares!

AI is an overblown technological elephant in the room; it offers nothing useful to the average user. The average user already knows it will cost a subscription to get anything useful out of it, and then there is the trust issue: people are less likely to trust big tech today than in years past. Also, AI isn't anything new, just like VR.

Bottom line: no one cares about filling fat pockets anymore.

Great post! I'm gonna throw out two spicy takes.

Firstly, I don't think it's so much that people don't care about AI Safety; rather, who cares about a threat is largely a matter of who it affects. Natural disasters etc. affect everyone relatively (though not exactly) equally, whereas AI harms overwhelmingly affect the underprivileged and vulnerable: people who are vastly underrepresented in both EA and wider STEM/academia, who are less able to collate and utilise resources, and who are less able to raise alarms. As a result, AI Safety is a field where many of the current and future threats are hidden from view.

Secondly, AGI Safety as a field tends to isolate itself from other areas of AI Safety as if the two aren't massively related, and goes off on a majorly theoretical angle as a result. As a consequence, AGI/ASI Safety folk are seen, by both the public and people within AI, as living in a fantasy world of their own making compared to lots of other areas of AI risk. I don't personally agree with this, but it's something I hear a lot in AI research.

The first argument seems suspect on a few levels.

  1. No argument about AGI risk that I've seen argues that it affects the underprivileged most. In fact, arguments emphasize how every single one of us is vulnerable to AI and that AI takeover would be a catastrophe for all of humanity. There is no story in which misaligned AI only hurts poor/vulnerable people.
  2. The representation argument doesn't make sense as it would imply that EA, as a pretty undiverse space, would not care about AGI risk. That is not the case. Moreover it would imply that there are many suppressed advocates for AI safety among social activists and leaders of underprivileged groups. That is definitely not the case.
No argument about AGI risk that I've seen argues that it affects the underprivileged most. In fact, arguments emphasize how every single one of us is vulnerable to AI and that AI takeover would be a catastrophe for all of humanity. There is no story in which misaligned AI only hurts poor/vulnerable people.

You're misunderstanding something about why many people are not concerned with AGI risks despite being sympathetic to various aspects of AI ethics. No one concerned with AGI x-risk is arguing it will disproportionately harm the underprivileged. But current AI harms are from things like discriminatory criminal sentencing algorithms, so a lot of the AI ethics discourse involves fairness and privilege, and people concerned with those issues don't fully appreciate that misaligned AGI 1) hurts everyone, and 2) is a real thing that very well might happen within 20 years, not just some imaginary sci-fi story made up by overprivileged white nerds.

There is some discourse around technological unemployment putting low-skilled employees out of work, but this is a niche political argument that I've mostly heard from proponents of UBI. I think it's less critical than x-risk, and if artificial intelligence gains the ability to do diverse tasks as well as humans can, I'll be just as unemployed a computer programmer as anyone else is a coal miner.

This is the opposite of the point made in the parent comment, and I agree with it.

You raise some fair points, but there are others I would disagree with. I would say that just because there isn't a popular argument that AGI risk affects underprivileged people the most doesn't make it not true. I can't think of a transformative technology in human history that didn't impact people more the lower down the social strata you go, and AI thus far has not only followed this trend but greatly exacerbated it. Current AI harms are overwhelmingly targeted at these groups. I can't think of any reason why much more powerful AI such as AGI would buck this trend. Obviously if we only focused on existential risk this may not be the case, but even a marginally misaligned AGI would exacerbate current AI harms, particularly in suffering ethics cases.
 

People are concerned about AGI because it could lead to human extinction or civilizational collapse. That really seems like it affects everyone. It's more analogous to nuclear war. If there was a full scale global nuclear war, being privileged would not help you very much.

Besides, if you're going to make the point that AI is just like every other issue in affecting the most vulnerable, then you haven't explained why people don't care about AI risk. That is, you haven't identified something unique about AI. You could apply the same argument to climate change, to pandemic risk, to inequality. All of these issues disproportionately affect the poor, yet all of them occupy substantially more public discussion than AI. What makes AI different?
