
skluug

397 karma · Joined April 2021

Posts (2)

Comments (26)

Interesting post! I think analogies are good for public communication but not for understanding things at a deep level. They're a good way to quickly template something you haven't thought about at all with something you are already familiar with. I think effective mass communication is quite important, and we shouldn't let the perfect be the enemy of the good.

I wouldn't consider my Terminator comparison an analogy in the sense of the other items on this list. Most of the other items have the character of "why might AI go rogue?" and then they describe something other than AI that is hard to understand or goes rogue in some sense and assert that AI is like that. But Terminator is just literally about an AI going rogue. It's not so much an analogy as a literal portrayal of the concern. My point wasn't so much that you should proactively tell people that AI risk is like Terminator, but that people are just going to notice this on their own (because it's incredibly obvious), and contradicting them makes no sense.

I'm of a split mind on this. On the one hand, I definitely think this is a better way to think about what will determine AI values than "the team of humans that succeeds in building the first AGI".

But I also think the development of powerful AI is likely to radically reallocate power, potentially towards AI developers. States derive their power from a monopoly on force, and I think there is likely to be a period before the obsolescence of human labor in which these monopolies are upset by whoever is able to most effectively develop and deploy AI capabilities. It's not clear who this will be, but it hardly seems guaranteed to be existing state powers or property holders, and AI developers have an obvious expertise and first-mover advantage.

I think the first point is subtly wrong in an important way. 

EAGs are not only useful insofar as they let community members do better work in the real world. EAGs are useful insofar as they result in a better world coming to be.

One way in which EAGs might make the world better is by fostering a sense of community, validation, and inclusion among those who have committed themselves to EA, thus motivating people to so commit themselves and to maintain such commitments. This function doesn't bear on "letting" people do better work per se.

Insofar as this goal is an important component of EAG's impact, it should be prioritized alongside more direct effects of the conference. EAG obviously exists to make the world a better place, but serving the EA community and making EAs happy is an important way in which EAG accomplishes this goal. 

I think this is a great post.

One reason I think it would be cool to see EA become more politically active is that political organizing is a great example of a low-commitment way for lots of people to enact change together. It kind of feels ridiculous that if there is an unsolved problem with the world, the only way I can personally contribute is to completely change careers to work on solving it full time, while most people are still barely aware it exists. 

I think the mechanism of "try to build broad consensus that a problem needs to get solved, then delegate collective resources towards solving it" is underrated in EA at current margins. It probably wasn't underrated before EA had billionaire-level funding, but as EA comes to have about as much money as you can get from small numbers of private actors, and it starts to enter the mainstream, I think it's worth taking the prospect of mass mobilization more seriously. 

This doesn't even necessarily have to look like getting a policy agenda enacted. I think of climate change as a problem that is being addressed by mass mobilization, but in the US, this mass mobilization has mostly not come in the form of government policy (at least not national policy). It's come from widespread understanding that it's a problem that needs to get solved, and is worth devoting resources to, leading to lots of investment in green technology.

I don't think this is a good characterization of, e.g., Kelsey's preference for her Philip Morris analogy over the Terminator analogy. Does rogue Philip Morris sound like a far harder problem to solve than rogue Skynet? Not to me, which is why the preference seems to me much more motivated by not wanting to sound science-fiction-y. The same goes for Dylan's piece: it doesn't seem to be saying "AI risk is a much harder problem than the Terminator films imply," except insofar as it misrepresents the Terminator films as involving evil humans intentionally making evil AI.

It seems to me like the proper explanatory path is "Like Terminator?" -> "Basically" -> "So why not just not give AI nuclear launch codes?" -> "There are a lot of other ways AI could take over". 

"Like Terminator?" -> "No, like Philip Morris" seems liable to confuse the audience about the very basic details of the issue, because Philip Morris didn't take over the world. 

I feel like this is a pretty insignificant objection, because it implies someone might go around thinking, "don't worry, AI risk is just like Terminator! all we'll have to do is bring humanity back from the brink of extinction, fighting amongst the rubble of civilization after a nuclear holocaust". Surely if people think the threat is only as bad as Terminator, that's plenty to get them to care.

“Perhaps the best window into what those working on AI really believe [about existential risks from AI] comes from the 2016 survey of leading AI researchers. As well as asking if and when AGI might be developed, it asked about the risks: 70 percent of the researchers agreed with Stuart Russell’s broad argument about why advanced AI might pose a risk; 48 percent thought society should prioritize AI safety research more (only 12 percent thought less). And half the respondents estimated that the probability of the longterm impact of AGI being “extremely bad (e.g., human extinction)” was at least 5 percent. I find this last point particularly remarkable—in how many other fields would the typical leading researcher think there is a one in twenty chance the field’s ultimate goal would be extremely bad for humanity?”

  • Toby Ord, The Precipice

Thanks for reading—you’re definitely right, my claim about the representativeness of Yudkowsky & Christiano’s views was wrong. I had only a narrow segment of the field in mind when I wrote this post. Thank you for conducting this very informative survey.

Thanks for reading! I admire that you take the time to respond to critiques even by random internet strangers. Thank you for all your hard work in promoting effective altruist ideas.

Yeah, you're right actually, that paragraph is a little too idealistic.

As a practical measure, I think it cuts both ways. Some people will hear "yes, like Terminator" and roll their eyes. Some people will hear "no, not like Terminator", get bored, and tune out. Embracing the comparison is helpful, in part, because it lets you quickly establish the stakes. The best path is probably somewhere in the middle, and it depends on the audience and context.

Overall I think it's just about finding that balance.
