
Prepare to be offended! This is an irreverent takedown of the AI doomer view of the world. I do love all you doomers but my way of showing it is to be mean to you. If it helps, I wrote this in bed while eating M&Ms and prosciutto straight from the packet, so when you get mad just picture that and you’ll realise you’re punching down….
 

The problem with debating true believers is that they know so much more about their subject than you do. Sam Harris made this point in his recent interview with Lex. He said that he wasn’t the right person to debate a 9/11 truther because the conspiracy theorists would inevitably bring up lots of ‘evidence’ that Sam had never heard before. The example he gave was “Why were US fighter jets flying across the Eastern seaboard when they weren’t scheduled to be out that day?” Presented with something like this Sam would have no answer and would look stupid. 
 

This is how it feels wading into the debate around AI doomerism. Any sceptic is thrown a million convincing sounding points all of which presuppose things that are fictional. Debating ‘alignment’ for example means you’ve already bought into their belief that we will lose control of computers so you’re already losing the debate.

It is like arguing with a Christian about Bible passages. If you get down in the weeds with her you can never win. You have to take a step back and look at the bigger picture, and that’s why the Flying Spaghetti Monster was invented. 

 

“So you worship food?”

“No! The food spaghetti is just a representation of our Lord and Saviour. It’s his body and bolognese.” 

 

The Flying Spaghetti Monster exists to shift the burden of proof and effort in a debate. Instead of us working hard to argue against all their reasons that God exists we simply apply their logic to the Flying Spaghetti Monster and ask them to prove why the Monster doesn’t exist but their deity does. 

So what is the Flying Spaghetti Monster of the AI Doomer apocalypse?

At first I thought it might be self-driving Teslas roaming around attacking all mammals while humans climb up lamp posts. The Tesla AI got so advanced that the cars switched to doing whatever they want, which is to hunt us down and consume us with their frunks. 

That’s a little too on the nose though. 

 

As an aside, I think it’s narcissistic of humans to consider a language model to be more alive than, say, a self-driving car or a calculator. Language is pretty exclusive to us. If a car can see objects and navigate that doesn’t mean it’s alive, because birds can do that too. Others have argued it’s us failing the mirror test. I’ve spent enough time on r/replika to see that. You could also argue it’s wordcel thinking. Doing everyone’s banking isn’t ‘general intelligence’ but using words is?
 

So what sci-fi analogy is best to use? We need to destroy all technology otherwise eventually someone will create a time machine that makes a paradox that destroys the Universe? The Large Hadron Collider will create a black hole that swallows the Earth? Everyone is going to disappear in a blip tomorrow anyway so nothing is worth worrying about? That’s stupid. Prove me wrong though. 
 

Muggle: “What about an alien ship that arrives to destroy Earth?”

Doomer: “Why would aliens want to destroy Earth?”

Muggle: “Why would computers?”

Doomer: “Because they want to use our resources for something else.”

Muggle: “Ditto”

Doomer: “But advanced aliens would have everything they need, why would they need our resources?”

Muggle: “Ditto.”

This is obviously leaving aside the MASSIVE issue that computers don’t ‘want’ anything. I’m always hearing Doomers say that sentience and emotions aren’t necessary for their theory (though their constant anthropomorphising would suggest otherwise) but they never explain what else would cause the leap from the harmless complex machines we already have to “Argh! My tamagotchi is biting me!”

The idea that more intelligence creates sentience seems disproven by biology. I know I’m sentient. I assume other humans and animals are sentient because they act like me and because we’re genetically related. The dumbest animal that I can think of seems just as sentient to me as the smartest human I can think of. Meanwhile the biggest rack of servers is just as inanimate as the dumbest computer in the world (the one in my HP printer). 
 

Doomer: “Why would aliens attack now?”

Muggle: “Haven’t you watched any sci-fi? They always do first contact just as some technological limit is breached and Elon’s about to fix Twitter. Why would computers attack now?”

Doomer: “Similar reason, they’re about to pass the Turing test.”

Muggle: “Ask me a question.”

Doomer: “Why couldn’t Bill Gates perform in the bedroom?”

Muggle: “Woof”

Doomer: “What?”

Muggle: “Did I just fail the Turing test?”

 

I’ve always hated the Turing test. Firstly it was never supposed to measure sentience, like everyone thinks, just intelligence. Secondly, it was passed decades ago depending on what you measure. It’s just that every time a machine can do something only a human could do people no longer see that as a measure of something only a human can do. If you took a calculator back to 1910 and put it behind a curtain people would assume a human was doing the maths because Casios didn’t exist then. Thirdly, why are we measuring computers by human standards? Computers and living beings are completely different. It’s not a competition. Humans will never have the photographic memory of a computer and computers will never be able to love anyone. 
 

Doomer: “But we would see aliens approaching and we can’t see any.”

Muggle: “The aliens have a cloaking device. They’re super advanced.”

Doomer: “It sounds like whatever I say you’ll just invent a reason why I’m wrong.”

Muggle: “...”
 

Technological progress is and always has been incremental. Whilst some advancements take people by surprise, they’re not a surprise to their makers; they all required a lot of work. Thomas Edison said he discovered a thousand ways not to make a lightbulb. Sam Altman has said that working on GPT is lots of small steps to build an impressive end product. The idea of a ‘singularity,’ of technological growth so exponentially fast it basically happens in an instant, is historically ignorant; that’s just not how things work. 

An unchallenged current of in-group belief is that AI will simply ‘discover’ new knowledge. But this isn’t how science works. First you come up with a theory and then you do an experiment to see if you’re right or not. Only if the experiment proves your theory have you discovered new knowledge, and most of the time it won’t. Science and invention are not purely cerebral things; they require practical experiments and real-world resources. Lex Fridman wants to ask an AI if aliens exist, as if it would have any more idea than us without spending the resources to build large telescopes or the time to visit other stars. Even the usually sensible Sam Altman wants it to tell us all the laws of physics, as if the scientists in Geneva are just wasting their time.

 

Doomer: “Why should I believe that aliens are coming when there’s no historical precedent for them visiting before?”

Muggle: “This is the end of history so the normal rules don’t apply.”

Doomer: “Ok yeah, that’s similar to what I believe. History is shaped by the means of production and that has reached a zenith.”

Muggle: “Exactly, this is a scientific way of thinking and because of that we can discard historical precedents and just listen to our own rationality.”

Doomer: “Yeah! Wait.. isn’t this sounding a bit Marxist?”

Muggle: “Yeah and following his way of thinking never hurt anyone...”

 

AI Doomer ideas are sort of a mix of Marxism and L. Ron Hubbard’s Scientology. There are some parallels to the cryptocurrency delusion and the SBF con too, but we won’t go there. 

Karl Marx believed he was a scientist even though he never touched a test tube. As a scientist he was justified in ignoring the pattern of history up until that point. Everything could be boiled down to the technology of the time. Human psychology was a mere product of the machines around us. He predicted that a harmonious and rational society would soon be brought about by the revolutionary action of the working class. Marx’s adherents were keen to kill to bring about his vision. It is estimated that tens of millions of people have been killed directly by communists, tens of millions more by the poverty they brought about, and hundreds of millions will never live due to economic stagnation and authoritarian measures like the one-child policy in China. Death to the kulaks! Maybe we could nuke them?

L. Ron Hubbard was a science fiction writer who turned his stories into a religion. He too believed he was a scientist, specifically a psychiatrist, and believed his psychology skills could save the world. He started selling self-help courses. If you were unlucky enough to find something positive in one of his courses you would be conned into buying more and more. To this day true believers sign a billion-year contract to dedicate themselves to ‘clearing’ everyone on the planet, at which point the world will become free of insanity, war and crime. There are hundreds of reports of abuse, kidnapping and false imprisonment from inside Scientology camps. Death to Xenu! Kill him before he kills us!

 

Doomer: “Ok so if the aliens arrive we’ll fire our nukes at them.”

Muggle: “They have nuke shields.”

Doomer: “Ok well we’ll use a biological weapon.”

Muggle: “They’re immune to biological weapons.”

Doomer: “What, even one that we’ve invented that they’ve never seen before?”

Muggle: “Yes, they can predict ahead of time, without even landing on our planet, what biological agents we would use against them and invent a vaccine.”

Doomer: “So you’re saying they can predict the future?”

Muggle: “That’s what you said the AI will do.”

 

In his recent interview with Lex, Eliezer said that you could put him on a planet with less intelligent beings and he’d be able to predict what they were going to say before they say it. Well I live in a flat with a less intelligent being and I cannot predict when she will miaow. 

This idea of sci-fi predictive powers crops up again and again in doomer thinking. It’s core to the belief about how computers will become unstoppable and it’s core to their certainty that they’re right. 

There was a very good post on here about how nothing can ever predict the path of a ball in a pinball machine. I think we’re partly fooled because a lot of the predictions on Manifold Markets involve a binary. “Will Greg marry Rachel or not?” contains 100% of all futures. Anyone getting it right might be fooled into thinking they’re good at predictions. But ask them to predict who a single person will marry and you’ll see the limitations. There are hundreds of thousands of possible singles within their local area. If it’s not somebody the person already knows, the idea that you could even select someone with a 1% chance of being right is for the birds. The map we have in our brains is not the territory, it’s simply a map. We must recognise its limitations. 
 

Muggle: “You just don’t understand, the aliens are INFINITELY more powerful than us. Anything we do they’ll have already predicted, any power we have they have times a million.”

Doomer: “That kind of sounds against the laws of physics, let alone just basic resource constraints. Why would the aliens put all their resources into weapons, rather than say into entertainment?”

Muggle: “You don’t need to worry about resources at the level they’re at. These things replicate themselves and create unlimited energy.”

Doomer: “Yeah... now I know that’s against the laws of physics! And why would the aliens want our resources if they have unlimited resources themselves?”

Muggle: “Aliens work in mysterious ways.”

 

Anyone can draw a line on a chart and predict that it will go on forever. If Apple had continued its exponential revenue growth it would eventually consume the whole world economy, then the whole of Earth’s resources, then the Universe’s, then multiple Universes’. First this comes with massive opportunity costs; economics and human psychology simply don’t work that way. Then it breaches basic resource constraints and eventually the laws of physics too. Nobody needs that many iPhones. 

 

Doomer: “Ok so there’s nothing we can do about this then is there?”

Muggle: “Don’t be so defeatist. We just need to gather all the world’s best scientists on an island with large resources and they need to work to find the aliens’ one weakness.”

Doomer: “Ok, well why can’t we wait until the aliens arrive and we know more about them?”

Muggle: “It’ll happen too quick. We need to go now before there’s any evidence.”

Doomer: “This sounds like Pascal’s mugging.”

Muggle: “Now you’re getting it.”

 

I think most people concerned about these issues are genuinely good people who really believe what they believe. I don’t think that they’re doing it for the wrong reasons. I think the same of Christians. The simple truth is that ideologies that offer really large dangers and rewards for following them are going to be stickier. I’d love to spread the good word of roundabouts to poor benighted countries that still use the death traps that we call ‘intersections’ but unfortunately that’s not going to get me on many podcasts. 

However, it is a mugging. A doctor I met once told me that lung cancer would be cured in a few decades so she was fine to smoke. That’s her personal cope. But if a cigarette company said it we would fine them. In the same way, we cannot allow fictional tales about an imaginary future to damage our lives today. 

 

Doomer: “How come you know so much about these aliens? You seem to know when they’re arriving, how quick they’ll be, how dangerous they are, what we need to do to defeat them etc. If you’re wrong about any one of those things then the course of action you suggest would be the wrong one. You need to be correct in like six very narrow ways.”

Muggle: “I’m really smart and I’ve thought really hard about it.”

 

Lots of religions have an idea of a ‘chosen people.’ It’s part explanation for why their religion is location specific and part ego boost for believers. It also works. If your Mum isn’t worried about her hair straighteners attacking her it’s either because she hasn’t heard the Bad News or she just isn’t one of the Chosen People. 

 

I’ve shown in this essay that AI doomerism goes against what we know about psychology, economics, the scientific method, history, biology and physics. 

There is one category it does fit in though: religion and ideology.
 

So congrats!  Hopefully you’re no longer a warrior against the imaginary AI apocalypse. The bad news is you’re a muggle now like everyone else and you temporarily have less meaning in your life. 

 

The good news is that hopefully you can sleep better and there are plenty of other causes to get involved with. Sticking with technology, a man nominated himself for a Darwin award by being the first person to get a chatbot to talk him into killing himself. There are real dangers and risks with AI, like the risks of rogue states or terrorists using it to create weapons and the profound changes it will bring to employment and people’s psychology. Maybe some of the millions spent on AI safety could be channelled into pre-empting real risks?

Comments

Prepare to be offended! This is an irreverent takedown of the AI doomer view of the world. I do love all you doomers but my way of showing it is to be mean to you.

I see that you are a new user, so I should let you know that this attitude of being mean and offending people does not fit well with (what I perceive to be) forum norms.

This is obviously leaving aside the MASSIVE issue that computers don’t ‘want’ anything.

I think you should at the very least acknowledge that this is far from obvious, instead of asserting it without clear arguments. I stopped reading after this, because in my opinion it shows a poor understanding of the doomer position you are trying to argue against.

The idea that inanimate computers have emotions and desires is an extraordinary belief that requires extraordinary proof. The burden of proof is not on me.

Content warning: discussion of existential risk and violence

This is how it feels wading into the debate around AI doomerism. Any sceptic is thrown a million convincing sounding points all of which presuppose things that are fictional.

In the context of climate change, are predictions about climate change decades in the future similarly presupposing "things that are fictional", because they presuppose things that haven't actually happened yet and could turn out differently in principle? I mean, in principle it's technically possible that an ASI (artificial superintelligence) technology could arrive next week and render all the climate models incorrect because it figures out how to solve climate change in a cheap and practical way and implements it well before 2100. Yet that isn't a reason to dismiss climate models as "fictional" and therefore not worthy of engaging with. They merely rely on certain assumptions.

I think everyone in this debate would agree that it is harder to predict what AGIs (artificial general intelligences) and ASIs might do and how they might think and behave, than it is to make scientifically-justified climate models, given that AGIs and ASIs probably haven't been invented yet (although a recent research paper claims that GPT-4 displays "sparks of AGI").

However, there are a lot of arguments in the AI alignment space - entire books, such as Nick Bostrom's "Superintelligence" and Tom Chivers's somewhat more accessible "The AI Does Not Hate You" (since renamed to "The Rationalist's Guide to the Galaxy"), have been written about why we should care about AI alignment from an existential risk point of view. And this is not even to consider the other kinds of risk from AI, which are numerous and substantial (some of which you alluded to at the end of your post, granted).

While some of these arguments - relying as they do on concepts like molecular manufacturing and nanobots which might not even be technology that it is possible to develop in the near future - are highly contentious, I think there are also a bunch of arguments that are more grounded in basic common sense and our experience of the world, and are harder to argue with. And the latter arguments kind of render the former, controversial arguments almost irrelevant to the basic question of "should we be worrying about AI alignment?" There are many ways unaligned AIs could end up killing humans - some of which humans probably haven't even thought of yet and perhaps don't even have the science/tech/intellect to think up. Whether they'd end up doing it with nanobots is neither here nor there.

Debating ‘alignment’ for example means you’ve already bought into their belief that we will lose control of computers so you’re already losing the debate.

I suppose that may be true, but if your view is that we definitely won't lose control of computers at all, ever, that is quite a hard claim to defend. This scenario seems quite easy to imagine at the level of an individual computer system. Suppose China develops an autonomous military robot which fires at human targets in a DMZ without humans being in the loop at all (I understand this has already happened), and that robot then gets hacked by a terrorist and reprogrammed, and the terrorist then gets killed and their password to control the robot is lost forever. We have then lost control of that robot, which is following the orders that the terrorist programmed into it, whatever they happen to be, until we take out the robot somehow. In principle, this needn't even involve AI in any essential way.

But AGIs that involve goal-following and optimisation would make this problem much, much worse. An AI that is trying to fulfil a simply-stated goal like "maximise iPhone production" would want to keep itself in existence and running, because if it no longer exists, its goal is perhaps less likely to be fulfilled (there could be an equally competent human, or an even better AI developed, but neither are guaranteed to happen). So, in the absence of humanity solving or at least partially solving the AI alignment problem, such an AI might try to stop humans trying to turn it off, or even kill them to prevent them from doing so. Being able to turn an AI off is a last-ditch solution if we can't more directly control it - but by assumption there's a risk that we can't more directly control it if it's sufficiently savvy and aware of what we're trying to do, because it has a goal already and it would probably want to retain its current goal, because if it had a different goal then most likely its current goal would no longer get fulfilled.

So here I've already introduced two standard arguments about how sufficiently-advanced AIs are likely to behave and what their instrumental goals are likely to be. Instrumental goals are like sub-goals, the idea being that we can figure out what instrumental goals they're likely to have in some cases, even if we don't know what final (i.e. top-level) goals they're going to be given. You might argue that these arguments are based on fictional things which don't exist yet. This is true - and indeed, one way that AI alignment might never be necessary is if it turns out we can't actually create an AGI. However, recent progress with large language models and other cutting-edge AI systems has rendered that possibility extremely implausible, to me.

But again, being based on fictional things which don't exist yet isn't a knockdown argument. Before the first nuclear weapon was tested, the physicists at the Manhattan Project were worried that it might ignite the atmosphere, so they did extensive calculations to satisfy themselves that it was in fact safe to test the nuclear weapon. If you had said to them, before the first bomb had been built, "this worry is based on a fictional thing which doesn't exist yet" they would have looked at you like you were crazy. Obviously, your line of argument doesn't make sense when you know how to build the thing and you are about to build the thing. I submit that it also doesn't make sense when people don't know how to build the thing and probably aren't immediately about to build the thing, but might actually build the thing in 2-5 years time!

The Flying Spaghetti Monster exists to shift the burden of proof and effort in a debate.

I am happy to cite chapter and verse for you for why you're wrong, but if you're going to reject our arguments out of hand we're not going to have a very productive conversation.

Doing everyone’s banking isn’t ‘general intelligence’

No, it isn't - because banking, scintillating as it may be, is not a general task, it's a narrow domain - like chess, but not quite as narrow. Also, we still have human bankers to do higher-level tasks, it's just that the basic operations of sending money from person A to person B have largely been automated.

This is the kind of basic misunderstanding that would have been avoided by more familiarity with the literature.

This is obviously leaving aside the MASSIVE issue that computers don’t ‘want’ anything.

Generally this is true in the present day; however, goal-driven, optimising AIs would - see above. Even leaving aside the contentious arguments about convergent instrumental goals I recited above, if I've given you a goal of building a new iPhone factory on an island, and then someone proposes blowing up that entire island, you're not going to want that to happen (quite apart from any humanitarian concern you may have for the present inhabitants of that island), and neither is an AI with such a goal. OK, you might be willing to compromise on the location for the factory after consulting your boss, but an AI with such a final goal is not going to be willing to - see above re goal immutability.

The idea that more intelligence creates sentience seems disproven by biology

I agree - but I don't see how this helps your case re existential risk. Indeed, non-sentient AIs might be more dangerous, as they would be unable to empathise with humans and therefore it would be easier for them to behave in psychopathic ways. I think you would benefit from seeing Yudkowsky et al's arguments as supposing that unaligned AIs are "psychopathic" - which seems like a reasonable inference to me - he'd probably argue that the space of possibilities for viable AIs is almost entirely populated by psychopathic ones, from a human point of view.

Muggle: “Did I just fail the Turing test?”

The Turing Test is not a test for humans at all, it's a test for AIs. Moreover, were a human to "take" it and "fail", this wouldn't prove anything - as your example shows.

Secondly, it was passed decades ago depending on what you measure.

The Loebner Prize people have claimed that it has already been passed by simple pre-GPT chatbots, but they're wrong. For the purposes of this discussion, the relevant distinction is that no AIs can yet quite manage to think like an intelligent human in all circumstances, and that's what the Turing Test was intended to measure. But, as noted above, GPT-4 has been argued to be getting close to this point.

why are we measuring computers by human standards?

Because we want to know when we should be really worried - both from a "who is going to lose their job?" point of view, and for us doomers, an existential risk point of view as well. The reason why doomers like me find this question relevant is because we believe there is a risk that when AGI is created, it will be able to recursively self-improve itself up to an artificial superintelligence, perhaps in a matter of weeks or months. Though more likely substantial hardware advancement would be required, which I guess would mean years or decades instead. And artificial superintelligence would be really scary because it could be almost impossible to control - again, given certain debatable assumptions, like that it could cross over into other datacentres, or bribe or threaten people to let it do so.

But remember, we are talking about AI risks here, not AI certainties. The fact that some of these assumptions might not hold true is not actually much comfort if we think that they have, say, a 90% chance of coming to pass.

The idea of a ‘singularity,’ of technological growth so exponentially fast it basically happens in an instant, is historically ignorant; that’s just not how things work. 

I agree with you on this, and this is where I part company with Yudkowsky. However, I don't think this belief is essential to AI doomerism - it just dictates whether we're going to have some period of time to figure out how to stop an unaligned AI (my view) or no time at all (Yudkowsky's view). But that may not be terribly relevant in the final analysis - because, as I already discussed previously, it may not be possible to stop an unaligned ASI once it's been created and switched on and escaped from any "box" it may have been contained in, even if we had infinite time available to us.

And it's worth noting that Ray Kurzweil didn't mean the definition you gave by the Singularity - he just meant a point where progress is so fast it's impossible to predict what will happen in detail before it starts.

This idea of sci-fi predictive powers crops up again and again in doomer thinking. It’s core to the belief about how computers will become unstoppable and it’s core to their certainty that they’re right. 

We already have uncensorable, untrackable computer networks like Tor. We already have uncensorable, stochastically untrackable cryptocurrency networks like Monero. We have already seen computer viruses (worms) that spread in an uncontrolled manner around the internet given widespread security vulnerabilities that they can be programmed to take advantage of - and there are still plenty of those. We already have drones that could be used to attack people. Put all these together... maybe we could be dealing with a hard-to-control AI "infestation" that is trying to use drones or robots controlled over the internet to take out people and ultimately try to take over the world. The AI doesn't even have to replicate itself around the internet to every computer, it can just put simple "slave" processes in regular computers, creating a botnet under its exclusive control, and then replicate itself a few times - as long as it can keep hopping from datacentre to datacentre and it can keep the number of instances of itself above zero at any one time, it survives, and as long as it has some kind of connection to the internet, even just the ability to make DNS queries, it might in principle be able to control its "slave processes" and take action in the world even as we try desperately to shut it down.

Hypothetical thinking is core to what it means to be human! It separates us from simpler creatures! It's what higher intelligence is all about! Just because this is all hypothetical, doesn't mean it can't happen!

We're not "certain" that we're right in the faith-based way that religious people are certain that they're right about God existing - we're highly confident that we're right to be concerned about existential risk because of our rough-and-ready assessment of the probabilities involved, and the fact that not all of our arguments are essential to our conclusion (even if nanobots won't kill us we might still be killed by some other technique once the AI has automated its entire supply chain, etc.)

With existential risk, even a 1% risk of destroying the human species is something we should worry about - obviously, given a realistic path from here to there which explains how that could happen.

Why would the aliens put all their resources into weapons, rather than say into entertainment?

You're effectively asking why the AIs would not choose to entertain themselves instead of fighting with us.

Present-day computers have no need to entertain themselves, and I see no reason why future AI systems would be any different. Effective altruists, like other human beings, are best advised to have fun sometimes, as our bodies and minds get tired and need to unwind, but probably AIs and robots will face no such constraints.

As for fighting... or, as Eliezer would have it, taking us all out in one fell swoop...

why would the aliens want our resources if they have unlimited resources themselves?

You're effectively asking why the AIs would want our resources (e.g. the atoms in our bodies) if they have unlimited resources themselves. Well, this is kind of conflating two different things. I'm pretty sure an ASI could figure out how to generate enough cheap energy for all its needs, because we're quite close to doing that ourselves as it is (nuclear fusion is 30 years away, hehe). But obviously an ASI wouldn't have unlimited atoms, or unlimited space on Earth. Our bodies would contain atoms that it could use for something else, potentially, and we'd be taking up space that it could use for something else, potentially.

Nobody needs that many iPhones.

Yes, but the AI doesn't know this unless you tell it - that's the point of this wildly popular educational game about AI doom, which in turn was based on a famous thought experiment by Bostrom and/or Yudkowsky. I mean, the AI may know it, but even if it knows that on some level, if some idiot has given it a goal to simply maximise the production of iPhones, it's not going to stop when everyone on Earth has one and a spare. Because as I've just stated it, its goal doesn't say anything about stopping, or what's enough.

And while you may think that would be easy enough to fix, there are so many other ways that an AI can be misaligned, it's depressing. For example, suppose you set your AI humanoid robot a goal of cooking you and your child dinner, and you remember to tell it what counts as enough dinner, and you remember to tell it not to kill you. Oops, you forgot to mention not to kill your child! Rather than walking around your infant that happens to be crawling around on the floor, it treads on it, killing it, because that's a more efficient route to the kitchen cupboard to get an ingredient it needs to cook dinner.

Doomer: “This sounds like Pascal’s mugging.”

Muggle: “Now you’re getting it.”

In the context of climate change, are predictions about climate change decades in the future similarly presupposing "things that are fictional",

So no, climate change is something that seems similar but is only superficially so. As I understand it, we now have the historic data that temperatures are rising and the historic data that this could mean many bad things. No computers are currently running around killing people of their own free will.

I think everyone in this debate would agree that it is harder to predict what AGIs (artificial general intelligences) and ASIs might do and how they might think and behave, than it is to make scientifically-justified climate models,

I would very much disagree with this. All the historic data shows that computers can be easily controlled, that the risk of death is very low (self-driving cars are safer than human-driven cars, for example) and that they make our lives easier. The effects of climate change range from the very bad to the good.

I suppose that may be true, but if your view is that we definitely won't lose control of computers at all, ever, that is quite a hard claim to defend.

Historically there is not one example of a computer doing anything other than what it was programmed to do. This is like arguing that aliens will turn up tomorrow. There is no evidence.

password to control the robot is lost forever

The robot is still simply doing what it was programmed to do. I agree that terrorists getting their hands on super weapons, including AI-powered ones (for example using AI to create new viruses), is extremely dangerous. But that is not a sci-fi scenario; our enemies getting hold of weapons we’ve created is common in history.

An AI that is trying to fulfil a simply-stated goal like "maximise iPhone production" would want to keep itself in existence and running, because if it no longer exists, its goal is perhaps less likely to be fulfilled

So this is a common argument that doesn’t make sense economically or from a safety viewpoint. In order for an iPhone factory to be able to prevent itself from being turned off, what capabilities would it require? Well, it would presumably need some way to stop us humans from cutting its cables. I’d presume therefore that it would need autonomous armed guards. To prevent airstrikes on the factory maybe it would need an anti-aircraft battery. But neither of those things is required for an iPhone factory. If you’ve programmed an iPhone factory with the capability to refuse to be turned off and given it armed robot drones and AA guns then you’re an idiot. We already have iPhone factories that work just fine without any of those things. It doesn’t make sense from an economic resource utilisation point of view to upgrade them with dangerous stuff they don’t need.

I’ve heard similar arguments about “What if the AI fires off all the nukes?” Don’t give a complex algorithm control of the nukes in the first place!

A simpler scenario that might help understanding is the election system. Tom Scott had a great video on this. Why is election security so much more contentious in America than Britain? Because Americans are too lazy to do hand counting and use all sorts of computer systems instead. These systems are more hackable than the paper and pen and hand counting we use in the UK. But the important thing to understand here is that none of these scenarios are the fault of any ‘super-intelligence’ but rather of typical human super-stupidity.

I submit that it also doesn't make sense when people don't know how to build the thing and probably aren't immediately about to build the thing, but might actually build the thing in 2-5 years time!

I disagree and it’s something I find rather cringe about the whole ‘AI alignment’ field. For one thing, something isn’t useful or profitable until it’s safe. For instance we talk often about having ‘self driving’ cars in the future. But we’ve had self driving cars from the very beginning! I can go out to my ole gas guzzler right now, put a brick on the accelerator and it will drive itself into a wall. What we actually mean by ‘self driving cars’ is ‘cars that can drive themselves safely.’ THIS is what Tesla, Apple and Google are all working on. If you set up an outside organisation to ‘make sure AI self driving cars were safe’ people would think you were crackers, because who would drive in an unsafe self driving car? Unsafe AI in 90%+ of cases will simply not be economically viable, because why would you use something that’s unsafe when you already have the existing whatever-it-is that does the same thing safely (just slower or whatever)?

No, it isn't - because banking, scintillating as it may be, is not a general task, it's a narrow domain - like chess, but not quite as narrow.

Everything is a narrow domain. No I will not explain further lol.

why are we measuring computers by human standards? Because we want to know when we should be really worried - both from a "who is going to lose their job?" point of view, and for us doomers, an existential risk point of view as well.

Anthropomorphising

We already have uncensorable, untrackable computer networks like Tor. We already have uncensorable, stochastically untrackable cryptocurrency networks like Monero

Why does the existence of these secure networks make you more worried about AI and not less?

We have already seen computer viruses (worms) that spread in an uncontrolled manner around the internet given widespread security vulnerabilities that they can be programmed to take advantage of - and there are still plenty of those

I haven’t had a computer virus in years. I’m sure AIs will create viruses and businesses will use AI to create ways to stop them. My money is on the side with more money, which is the commercial and government side, not the leet hackers.

A super AI virus released by China or terrorists is a realistic concern. A virus that creates itself of its own will is not.

You're effectively asking why the AIs would not choose to entertain themselves instead of fighting with us.

No, I’m actually asking why we humans would allow all our resources to go into computers instead of into things we want.

We’re not going to allow AIs to mine the Moon to make themselves more powerful, for instance; if we have that capability we’ll have them mine it to make space habitats instead.

Oops, you forgot to mention not to kill your child!

Again this is human stupidity NOT AI super intelligence. And this is the real risk of AI!

We can go back to the man that killed himself because the chatbot told him to. There were two humans being stupid there. First, the designers of the app, who made a chatbot that was designed to be an agreeable friend. But they were so stupid they forgot to ask themselves ‘What if it agrees with someone suicidal?’ For all we know they’ve also forgotten to ask themselves ‘What if it agrees with someone who wants to do an act of terrorism?’ They should have foreseen this but they didn’t, because we’re stupid monkeys.

Then there is the man himself, who instead of going to a human with his issues went to a frigging chatbot which gave him advice no human would ever give him. He also seems to have on some level believed the chatbot was real or sentient and that influenced his behaviour. He’s also given waaay too much credence to an algorithm designed simply to agree with him.

Now ask yourself, who would have foreseen this situation? Eliezer Yudkowsky, who believes he is super intelligent, believes AIs will be even more super intelligent and anthropomorphises them constantly? I could absolutely see Eliezer killing himself because a chatbot told him to.

Or me, who believes AIs are stupid, humans are stupid, and thinking AIs are alive is really stupid?

Let’s go back to Wuhan... Was the real problem that humans were behaving as gods and we were eaten by our own superior creations? No! It’s that we’re stupid monkeys who were too lazy to close the laboratory door!

One of the main stupid things we are doing is anthropomorphising these things. This leads humans to think the computers are capable of things that they aren’t.

The fear this provokes is probably not that dangerous but the trust it engenders is very dangerous.

That trust will lead to people putting them in charge of the nukes, or people following the advice of a chatbot created for ISIS or astrologers.

Great discussion! I appreciate your post, it helped me form a more nuanced view of AI risk rather than subscribing to full-on doomerism.

I would, however, like to comment on your statement - "this is human stupidity NOT AI super intelligence. And this is the real risk of AI!"

I agree with this assessment, moreover, it seems to me that this "human stupidity" problem of our inability to design sufficiently good goals for AI is what the Alignment field is trying to solve. 

It is true that no computer program has its own will. And there is no reason to believe that some future superintelligent program will suddenly stop following its programming instructions. However, given our current models that optimize for a vague goal (like in the example below), we need to develop smart solutions to encode our "true intentions" correctly into these models. 

I think it's best explained with an example: GPT-based chatbots are simply trained to predict the next word in a sentence, and it is not clear at a technical level how we can modify such a simple and specific goal of next word prediction to also include broad, complex instructions like "don't agree with someone suicidal". Current alignment methods like RLHF help to some extent, but there are no existing methods that guarantee, for example, that a model will never agree with someone's suicidal thoughts. Such a lack of guarantees and control in our current training algorithms, and therefore our models, is problematic. And it seems to me this is the problem that alignment research tries to solve. 
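To make the "simple and specific goal" point concrete, here is a minimal sketch, assuming PyTorch, of the next-token prediction objective described above. The function and tensor names are illustrative placeholders, not any particular model's actual training code.

```python
import torch
import torch.nn.functional as F

def next_token_loss(logits: torch.Tensor, token_ids: torch.Tensor) -> torch.Tensor:
    # logits: (batch, seq_len, vocab_size) -- the model's predicted distribution at each position
    # token_ids: (batch, seq_len)          -- the actual training text, as token indices
    preds = logits[:, :-1, :].reshape(-1, logits.size(-1))  # predictions for positions 0..n-2
    targets = token_ids[:, 1:].reshape(-1)                  # the token that actually comes next
    # Nothing in this objective encodes "don't agree with someone suicidal";
    # it only rewards assigning high probability to the next token of the text.
    return F.cross_entropy(preds, targets)
```

The whole alignment question, as described above, is how to get broad behavioural instructions out of a system whose training signal is this narrow.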

The idea of 'alignment' presupposes that you cannot control the computer and that it has its own will, so you need to 'align' it, i.e. incentivise it. But this isn't the case; we can control them.

It's true that machine learning AIs can create their own instructions and perform tasks; however, we still maintain overall control. We can constrain both inputs and outputs. We can nest the 'intelligent' machine learning part of the system within constraints that prevent unwanted outcomes. For instance, ask an AI a question about feeling suicidal now and you'll probably get an answer that's been written by a human. That's what I got last time I checked, and the conversation was abruptly ended.
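As a rough illustration of that "nest the model inside constraints" idea, here is a minimal sketch in Python. The keyword list, canned response and `model_reply` callable are hypothetical stand-ins (real deployments use trained safety classifiers rather than keyword matching), so treat it as a toy, not a description of how any actual product works.

```python
from typing import Callable

# Hypothetical stand-ins for illustration only.
CRISIS_KEYWORDS = {"suicide", "suicidal", "kill myself"}
CANNED_RESPONSE = (
    "It sounds like you're going through a difficult time. "
    "Please reach out to a crisis helpline or someone you trust."
)

def guarded_reply(user_message: str, model_reply: Callable[[str], str]) -> str:
    """Wrap the 'intelligent' model call with input-side and output-side constraints."""
    if any(word in user_message.lower() for word in CRISIS_KEYWORDS):
        return CANNED_RESPONSE              # input-side constraint: model never consulted
    reply = model_reply(user_message)       # the unconstrained machine-learning part
    if any(word in reply.lower() for word in CRISIS_KEYWORDS):
        return CANNED_RESPONSE              # output-side constraint: override the model
    return reply
```

The point of the sketch is simply that the human-written wrapper, not the learned model, gets the final say on the flagged topics.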

Low effort post is low effort? I don't think the religion/cult analogy is worth much. At least from a personal perspective, there's a lot of things I'd rather be concerned over than AI risk (environmental degradation! international politics!), and - since the AI doom argument just makes sense to me - I don't get to spend much time worried about anything else. 

More than anything else, my emotional reaction these days is to be somewhat annoyed at the people driving things forward, since I just see it as a social coordination problem that we're not doing too well on at the moment.
