Karl von Wendt

I agree that some "doomers" (you may count me as one) are "pessimistic", biased towards a negative outcome. I can't rule out that I'm "overly cautious". However, I'd argue that this is net positive for AI safety, for the same reasons that optimism as I defined it is net positive under different circumstances, as described in my post.

I agree that the word "optimism" can be used in different ways; that's why I gave a definition of how I usually use it. My post was a reaction to Pope and Belrose, but as I stated, it was not about their arguments but generally about being "optimistic" in the way I defined it. Nora Belrose said in a comment on LessWrong that my way of defining optimism is not how they meant it, and as long as I don't analyze their texts, I have to accept that. But I think my definition of optimism fits within the range of common uses of the word (see Wikipedia, for example). All I did was try to point out that this kind of "positive outcome bias" may be beneficial under certain circumstances, but not for thinking about AI safety.

I believe that if Pope and Belrose aim for a truly rational and unbiased stance, the term "AI Optimism" is at least misleading, since it can be understood in the way I have understood it. I hope this post is helpful at least in pointing out that possible misunderstanding.

I don't think there's a huge difference. As long as there are no very strong fact-based arguments for exactly how likely it is that we will be able to control AGI, my definition of "optimists" will end up with a significantly higher probability of things going well. From what I have read, I believe that Belrose and Pope have this basic bias towards "AGI is beneficial" and weigh the upside potential higher than the downside risks. They then present arguments in favor of that position. This is of course just an impression; I can't prove it. In any case, even if they genuinely believe that everything they say is correct, they should still add massive caveats and point out exactly where their arguments are weak or could be questioned. But that is not what a self-declared "optimist" does; instead, they just present their beliefs. That's okay, but it is clearly a sign of optimism the way I define it.

I have thought something similar (without having read about it before), given the large percentage of people who were willing to change their minds. But I think the exact percentage of the shift, if there was one at all, isn't really important. You could say that since there wasn't a major shift towards x-risk, the debate didn't go very well from an x-risk perspective.

Imagine you're telling people that the building you're in is on fire, the alarm didn't go off because of some technical problem, and they should leave the building immediately. If you then have a discussion and afterwards even just a small fraction of people decide to stay in the building, you have "lost" the debate.

In this case, though I was disappointed, I don't think the outcome is "bad", because it is an opportunity to learn. We're just at the beginning of the "battle" over public opinion on AI x-risk, so we should use this opportunity to fine-tune our communications. That's why I wrote the post. There's also this excellent piece by Steven Byrnes about the various arguments.

I think it's perfectly fine to (politely) call bullshit, if you think something is bullshit, as long as you follow it up with arguments as to why you think that (which she did, even if you think the arguments were weak).

That's where we disagree - strong claims ("Two Turing-award winners talk nonsense when they point out the dangerousness of the technology they developed") require strong evidence.

I think calling their opinions "ungrounded speculation" is an entirely valid opinion, although I would personally use the more diplomatic term "insufficiently grounded speculation".

I disagree on that. Whether politely said or not, it disqualifies another's views without any arguments at all. It's like saying "you're talking bullshit". Now, if you do that and then follow up with "because, as I can demonstrate, facts A and B clearly contradict your claim", then that may be okay. But she didn't do that. 

She could have said things like "I don't understand your argument", or "I don't see evidence for claim X", or "I don't believe Y is possible, because ...". Even better would have been to ask "Can you explain to me why you think an AI could become uncontrollable within the next 20 years?" and then respond to the arguments.

I don't think your "heuristic" vs "argument" distinction is sufficiently coherent to be useful. I prefer to think of it all as evidence, and talk about the strength of that evidence.

I agree in principle. However, I still think that there's a difference between a heuristic and a logical conclusion. But not all heuristics are bad arguments: if I get an email from someone who wants to donate $10,000,000 to me, I use the heuristic that this is likely a scam without looking for further evidence. So yes, heuristics can be very helpful. They're just not very reliable in highly unusual situations. In German-language comments, I often read "Sam Altman wants to hype OpenAI by presenting it as potentially dangerous, so this open letter he signed must be hype". That's an example of how a heuristic can be misleading: it ignores, for example, the fact that Yoshua Bengio and Geoffrey Hinton also signed that letter.

You talk about Tegmark citing recent advances in AI as "concrete evidence" that a future AGI will be world domination capable.

No. Tegmark cites this as concrete evidence that a future uncontrollable AGI is possible and that we shouldn't carelessly dismiss this threat. He readily admits that there may be unforeseen obstacles, and so do I.

Who is right? You can't figure that out by the semantics of "heuristics". To get an actual answer, you have to dig into actual research on capabilities and limitations, which was not done by anybody in this debate (mainly because it would have been too technical for a public-facing debate). 

I fully agree.

I definitely disagree with the OP that Mitchell was being "dismissive" for stating her honest belief that near-term AGI is unlikely. This is a completely valid position held by a significant portion of AI researchers. 

I didn't state that. I think Mitchell was "dismissive" (even aggressively so) by calling the view of Tegmark, Bengio, and indirectly Hinton and others "ungrounded speculations". I have no problem with someone stating that AGI is unlikely within a specific timeframe, even if I think that's wrong.

I agree with most of what you wrote about the debate, although I don't think that Mitchell presented any "good" arguments.

intelligent AI would be able to figure out what we wanted

It probably would, but that's not the point of the alignment problem. The problem is that even if it knows what we "really" want, it won't care about it unless we find a way to align it with our values, needs, and wishes, which is a very hard problem (if you doubt that, I recommend watching this introduction). We understand pretty well what chickens, pigs, and cows want, but we still treat them very badly.

Thank you for clarifying your view!

Take the heuristic that Tegmark employed in this debate: that the damage potential of human weapons has increased over time. He talks about how we went from sticks, to guns, to bombs, killing dozens, to hundreds, to millions. 

This is undeniably a heuristic, but it's used to prime people for his later logical arguments as to why AI is also dangerous, like these earlier technologies. 

This is not a heuristic. It would be a heuristic if he had argued, "Because weapons have increased in power over time, we can expect that AI will be even more dangerous in the future." But that's not what he did, if I remember correctly (unfortunately, I don't have access to my notes on the debate right now; I may edit this comment later). However, he may have used this example as priming, which in my opinion is not the same thing.

Mitchell in particular seemed to argue that AI x-risk is unlikely and that talking about it is just "ungrounded speculation" because fears have been overblown in the past, which would count as a heuristic, but I don't think LeCun used it in the same way. I admit, though, that telling them apart isn't easy.

The important point here is not so much whether using historical trends or other unrelated data in arguments is good or bad, it's more whether the argument is built mainly on these. As I see it:

Tegmark and Bengio argued that we need to take x-risk from AI seriously because we can't rule it out. They gave concrete evidence for that, e.g. the fast development of AI capabilities in recent years. Bengio mentioned how that had surprised him, so he had updated his probability. Both admitted that they didn't know with certainty whether AI x-risk was real, but gave it a high enough probability to be concerned. Tegmark explicitly asked for "humbleness": because we don't know, we need to be cautious.

LeCun mainly argued that we don't need to be worried because nobody would be stupid enough to build a dangerous ASI without knowing how to control it. So in principle, he admitted that there would indeed be a risk if there was a probability that someone could be stupid enough to do just that. I think he was closer to Bengio's and Tegmark's viewpoints on this than to Mitchell's.

Mitchell mainly argued that we shouldn't take AI x-risk seriously because a) it is extremely unlikely that we'll be able to build uncontrollable AI in the foreseeable future and b) talking about x-risk is dangerous because it takes away energy from "real" problems. a) was in direct contradiction to what LeCun said. The evidence she provided for it was mainly a heuristic ("people in the 1960s thought we were close to AGI, and it turned out they were wrong, so people are wrong now") and an anthropomorphic view ("computers aren't even alive, they can't make their own decisions"), which I would also count as a heuristic ("humans are the only intelligent species, computers will never be like humans, therefore computers are very unlikely to ever be more intelligent than humans"), though this may be a misrepresentation of her views. In my opinion, she gave no evidence at all to justify her claim that two of the world's leading AI experts (Hinton and Bengio) were doing "ungrounded speculation". b) is irrelevant to the question debated and also a very bad argument IMO.

I admit that I'm biased and my analysis may be clouded by emotions. I'm concerned about the future of my three adult sons, and I think people arguing like LeCun and, even more so, Mitchell are carelessly endangering that future. That is true for your own future as well, of course. 

I agree that we should always be cautious when dismissing another's arguments. I also agree that some pro-x-risk arguments may be heuristics. But I think the distinction is quite important.

A heuristic is a rule of thumb, often based on past experience. If you claim "tomorrow the sun will rise in the east because that's what it has always done", that's a heuristic. If you instead say "Earth is a ball circling the sun while rotating on its axis; reversing this rotation would require enormous forces and is highly unlikely, therefore we can expect that it will look like the sun is rising in the east tomorrow", that's not a heuristic, but a logical argument.

Heuristics work well in situations that are more or less stable, but they can be misleading when the situation is unusual or highly volatile. We're living in very uncommon times, therefore heuristics are not good arguments in a discussion about topics like AI safety.

The claim "people have always been afraid of technology, and it was always unfounded, so there is no need to be afraid of AI x-risk now" is a typical heuristic, and it is highly misleading (and also wrong IMO - the fear of a nuclear war is certainly not unfounded, for instance, and some of the bad consequences of technology, like climate change and environmental destruciton, are only now becoming apparent).

The problem with heuristics is that they are unreliable. They prove nothing. At the same time, they are very easy to understand, and therefore very convincing. They have a high potential for misleading people and clouding their judgements. Therefore, we should avoid and fight them wherever we can, or at least make it transparent that they are just rules of thumb based on past experience, not logical conclusions based on a model of reality.

Edit: I'd be interested to hear what the people who disagree with this think. Seriously. This may be an important discussion.

I think the value of these debates lies in normalising this issue as one that is valid to have a debate about in the public sphere. These debates aren't confined to LessWrong or in-the-know Twitter sniping anymore, and I think that's unironically a good thing. 

I agree.

I think you're a bit too hasty to extrapolate what the 'AI Safety Side' strategy should be. 

That may well be true.

Personally think that Stuart Russell would be a great spokesman for the AI-risk-is-serious side, impeccable credentials, has debated the issue before (see here vs Melanie), and I think his perspective on AI risk lends itself to a "slow-takeoff" framing rather than a "hard-takeoff" framing which Bengio/Hinton/Tegmark etc. seem to be pushing more.

Yes, he definitely would have been a good choice.
