Bengio and Hinton are the two most-cited researchers alive. Ilya Sutskever is the 3rd most cited AI researcher, and though he's not on that paper, the superalignment intro blog post from OpenAI says this, "Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue." LeCun is probably the top AI researcher who's not worried about controlling a superintelligence (4th in total citations after Sutskever).
This is obviously a semantics disagreement, but I stand by the original claim. Note that I'm not saying that all the top AI researchers are worried about x-risk.
Regarding your overall point, it doesn't rebut the idea that some people have been cynically exploiting AI fears for their own gain. Remember that OpenAI was founded as an AI safety organisation. Sam Altman's actions seem entirely consistent with someone hyping x-risk to get funding and support for OpenAI, then pivoting to downplaying risk as soon as ditching safety became more profitable. I doubt this applies to all or even most people, but it does seem to have happened at least once.
I largely agree with this and alluded to this possibility here:
If AI companies ever needed to rely on doomsday fears to lure investors and engineers, they definitely don’t anymore.
I might write a separate piece on the best evidence for the hype argument, of which I think OpenAI has been the biggest winner. My guess is that Altman actually did believe what he was saying about AI risk back in 2015. Superintelligence came out the year before, and it's not a surprising view for him to have had given what else we know about him.
I'd also guess that Altman and Elon are two of the people most associated with the x-risk story, which has been the biggest driver of skepticism about it.
There's also been more recent evidence of him ditching x-risk fears now that it seems convenient. From a recent Fox News interview:
Interviewer: “A lot of people who don’t understand AI, and I would put myself in that category, have got a basic understanding, but they worry about AI becoming sentient, about it making autonomous decisions, about it telling humans you’re no longer in charge?”
Altman: “It doesn’t seem to me to be where things are heading…is it conscious or not will not be the right question, it will be how complex of a task can it do on its own?”
Interviewer: “What about when the tool gets smarter than we are? Or the tool decides to take over?”
Altman: “I think tools in many senses are already smarter than we are. I think that the internet is smarter than you or I, the internet knows a lot of things. In fact, society itself is vastly smarter and more capable than any one person. I think we’re already good at working with tools, institutions, structures, whatever you want to call it, that are vastly more capable than one person, and as long as we have a reasonably level playing field where no one person or one company has vastly more power than anybody else, I think we know how to deal with that.”
Hsu clarified his position on my thread here:
"Clarifications:
1. The mafia tendencies (careerist groups working together out of self-interest and not to advance science itself) are present in the West as well these days. In fact the term was first used in this way by Italian academics.
2. They're not against big breakthroughs in PRC, esp. obvious ones. The bureaucracy bases promotions, raises, etc. on metrics like publications in top journals, citations, ... However there are very obvious wins that they will go after in a coordinated way - including AI, semiconductors, new energy tech, etc.
3. I could be described as a China hawk in that I've been pointing to a US-China competition as unavoidable for over a decade. But I think I have more realistic views about what is happening in PRC than most China hawks. I also try to focus on simple descriptive analysis rather than getting distracted by normative midwit stuff.
4. There is coordinated planning btw govt and industry in PRC to stay at the frontier in AI/AGI/ASI. They are less susceptible to "visionaries" (ie grifters) so you'll find fewer doomers or singularitarians, etc. Certainly not in the top govt positions. The quiet confidence I mentioned extends to AI, not just semiconductors and other key technologies."
Pasted from LW:
Hey Seth, appreciate the detailed engagement. I don't think the 2017 report is the best way to understand China's intentions WRT AI, but there was nothing in the report to support Helberg's claim to Reuters. I also cite multiple other sources discussing more recent developments (with the caveat in the piece that they should be taken with a grain of salt). I think the fact that this commission was not able to find evidence for the "China is racing to AGI" claim is actually pretty convincing evidence in itself. I'm very interested in better understanding China's intentions here and plan to deep dive into it over the next few months, but I didn't want to wait until I could exhaustively search for the evidence the report should have offered while an extremely dangerous and unsupported narrative takes off.
I also really don't get the error pushback. These really were less technical errors than basic factual errors and incoherent statements. They speak to a sloppiness that should affect how seriously the report is taken. I'm not one to gatekeep AI expertise, but I don't think it's too much to expect a congressional commission whose top recommendation is to commence a militaristic AI arms race to have SOMEONE read a draft who knows that chatgpt-3 isn't a thing.
Yeah, I got some pushback on Twitter on this point. I now agree that it's not a great analogy. My thinking was that we technically know how to build a quantum computer, but not one that is economically viable (which requires technical problems to be solved and for the thing to be scalable/not too expensive). Feels like an "all squares are rectangles, but not all rectangles are squares" thing. Like quantum computing ISN'T economically viable, but that's not the main problem with it right now.
BTW, this link (Buzan, Wæver and de Wilde, 1998) goes to a PaperPile citation that's not publicly accessible.
Thanks Sarah!