OpenAI has a new blog post out titled "Governance of superintelligence" (subtitle: "Now is a good time to start thinking about the governance of superintelligence—future AI systems dramatically more capable than even AGI"), by Sam Altman, Greg Brockman, and Ilya Sutskever.
The piece is short (~800 words), so I recommend most people just read it in full.
Here's the introduction/summary:
Given the picture as we see it now, it’s conceivable that within the next ten years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations.
In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past. We can have a dramatically more prosperous future; but we have to manage risk to get there. Given the possibility of existential risk, we can’t just be reactive. Nuclear energy is a commonly used historical example of a technology with this property; synthetic biology is another example.
We must mitigate the risks of today’s AI technology too, but superintelligence will require special treatment and coordination.
And below are a few more quotes that stood out:
"First, we need some degree of coordination among the leading development efforts to ensure that the development of superintelligence occurs in a manner that allows us to both maintain safety and help smooth integration of these systems with society."
"Second, we are likely to eventually need something like an IAEA for superintelligence efforts; any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc."
"It would be important that such an agency focus on reducing existential risk and not issues that should be left to individual countries, such as defining what an AI should be allowed to say."
"Third, we need the technical capability to make a superintelligence safe. This is an open research question that we and others are putting a lot of effort into."
"We think it’s important to allow companies and open-source projects to develop models below a significant capability threshold, without the kind of regulation we describe here"
"By contrast, the systems we are concerned about will have power beyond any technology yet created, and we should be careful not to water down the focus on them by applying similar standards to technology far below this bar."
"we believe it would be unintuitively risky and difficult to stop the creation of superintelligence"
This was hard to read, emotionally.
Some parts are good. I'm confused about why OpenAI uses euphemisms like "carry out as much productive activity as one of today's largest corporations."
Maybe they're concerned that if they instead said things like "and quickly carry out as much scientific and economic advancement as thousands of years of progress at today's rate" then people would just not take it seriously?
(disclosure: gave feedback on the post/work at OAI)
I don't personally love the corporation analogy/don't really lean on it myself but would just note that IMO there is nothing euphemistic going on here-- the authors are just trying one among many possible ways of conveying the gravity of the stakes, which they individually and OAI as a company have done in various ways at different times. It's not 100% clear which are the "correct" ones both accuracy wise and effective communication wise. I mix things up myself depending on the audience/context/my current thinking on the issue at the time, and don't think euphemism is the right way to think about that or this.
This is quite surprising to me. For the record, I don't believe that the authors believe that "carry out as much productive activity as one of today’s largest corporations" is a good--or even reasonable--description of superintelligence or of what's "conceivable . . . within the next ten years."
And I don't follow Sam's or OpenAI's communications closely, but I've recently noticed them seeming to decline to talk about AI as if it's as big a deal as I think they think it is. (Context for those reading this in the future: Sam Altman recently gave congressional testimony which [I think, after briefly engaging with it] was mostly good, but notable in that Sam focused on weak AI and sometimes actively avoided talking about how big a deal AI will be and about x-risk, in a way that felt dishonest.)
(Thanks for engaging.)
(meta note: I don't check the forum super consistently so may miss any replies)
I think there's probably some subtle subtext that I'm missing in your surprise or some other way in which we are coming at this from diff. angles (besides institutional affiliations, or maybe just that), since this doesn't feel out of distribution to me--like, large corporations are super powerful/capable. Saying that "computers" could soon be similarly capable is pretty crazy to most people (I think--I am pretty immersed in AI world, ofc, which is part of the issue I am pointing at re: iteration/uncertainty on optimal comms) and loudly likening something you're building to nuclear weapons does not feel particularly downplay-y to me. In any case, I don't think it's unreasonable for you/others to be skeptical re: industry folks' motivations etc., to be clear--seems good to critically analyze stuff like this since it's important to get right--but just sharing my 2c.
IMHO this is quite an accurate and helpful statement, not a euphemism. I offer this perspective as someone who has worked many years in a corporate research environment - actually, in one of the best corporate research environments out there.
There are three threads to the comment:
Put all that together, and it follows that once we have an AI that can do a specific domain task as well as a human (e.g. design and interpret simulated research into potentially interesting candidate molecules for drugs to fight a given disease), it is almost a no-brainer for a corporation to use AI to massively accelerate its progress.
As AI gets closer to AGI, the domains in which AI can work independently will expand, the need for human involvement will shrink, and the pace of innovation will accelerate. Yes, there will be some limits, like physical testing, where AI will still need humans - but even there, robots already do much of the work, so human involvement is decreasing every day.
It's also important to consider who was saying this: OpenAI. Their message was NOT that AI is bad. What they wanted us to take away was that AI has huge potential for good - like the way it can accelerate the development of medical cures, for example - BUT that it is moving forward so fast, and most people do not realise how fast this can happen, that we (in the know) need to keep pushing the regulators (mostly not experts) to regulate this while we still can.