I fully support a pause, however that is enacted, until we find a way to ensure safety.
I think part of the reason so many people do not consider a pause not only reasonable but self-evidently the right thing to do relates to the specific experience of many of the people on this forum.
A lot of people engaging in this debate naturally come from an AI or tech background, or they've followed the fortunes of Facebook, Amazon, and Microsoft from a distance and seen how those companies have been allowed to do pretty much whatever they want. Any proposal to limit tech innovation may seem shocking, because tech has had an almost regulation-free ride until now. Other industries in the public eye, such as banks and investment firms, have paid off enough people in Congress to eliminate most of their regulations too.
But this is very much NOT the norm.
If you look at, say, the S&P 500, you'll see maybe 30 tech companies, banks, and a few others that face very little regulation - but many more companies that are used to being strictly regulated.
Pharma companies are used to discovering miracle drugs but still having to go through a decade or more (literally!) of safety testing before they can make them available to the public - and even then they need FDA audits to prove that they are producing exactly what they said, in the way they said they would. Any change can take another few years to get approved.
Engineers and architects know that every major design they create must be reviewed by numerous bodies that effectively have the right to deny approval - and the burden of proof is always on those who want to go ahead.
If you try to get a new chemical approved for use in food, the process is so long and costly that most companies don't even bother trying.
This is how the world works. There is a mentality among tech people that they somehow have the right to innovate and put out products with no restrictions, as if this were everyone's god-given right. But it's not.
So maybe people within tech have a can't-do attitude (as Katja Grace called it) towards a pause, thinking it cannot work. But the world knows how to do pauses: how to define safety criteria and how to make sure they are met before a product is released. Sure, the details for AI will be different from those for pharma, but is AI fundamentally more complex than the interactions of a new, complex chemical with a human body? It isn't obviously so.
The FDA and others have found ways to keep drugs safe while still allowing phenomenal progress. It is frustrating as hell in the short term, but in the long run it works best for everyone: when you buy a drug, it is highly unlikely to harm you in unexpected ways, and any harm it might do has typically been analysed and communicated to the medical community, so that you and your doctor know what the risks are.
It feels wrong for the AI community to believe that it deserves to be free of regulation when the risks are even greater than those from pharma. And it feels like a can't-do attitude for us to believe that a pause cannot happen or cannot be effective.
Executive summary: The author argues in favor of an international moratorium on developing artificially intelligent systems until they can be proven safe, responding to common objections.
Key points:
A moratorium would require AI systems to undergo safety reviews before release, not ban AI entirely. It could fail in various ways but would likely still slow dangerous AI proliferation.
Failure may not make things much worse - existing initiatives could continue and treaties can be amended. Doing nothing risks an AI arms race.
Success will not necessarily lead to dictatorship or permanently halt progress. Safe systems would be allowed and treaties can evolve if no longer relevant.
The benefits of AI do not justify rushing development without appropriate safeguards against existential risks.
The evidence for AI risk is not yet definitive but negotiating safety mechanisms takes time, so discussions should begin before it is too late.
Differences are largely predictive, not values-based - optimism versus pessimism about easy alignment. Evidence may lead to agreement over time with open-mindedness.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
I don't really know what you mean by this, but I never said treaties are permanent. Can you please not strawman me?
I apologize that my intent here was unclear - I edited it to say "the treaties [being discussed] aren't permanent," which I thought was clear from context.