The NYT just released a breaking news piece regarding an agreement on AI safeguards. It's hard to tell exactly how useful the proposed measures will be, but it seems like a promising step.

Seven leading A.I. companies in the United States have agreed to voluntary safeguards on the technology’s development, the White House announced on Friday, pledging to strive for safety, security and trust even as they compete over the potential of artificial intelligence.

The seven companies — Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI — will formally announce their commitment to the new standards at a meeting with President Biden at the White House on Friday afternoon.

 

The voluntary safeguards announced on Friday are only an early step as Washington and governments across the world put in place legal and regulatory frameworks for the development of artificial intelligence. White House officials said the administration was working on an executive order that would go further than Friday’s announcement and supported the development of bipartisan legislation.

 

As part of the agreement, the companies agreed to:

  • Conducting security testing of their A.I. products, in part by independent experts, and sharing information about their products with governments and others who are attempting to manage the risks of the technology.
  • Ensuring that consumers are able to spot A.I.-generated material by implementing watermarks or other means of identifying generated content.
  • Publicly reporting the capabilities and limitations of their systems on a regular basis, including security risks and evidence of bias.
  • Deploying advanced artificial intelligence tools to tackle society’s biggest challenges, like curing cancer and combating climate change.
  • Conducting research on the risks of bias, discrimination and invasion of privacy from the spread of A.I. tools.
     

Comments

It's also very much worth reading the linked pdf, which goes into more detail than the fact sheet.

Pulling out highlights from the PDF of the voluntary commitments that the AI companies agreed to:

The following is a list of commitments that companies are making to promote the safe, secure, and transparent development and use of AI technology. These voluntary commitments are consistent with existing laws and regulations, and designed to advance a generative AI legal and policy regime. Companies intend these voluntary commitments to remain in effect until regulations covering substantially the same issues come into force. Individual companies may make additional commitments beyond those included here.

Scope: Where commitments mention particular models, they apply only to generative models that are overall more powerful than the current industry frontier (e.g. models that are overall more powerful than any currently released models, including GPT-4, Claude 2, PaLM 2, Titan and, in the case of image generation, DALL-E 2).

 

1) Commit to internal and external red-teaming of models or systems in areas including misuse, societal risks, and national security concerns, such as bio, cyber, and other safety areas.

2) Work toward information sharing among companies and governments regarding trust and safety risks, dangerous or emergent capabilities, and attempts to circumvent safeguards.

3) Invest in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights.

4) Incent third-party discovery and reporting of issues and vulnerabilities.

5) Develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated, including robust provenance, watermarking, or both, for AI-generated audio or visual content.

6) Publicly report model or system capabilities, limitations, and domains of appropriate and inappropriate use, including discussion of societal risks, such as effects on fairness and bias.

7) Prioritize research on societal risks posed by AI systems, including on avoiding harmful bias and discrimination, and protecting privacy.

8) Develop and deploy frontier AI systems to help address society’s greatest challenges.

I'd be very curious whether there are historical case studies of how well private corporations stuck to voluntary commitments they made, and how long it took for more binding regulation to replace those commitments.