
Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.

This week we’re looking closely at AI legislative efforts in the United States, including:

  • Senator Schumer’s AI Insight Forum
  • The Blumenthal-Hawley framework for AI governance
  • Agencies proposed to govern digital platforms
  • State and local laws against AI surveillance
  • The National AI Research Resource (NAIRR)

Subscribe here to receive future versions.


Senator Schumer’s AI Insight Forum

The CEOs of more than a dozen major AI companies gathered in Washington on Wednesday for a hearing with the Senate. Organized by Democratic Majority Leader Chuck Schumer and a bipartisan group of Senators, this was the first of many planned hearings in their AI Insight Forum.

After the hearing, Senator Schumer said, “I asked everyone in the room, ‘Is government needed to play a role in regulating AI?’ and every single person raised their hands.” Elon Musk, CEO of xAI, called the hearings “a great service to humanity.” 

Senator Josh Hawley raised concerns that despite the hearings, “nothing is advancing” in terms of legislation. Below, we discuss several AI policy bills that have been introduced in Congress, none of which has come to a vote.

The Blumenthal-Hawley Framework

Senator Hawley recently introduced a framework for AI legislation alongside Senator Richard Blumenthal. The pair lead the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, and have hosted three hearings on AI policy over the last five months.

The Blumenthal-Hawley framework recommends:

  • Licensing. General purpose AI systems and AIs used in high-risk situations should be required to obtain a license from an independent oversight body. There should be strong rules against conflicts of interest to prevent regulatory capture. 
  • Legal Liability. Congress should clarify that Section 230 does not apply to AI (Sens. Blumenthal and Hawley have introduced a bill that would accomplish this), and hold AI companies liable for emerging harms such as deepfakes and election interference. 
  • Transparency. Companies should be required to publicly report details about their training data, and label AI outputs with easily identifiable watermarks. The federal AI oversight body should maintain a database of AI harms. 
  • Human in the loop. Users should be notified when they are interacting with an AI system, and should have the right to a human review of AI decisions. 
  • AI Proliferation. To keep rogue actors and adversary nations from obtaining frontier AI systems, the US should utilize export controls, sanctions, and other restrictions.

This framework was endorsed by Microsoft President Brad Smith in a hearing last week. In California, state Senator Scott Wiener introduced a similar bill of intent for state legislation on AI. The details of both proposals still need to be fleshed out into concrete policies for AI governance.

Agencies Proposed to Govern Digital Platforms

Whereas AI policy discussions often remain conceptual, there are many concrete legislative proposals to regulate digital platforms, including social media. These bills could affect AI developers and offer lessons for those crafting AI legislation.

In June, Democratic Senators Michael Bennet and Peter Welch introduced a bill that would create a federal agency to govern digital platforms. The agency would have “a broad mandate to promote the public interest” via methods including rules, civil penalties, hearings, investigations, and research. In July, Senators Elizabeth Warren and Lindsey Graham introduced a similar bill that would require digital platforms to obtain a federal license each year.

These bills would likely apply to AI companies, but they omit many AI-specific policies. For example, they do not require companies to disclose their training data, nor do they mandate red-teaming and evaluations before the release of new AI models. The new federal agency could write rules about AI systems on its own, but amending the bills to address AI-specific concerns would give it a clearer mandate.

Despite public support for regulating AI and social media, there is no guarantee that these bills will become law. Senator Ted Cruz recently came out against AI regulation, and others may follow. Last year, two bills to regulate technology companies enjoyed broad bipartisan support but never came to a vote, after the industry spent $37 million lobbying against them.

Deepfakes and Watermarking Legislation

An AI-specific proposal put forth in several recent bills aims to combat deepfakes and AI-generated misinformation. By using AI to fabricate text, images, videos, and audio, scammers have run fake kidnapping schemes and even stolen $600,000 from a Chinese businessman. More powerful AI systems could soon enable widespread campaigns of misinformation, persuasion, and deception.

These bills would combat AI deepfakes by requiring clearly visible notices on AI-generated content. The DEEP FAKES Accountability Act of 2019 first proposed the idea, and the AI Labeling Act of 2023 proposes a similar requirement, placing primary responsibility for providing labels on AI developers rather than downstream users. The REAL Political Advertisements Act is more narrowly targeted, mandating disclosure of AI-generated content only in political advertisements.

Clearly visible notices of AI-generated content might be intrusive or unappealing to viewers. Another potential solution is watermarking: embedding statistical patterns in AI outputs that typically go unnoticed by consumers but can be picked up by a watermark detection tool. The National Defense Authorization Act for 2024 directed DARPA to hold a prize competition for the development of AI watermarking techniques. A toy sketch of how statistical watermarking can work is shown below.
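To make the idea concrete, here is a minimal sketch of one well-known statistical watermarking scheme, the “green list” approach of Kirchenbauer et al. (2023). This is purely illustrative, not a technique specified by any of the bills above: the toy vocabulary, the `bias` parameter, and the sampling loop are all simplified stand-ins for a real language model.

```python
import hashlib
import math
import random

# Toy "green list" watermark in the style of Kirchenbauer et al. (2023).
# VOCAB plays the role of a language model's vocabulary, and
# generate_watermarked() replaces real model sampling.

VOCAB = [f"tok{i}" for i in range(1000)]
GREEN_FRACTION = 0.5  # fraction of the vocabulary marked "green" at each step

def green_list(prev_token: str) -> set[str]:
    """Deterministically split the vocabulary, seeded by the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * GREEN_FRACTION)))

def generate_watermarked(length: int, bias: float = 0.9) -> list[str]:
    """Sample tokens, choosing from the green list with probability `bias`."""
    tokens = ["tok0"]  # arbitrary fixed start token for the toy example
    for _ in range(length):
        greens = green_list(tokens[-1])
        pool = sorted(greens) if random.random() < bias else VOCAB
        tokens.append(random.choice(pool))
    return tokens

def detect(tokens: list[str]) -> float:
    """Return a z-score: how far the green-token count exceeds chance."""
    n = len(tokens) - 1
    hits = sum(tokens[i + 1] in green_list(tokens[i]) for i in range(n))
    expected = n * GREEN_FRACTION
    stddev = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (hits - expected) / stddev

watermarked = generate_watermarked(200)
unmarked = ["tok0"] + random.choices(VOCAB, k=200)
print(f"watermarked z-score: {detect(watermarked):.1f}")  # large, e.g. > 10
print(f"unmarked z-score:    {detect(unmarked):.1f}")     # near 0
```

Note that the detector needs no access to the model itself, only the generated tokens and the shared hashing scheme, which is one reason this family of approaches is attractive for third-party verification.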

State and Local Laws Against AI Surveillance

While federal laws tend to get the most attention, states and local governments are also playing a role in AI regulation, particularly in restricting AI surveillance.

The states of Alabama, Colorado, and Washington, along with the city of Baltimore, Maryland, have passed laws restricting police use of facial recognition technology. Some ban the technology outright, while others require a search warrant and prohibit its use in sensitive locations such as schools.

These state and local initiatives highlight concerns about civil liberties and the willingness of states and cities to check the unbridled use of AI by law enforcement agencies.

National AI Research Resource (NAIRR)

The CREATE AI Act is a bipartisan bill that has been developed with strong support over the last few years. It would establish a National AI Research Resource (NAIRR) to provide compute and data to AI researchers outside of industry.

The bill states that priority access will be given to research projects focused on “privacy, ethics, civil rights and civil liberties, safety, security, risk mitigation, and trustworthiness.” Today, only 2% of AI research focuses on key topics relevant to safety. By targeting these neglected topics, NAIRR could substantially boost the amount of AI safety research.

Creating “testbeds” for evaluating AI systems is another statutory requirement for NAIRR, which would be tasked with working with NIST to “develop a comprehensive catalog of open AI testbeds.” These could include evaluations of extreme risks such as dangerous capabilities and misalignment. Perhaps a future federal regulator could also benefit from access to a suite of AI evaluation testbeds.

Decisions about which projects to support would be made by federal agencies, as well as an “operating entity” – likely a university or federally funded research and development center – chosen to run the day-to-day operations of the cluster. The work would be overseen by the National Science Foundation and expert advisory committees. 

Links

  • The President of the European Commission says that “mitigating the risk of extinction from AI should be a global priority.”
  • The TIME100 list on AI included Dan Hendrycks, Director of the Center for AI Safety, alongside others including Sam Altman, Eric Schmidt, and Ilya Sutskever. 
  • AI scientist Richard Sutton alarmingly says that AIs “could displace us from existence” and that “we should not resist succession.”
  • Chinese companies have designed and manufactured a new 7nm computer chip. 
  • China spreads AI-generated images of the fires in Hawaii, falsely claiming they were caused by an American “weather weapon.” 
  • Israeli Prime Minister Benjamin Netanyahu holds a roundtable discussion on AI with Elon Musk, Max Tegmark, and Greg Brockman. 
  • The Atlantic interviews Sam Altman on his work at OpenAI. 
  • An autonomous AI system defeated human pilots in drone racing.
  • Less than 15% of AI grants from the National Science Foundation support trustworthy AI topics including robustness, interpretability, and fairness, finds a recent analysis. 

See also: CAIS website, CAIS twitter, A technical safety research newsletter, An Overview of Catastrophic AI Risks, and our feedback form

Subscribe here to receive future versions.
