See the event page here.

Hello Californians!

We need you to help us fight for SB 1047, a landmark bill that would set a benchmark for AI safety, decrease existential risk, and promote safety research. The bill has been supported by some of the world's leading AI scientists and the Center for AI Safety, and it is extremely important that we pass it. As Californians, we have a unique opportunity to inspire other states to follow suit.

SB 1047 has a hearing in the Assembly Appropriations Committee scheduled for August 15th. Unfortunately, due to misinformation and lobbying by big tech companies, the bill risks getting watered down or failing to advance. This would be a significant blow to safety and would continue the "race to the bottom" in AI capabilities without any guardrails.

We need you to do the following to save the bill. This will take no more than 5 minutes:

This document has additional information about the bill and other ways to help. [But some of the dates are wrong! This post is up-to-date.]

Please try to get this done as soon as possible, and let us know if you need any help. Your voice matters, and it is urgent that we push this before it’s too late.

Thank you so much for your support!

Comments

I've worked in political offices like these before, and I can confirm that politicians respond to constituent outreach. Most people don't care enough about most issues to call or email their representatives, so if an office gets a flood of communication on one issue, it means something.

If you are a California resident, you should use the contact form the rep provides so you can include your address (to prove you are a constituent and that they should care about your opinion).

The chair of the appropriations committee, Rep. Buffy Wicks, represents Berkeley, and thus actually cares about your opinion if you live in Berkeley. If you care about this bill, you should call her office at (916) 319-2014, email her, and use her contact form here: https://a14.asmdc.org/email-assemblymember-wicks

Another rep on the appropriations committee who is probably relevant to many people here is Rep. Matt Haney, who represents San Francisco. You can call his office at (916) 319-2017, email him at Assemblymember.haney@assembly.ca.gov, or contact him here: https://a17.asmdc.org/contact

If you have friends in these districts who care about this issue, you should tell them to contact their reps as well.

Thanks, guys! Scott Wiener (who's sponsoring SB 1047) discussed the bill with Sam Harris, alongside Yoshua Bengio, and that conversation gives some good context.

If anyone here also would like more context on this, I found @Garrison's reporting from 16 August quite insightful: 

The Tech Industry is the Biggest Blocker to Meaningful AI Safety Regulations

The bill has passed the Appropriations Committee and will now move on to the Assembly floor. Some changes were made to the bill. From the press release:

Removing perjury – Replace criminal penalties for perjury with civil penalties. There are now no criminal penalties in the bill. Opponents had misrepresented this provision, and a civil penalty serves well as a deterrent against lying to the government.

Eliminating the FMD – Remove the proposed new state regulatory body (formerly the Frontier Model Division, or FMD). SB 1047’s enforcement was always done through the AG’s office, and this amendment streamlines the regulatory structure without significantly impacting the ability to hold bad actors accountable. Some of the FMD’s functions have been moved to the existing Government Operations Agency.

Adjusting legal standards – The legal standard under which developers must attest they have fulfilled their commitments under the bill has changed from a "reasonable assurance" standard to a "reasonable care" standard, which is defined under centuries of common law as the care a reasonable person would have taken. We lay out a few elements of reasonable care in AI development, including whether developers consulted NIST standards in establishing their safety plans, and how their safety plans compare to those of other companies in the industry.

New threshold to protect startups' ability to fine-tune open-sourced models – Establishes a threshold to determine which fine-tuned models are covered under SB 1047. Only models fine-tuned at a cost of at least $10 million are now covered. If a model is fine-tuned at a cost of less than $10 million, it is not covered, and the developer doing the fine-tuning has no obligations under the bill. The overwhelming majority of developers fine-tuning open-sourced models will not be covered and will therefore have no obligations under the bill.

Narrowing, but not eliminating, pre-harm enforcement – Cutting the AG’s ability to seek civil penalties unless a harm has occurred or there is an imminent threat to public safety.

I tried to open your email templates Google doc linked above and got a message saying I don't have access rights to view it. Update: I retried the link a bit later and it loaded with no issues.

Huh, it shows me that it's available to anyone with the link. Here it is again in case that helps: https://docs.google.com/document/d/1HiYMG2oeZO8krcCMEHlfAtHGTWuVDjUZQaPU9HMqb_w/edit?usp=sharing
