Note: This post is about AI alignment and policy

In this brief post, I will cover the main ideas of Senate Bill 1047, what it means if it passes, the arguments for and against it, and the surrounding context. Everything points to this bill being a good thing, and if you find this post compelling, you can find contact information for Governor Newsom at the bottom. The bill is currently awaiting his signature, so who's to say a persuasive letter couldn't push him in the right direction?

So first, the bill itself. If you're already familiar with it, you can skip this first section. If you're especially thorough, you can read the full bill here.

If you've somehow avoided reading about this until now, here are the key aspects:

Definitions:

  • Artificial Intelligence (AI): An engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.
  • Covered Models: AI models exceeding specified thresholds in computational power and training cost, indicating significant capability (a minimal sketch of this check follows the list).
  • Critical Harm: Severe negative outcomes facilitated by AI models, such as mass casualties, significant property damage (over $500 million), or grave threats to public safety.
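
To make the covered-model definition concrete, here is a minimal sketch (in Python) of the bill's initial thresholds as I understand them: more than 10^26 floating-point operations of training compute and more than $100 million in training cost. The function name and exact constants are my own illustration, not language from the bill, and the Board of Frontier Models can update the real thresholds over time.

```python
# Hypothetical sketch of SB 1047's initial covered-model thresholds:
# more than 10^26 FLOPs of training compute AND more than $100M in
# training compute cost. Names and numbers are my own illustration;
# the Board of Frontier Models can revise the actual thresholds.

FLOP_THRESHOLD = 1e26             # training compute threshold
COST_THRESHOLD_USD = 100_000_000  # training cost threshold

def is_covered_model(training_flops: float, training_cost_usd: float) -> bool:
    """Return True if a training run exceeds both initial thresholds."""
    return (training_flops > FLOP_THRESHOLD
            and training_cost_usd > COST_THRESHOLD_USD)

# Example: a frontier-scale run vs. a typical academic run.
print(is_covered_model(3e26, 250_000_000))  # True  -> covered model
print(is_covered_model(1e24, 2_000_000))    # False -> not covered
```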

Obligations for Developers of Covered Models:

  • Safety and Security Protocols: Developers must implement written protocols addressing the risks associated with their AI models, including technical and administrative safeguards.
  • Risk Assessments: Before deploying frontier AI models, developers must assess potential risks of causing critical harm and take measures to mitigate them.

Transparency and Reporting:

  • Publish redacted versions of safety protocols and audit reports.
  • Report any AI safety incidents to the Attorney General within 72 hours (a toy deadline calculation follows this list).
  • Conduct annual third-party audits to ensure compliance with safety requirements.[1]
  • Prohibition of Deployment if Risk Exists: Developers are prohibited from deploying AI models if there is an unreasonable risk of causing critical harm.
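
As a toy illustration of the 72-hour reporting window mentioned above, here is a short Python sketch. The function, the timestamps, and the assumption that the clock starts when the developer learns of the incident are all my own illustration; the bill's text governs the actual trigger.

```python
# Hypothetical sketch of the bill's 72-hour incident-reporting window.
from datetime import datetime, timedelta, timezone

REPORTING_WINDOW = timedelta(hours=72)  # deadline stated in the bill

def reporting_deadline(incident_learned_at: datetime) -> datetime:
    """Latest time a developer must report an incident to the Attorney General."""
    return incident_learned_at + REPORTING_WINDOW

# Example: an incident a developer learns of on a Friday afternoon (UTC).
learned = datetime(2024, 9, 20, 14, 30, tzinfo=timezone.utc)
print(reporting_deadline(learned))  # 2024-09-23 14:30:00+00:00
```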

Obligations for Operators of Computing Clusters:

  • Customer Verification: Operators must implement policies to know their customers and assess whether they intend to train covered AI models.
  • Record-Keeping: Maintain records of customer activities and retain the capability to promptly enact a shutdown if necessary.

Whistleblower Protections:

  • Employees are protected when disclosing information about non-compliance or potential risks.
  • Retaliation against whistleblowers is prohibited.[2]

Establishment of the Board of Frontier Models:

  • A nine-member board within the Government Operations Agency to oversee and update regulations related to advanced AI models.
  • Responsible for updating thresholds for what constitutes a covered model and issuing guidance on preventing critical harms.

Creation of CalCompute:

  • Plans for a public cloud computing platform, CalCompute, to support safe, ethical, and equitable AI research and innovation.
  • Intended to expand access to computational resources, particularly for academic researchers and startups.

Enforcement and Penalties:

  • The Attorney General is empowered to enforce the act and impose civil penalties for violations.
  • Penalties include fines, injunctive relief, and damages.[3]

So, at a high level, the broad effects are relatively simple: if this bill passes, AI progress slows somewhat, we assume less risk, and some extra resources become available to the research and open-source communities. If it doesn't pass, we maintain our current rate of progress, and our current level of risk. There is serious debate over which is the right move.

With that context, the bill has its proponents and opponents, and critics within both camps. Some insightful analyses of the bill have been posted on forums (I like the ones from Zvi and Ryan Greenblatt), which provide context and show how the bill has evolved. Some are a little outdated at this point, but I still recommend reading them for the background they provide. Broadly, supporters and opponents of Senate Bill 1047 fall into two schools of thought. Supporters see the risks and recognize the importance of the technology that is coming: they recognize its world-changing potential, for good and for bad, and think we need to start regulating now to mitigate risk. Opponents believe the regulatory burden is not worth it, and that we will benefit more from faster, cheaper development without it. (This generalization ignores those who do see the risks as significant enough to warrant action but oppose the bill anyway, either because they work at a company the new rules would apply to, or because they think the bill poorly addresses its stated purpose.)

So, which side is right?

Well, as with almost anything, there is no definitive right answer. From what I can observe, most people who are well informed and do not have a conflict of interest support the bill (if your experience differs, please say so in the comments). Beyond that, I personally believe the level of risk we carry in the absence of major progress toward alignment justifies the measures proposed in Senate Bill 1047.

AI has been racing forward recently, and many people (including myself) are concerned. Current models and the technology behind them are becoming very powerful, and are very susceptible to espionage, which could put these powerful tools in the hands of bad actors (think China, North Korea, or terrorist groups), especially as models advance. The alignment community has emphasized these dangers for a while, and despite that conversation, we still don't have any level of security or risk mitigation that could reasonably control or secure the kind of AI system we hope to create soon. Leopold Aschenbrenner has emphasized this in multiple writings, especially Situational Awareness: we are far behind on alignment, and it will likely take significant investment and time to reach a point that could be considered "safe" by any reasonable standard.

Aschenbrenner also discusses the strange way the US tends to drift almost casually into serious trouble, then suddenly take massive action. That pattern may hold here, and the US might eventually see the dangers and very quickly mount an all-out effort to secure superintelligence, but that doesn't mean we should all sit back and wait for it to happen. Given the level of existential risk posed by superintelligence, and with this bill being a leap into regulating it, we have an opportunity to bring that threshold for decisive action closer to the present, raising the probability of safe superintelligence and lowering the probability of (in the worst case) mass destruction. If passed, this bill could be very beneficial in the long run.

All this being said, think for yourself; I am one (not very well-informed) person. If you decide you care about this bill, though, the ball is in Gavin Newsom's court, and you can fill out his contact form, call, or mail/email your thoughts here:

Contact Form: https://www.gov.ca.gov/contact/

Call: (916) 445-2841

Mail: Governor Gavin Newsom

1021 O Street, Suite 9000

Sacramento, CA 95814

Email (this might not be right, I found it online): gavin.newsom@gov.ca.gov

This may sound dramatic, but please do something if you have strong opinions on SB 1047. This bill could have effects that reach all of humanity, as it could kick off the government's major alignment efforts. What a time to be alive. Thanks for reading.

P.S. I'm human, so feel free to point out any mistakes I made or flaws in my writing. Also, I am relatively new here.

  1. ^

    Who will conduct these audits? How will they be enforced? If you have info, I'd love to hear it.

  2. ^

    The bill also details the need for anonymous reporting systems within the companies producing these models.

  3. ^

    It will be interesting to see the level of enforcement.
