Quick takes

Kelsey Piper’s article on SB 1047 says:

> This is one of the questions animating the current raging discourse in tech over California’s SB 1047, newly passed legislation that mandates safety training for companies that spend more than $100 million on training a “frontier model” in AI — like the in-progress GPT-5. Otherwise, they would be liable if their AI system leads to a “mass casualty event” or more than $500 million in damages in a single incident or set of closely linked incidents.

I’ve seen similar statements elsewhere too. But after I spent some time today reading through the bill, this seems to be wrong? Liability for developers doesn’t seem to depend on whether “critical harm” is actually done. Instead, if the developer fails to take reasonable care to prevent critical harm (or commits some other violation), then even if no critical harm occurs, violations that cause death, bodily harm, etc. can lead to fines of 10% or 30% of training compute cost. Here’s the relevant section from the bill:

> (a) The Attorney General may bring a civil action for a violation of this chapter and to recover all of the following:
>
> (1) For a violation that causes death or bodily harm to another human, harm to property, theft or misappropriation of property, or that constitutes an imminent risk or threat to public safety that occurs on or after January 1, 2026, a civil penalty in an amount not exceeding 10 percent of the cost of the quantity of computing power used to train the covered model to be calculated using average market prices of cloud compute at the time of training for a first violation and in an amount not exceeding 30 percent of that value for any subsequent violation.

Has there been discussion about this somewhere else already? Is the Vox article wrong, or am I misunderstanding the bill?
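To make the penalty structure concrete, here is a minimal sketch of the cap calculation as I read Section (a)(1) above. The $100 million training-cost figure is purely an illustrative assumption (it happens to match the bill’s coverage threshold), not a number the penalty clause itself supplies:

```python
# Illustrative sketch of the SB 1047 civil penalty caps as quoted above:
# the cap is a fraction of the cost of the compute used to train the
# covered model, at average cloud market prices at the time of training.

def penalty_cap(training_compute_cost: float, first_violation: bool) -> float:
    """Maximum civil penalty: 10% of training compute cost for a first
    violation, 30% for any subsequent violation."""
    rate = 0.10 if first_violation else 0.30
    return rate * training_compute_cost

# Assumed example: a model whose training compute cost $100 million.
assumed_cost = 100_000_000
print(penalty_cap(assumed_cost, first_violation=True))   # 10000000.0 ($10M cap)
print(penalty_cap(assumed_cost, first_violation=False))  # 30000000.0 ($30M cap)
```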
Happy Ozone Day! The Montreal Protocol, a universally ratified treaty phasing out the use of ozone-destroying CFCs, was signed 37 years ago today. It remains one of the greatest examples of international cooperation to date.
It seems like some of the biggest proponents of SB 1047 are Hollywood actors and writers (e.g. Mark Ruffalo), whom you might remember from last year’s strike. I think the AI Safety movement has a big opportunity to partner with organised labour, the way the animal welfare side of EA partnered with vegans. These are massive organisations with a lot of weight and mainstream power; if we can find ways to work with them, it’s a big shortcut to building serious groundswell rather than going it alone. See also Yanni’s work with voice actors in Australia: more of this!