
Hi!


📰 AI Regulation Progress

🌍 California's SB 294, the Safety in AI Act, introduced by Senator Scott Wiener on September 13, aims to enhance AI safety. The bill outlines provisions for responsible scaling, liability for safety risks, CalCompute, and KYC policies.

🇬🇧 UK's Frontier AI Taskforce unveiled its expert panel, including luminaries like David Krueger and Yoshua Bengio.

🇪🇺 Ursula von der Leyen, in her State of the European Union (SOTEU) speech, spoke of the need for AI regulation, with particular attention to safety.

🇺🇸 Senators Richard Blumenthal and Josh Hawley announced a bipartisan AI regulation framework in the US, which would introduce licensing for AI training.

These developments mark a profound shift in AI policy in just six months and are a testament to tireless advocacy and visionary policymaking. Unfortunately, none of the current legislative proposals would prevent or delay the development of superintelligent AI:

Policy scorecards by Daniel Colson / AI Policy Institute

🔬 USA AI x-risk perception tracker

📊 The second wave, conducted from August 27 to 28, 2023, showed that x-risk perception remained steady, while more people in the USA appear to agree with the incorrigibility of advanced AI and with short AGI timelines:


📣 International #PauseAI protests on 21 October 2023

🌍 On 21 October, join #PauseAI protests across the globe. From San Francisco to London, Jerusalem to Brussels, and more, we unite to address the rapid rise of AI power. Our message is clear: it's time for leaders to take AI risks seriously.

🗓️ October 21st (Saturday), in multiple countries
🇺🇸 US, California, San Francisco (Sign up)
🇬🇧 UK, Parliament Square, London (Sign up, Facebook)
🇮🇱 Israel, Jerusalem (Sign up)
🇧🇪 Belgium, Brussels (Sign up)
🇳🇱 Netherlands, Den Haag (Sign up)
🇮🇹 Italy (Sign up)
🇩🇪 Germany (Sign up)
🌎 Your country here? Discuss on Discord!


📣 Protest against the irreversible proliferation of model weights at Meta HQ

Stand with Holly Elmore for AI safety! Meta's open release of AI model weights puts our safety at risk!

🗓️ Protest: 29 September 2023, 4:00 PM PDT
📍 Location: 250 Howard St, outside Meta Office Building, San Francisco


📃 Policy updates

On the policy front, we have made our submission to the Canadian Guardrails for Generative AI – Code of Practice consultation by Innovation, Science and Economic Development Canada.

Next, we are working on the following:

Do you know of other inquiries? Please let us know. You may respond to this email if you want to contribute to the upcoming consultation papers.


📜 Petition updates

🇬🇧 For our supporters in the UK, there's an ongoing petition led by Greg Colbourn. It urges the global community to consider a worldwide moratorium on AI technology development due to the risk of human extinction. As of now, the petition has garnered 48 signatures in support of this crucial cause.


Campaign media coverage

The Roy Morgan research into Australians' attitudes regarding AI and x-risk was covered in ACS Information Age, B&T, Cryptopolitan, Startup Daily, Women's Agenda, and mentioned on Sky News.

InDaily (South Australia) wrote about the recent South Australian consultation, focusing on the use of AI tools in the public sector.


Thank you for your support! Please donate to the campaign to help us fund ads in London ahead of the UK AI Safety Summit, and please share this email with friends.

Campaign for AI Safety
campaignforaisafety.org
