
I am cross-posting the content below from the latest newsletter from campaignforaisafety.org.

I did not write this content, but as an advocate for the organisation I fully endorse it.

I'd also like to add that the question of whether we (i.e. anyone) should be doing mass outreach on the topic of AI Safety is over. It is happening. Several initiatives are either set up or being set up.

The question for people reading this is: *how* do you want to be involved?

Greg outlined several ways you can get involved in his post here. Please check it out.

Anyway, here is the update from Nik Samoylov (founder of campaignforaisafety.org):

Campaign for AI Safety



🤑 First of all, thank you to the donors and paid subscribers. The campaign account now sits at $2,073.24, though more than that is being spent each week on running the campaign.

🦜 There is a new Slack: AGI Moratorium HQ. It has 160+ like-minded people doing different things.

My (i.e. Nik's) personal focus this month is on message testing, with the goal of creating a handbook on communicating existential risk from AI and calling for a moratorium on AI capability advancement.

✍️ One element of this is testing narratives that can convince people of the need for such a moratorium. They will be tested in surveys like this one. If you would like to contribute a narrative for testing, please feel free to add it.

Add your narrative to testing

🙈 Also, you can check out the results of survey testing of billboards.

📻 A test radio ad is running this month in Cairns, Australia, on Star 102.7 FM and 4CA 846 AM.


Is it a good ad? Send your feedback! It's not the last one.

So far I have observed that it needs to mention AI / artificial intelligence in a few places to accommodate people who may be just tuning in in the middle of the ad.

👍 Activity of the week is liking and subscribing to the newly created LinkedIn and Instagram pages.

Thank you for your support! Please share this email with friends.

Nik Samoylov from Campaign for AI Safety



the question of whether we (i.e. anyone) should be doing mass outreach on the topic of AI Safety is over. It is happening.


This feels like a very hostile statement. It's not at all obvious that this question is settled.

I personally feel a lot more cautious about doing mass outreach. I think there's a decent chance people could accidentally do significant harm to future efforts. Policy, politics and advocacy are complicated - regardless of the area you're working in.

For what it's worth, I've spoken to Nik and I think some of the work he's doing is great. I'm especially excited about narrative testing.

Whilst I didn't write that, I do basically feel the same way. Sorry if it comes across as hostile, but we're in a pretty desperate situation. Analysis paralysis here could actually be lethal. What timelines are you envisaging re "future efforts"? I feel like we have a few months to get a Pause in place if we actually want a high (90%) chance of survival. The H100 "summoning portals" are already being built.

The Slack invite only works if you have a @ea-maastricht.org email address. Is there a link for people who don't have that? 


Hmmm, does this work? https://join.slack.com/t/agi-moratorium-hq/shared_invite/zt-1xg02vzmp-8cFeFm3rw3ZAGX7byHJVMA