Creating superintelligent artificial agents without a worldwide referendum is ethically unjustifiable. Until a consensus is reached on whether to bring such technology into existence, a global moratorium is required (n.b. we already have AGI).
If antinatal advocacy were effective, wouldn't it make sense to pursue it on animal welfare grounds? Aren't most new humans extremely net negative?
I have a 3YO, so hold fire!
AI Safety Monthly Meetup - Brief Impact Analysis
For the past 8 months, we (AIS ANZ) have been running consistent community meetups across 5 cities (Sydney, Melbourne, Brisbane, Wellington and Canberra). Each meetup averages about 10 attendees, with roughly 50% being first-time participants, driven primarily through LinkedIn and email outreach. I estimate we're creating unique AI Safety related connections for around $6 each.
Volunteer Meetup Coordinators organise the bookings, pay for food and beverages (I reimburse them after the fact) and greet attendees. This initiative would literally be impossible without them.
Key Metrics:
ROI: We're creating unique AI Safety related connections at $6 per connection, with additional network effects as members continue to attend and connect beyond their initial meetup.
I didn't want to read all of @LintzA's post "The Game Board has been Flipped" and all 43+ comments, so I copy/pasted the entire webpage into Claude with the following prompt: "Please give me a summary of the author's argument (dot points, explained simply) and then give me a summary of the kinds of support and push back they got (dot points, explained simply, thematised, giving me a sense of the concentration/popularity of themes in the push back)"
Below is the result (the Forum team might want to consider how posts with large numbers of comments can be read quickly):
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Main Arguments:
Common Themes in Response (ordered by apparent prominence in comments):
Strong Agreement/Supporting Points:
Major Points of Disagreement:
Technical/Factual Corrections:
Other Notable Points:
Overall Tone of Reception: The piece appears to have been well-received as a useful overview of recent developments, but with significant pushback on specific strategic recommendations, particularly around working at AI labs and political strategy.
One axis where Capabilities and Safety people pull apart the most, with high consequences, is "asking for forgiveness instead of permission."
1) Safety people need to get out there and start making stuff without waiting for a high-prestige ally to nod first
2) Capabilities people need to consider more seriously that they're building something many people simply do not want
You don't need EAs Greg - you've got the general public!