Quick takes

I went to jail yesterday in Wisconsin. I helped rescue 23 beagles in a mass open rescue at a factory farm, Ridglan Farms, near Madison. We were trying to push the police to act on documented animal cruelty at Ridglan; instead, they arrested me and 26 other activists. I wrote a blog post about why I did it. More info and stories from Wayne Hsiung: https://blog.simpleheart.org/p/im-in-jail-for-rescuing-dogs-its If you're in the DC area, I'll be sharing more about my experience at Revolutionists' Night, an animal welfare meetup, this Thursday. Reach out for an invite. [Edited to add:] I believe there is a lawful basis for this action and I intend to fight any attempted prosecution in court! I'm not advocating any illegal activity, of course.
In two days (March 21st, 12-4pm), about 140 of us (event link) will be marching on Anthropic, OpenAI, and xAI in SF, asking the CEOs to state whether they would stop developing new frontier models if every other major lab in the world credibly did the same. This comes after Anthropic removed its commitment to pause development from its RSP. We'll be starting at 500 Howard St, San Francisco (Anthropic's office; full schedule and more info here). This is shaping up to be the biggest US AI safety protest to date, with a coalition including Nate Soares (MIRI), David Krueger (Evitable), Will Fithian (Berkeley professor), and folks representing PauseAI, QuitGPT, and Humans First.
AI Czar attacks EA. (Again.) Today, in this post on X, the U.S. 'AI Czar' David Sacks directly attacked Humans First, an AI safety advocacy organization, claiming it is nothing more than a 'censorship power play': a shadowy campaign by Effective Altruists to turn the conservative right against the AI industry and block technological progress. He quote-posted a blog post by Jordan Schachtel titled 'Built to Deceive: How the Effective Altruist Machine Infiltrated the Conservative Right on AI'. As an AI safety advocate, a member of Humans First, an Effective Altruist, and a political conservative, I'm angry about this misrepresentation of the AI safety campaign, and I think EAs should push back harder against senior federal officials smearing our movement. Any suggestions on how to respond? I don't have time this week to write a detailed rebuttal, but I'd be happy to link to and promote anything that others write.
UC Berkeley EA is hosting a west coast uni student EA retreat on April 10-12, with ~50 attendees from Berkeley, Stanford, UCLA, UCI, UCSD, & more, as well as special guests like Matt Reardon, Jake McKinnon, Jesse Gilbert, Julie Steele, Adam Khoja, Richard Ren, & more...

...but we only know to reach out to people who're involved with their uni's clubs. So: if you're interested in attending, book a 5-10 minute chat with Alex or Aiden :)

Some examples of gaps in our outreach:

* unis that don't have an EA club
* students who haven't joined their uni's EA club
* transfers to west-coast unis
* students who're on leave from their uni and presently living on the west coast
* high-schoolers who'll soon be starting at west coast unis

We won't be able to take everyone, but reading the EA Forum is a pretty positive indicator that you'd be a good fit!
We're sadly no longer accepting sign-ups for our founder's programme. We've had an influx of demand and are now fully at capacity for the foreseeable future. Our funding situation is precarious, and I've sadly got to focus on that now. Our results are nuts, but mental health funders are focussed on LMICs and meta funders don't like mental health interventions, so it's a challenging category to even survive in. For now, I've got to focus on doing a good job for our existing clients. I'm sorry!