Our Mission: To build a multidisciplinary field around using technology—especially AI—to improve the lives of nonhumans now and in the future.
Overview
Background
This hybrid conference had nearly 550 participants and took place March 1-2, 2025 at UC Berkeley. It was organized on a $74k budget by AI for Animals' volunteer core organizers Constance Li, Sankalpa Ghose, and Santeri Tani.
This conference has evolved since 2023:
* The 1st conference mainly consisted of philosophers and followed a single-track lecture/panel format.
* The 2nd conference put all lectures on one day and followed it with 2 days of interactive unconference sessions happening in parallel and a week of in-person co-working.
* This 3rd conference had a week of related satellite events, free shared accommodations for 50+ attendees, 2 days of parallel lectures/panels/unconferences, and 80 unique sessions (32 of which are available on YouTube), plus Swapcard to enable 1:1 connections and a Slack community to continue conversations year-round.
We have been quickly expanding this conference to prepare those working toward the reduction of nonhuman suffering for the drastic and rapid changes that AI will bring.
Luckily, it seems like it has been working!
This year, many animal advocacy organizations attended (mostly smaller and younger ones), as well as newly formed groups focused on digital minds and funders who spanned both of these spaces. We also had a more diverse set of speakers and attendees, including economists, AI researchers, investors, tech companies, journalists, animal welfare researchers, and more. This was achieved through strategic targeted outreach and a bigger team of volunteers.
Outcomes
On our feedback survey, which had 85 total responses (mainly from in-person attendees), people reported an average of 7 new connections (defined as someone they would feel comfortable reaching out to for a favor like reviewing a blog post) and of those new connections, an average of 3
Could EA benefit from having a "bulldog"?
That is, a pugnacious (but scrupulous) public advocate of EA and EA-adjacent ideas. In the EA community currently, who might come closest to being something like EA's bulldog?
More precisely, I'm thinking of a hybrid between, say, Christopher Hitchens and Peter Singer (or perhaps Derek Parfit, for added dryness). A fiery, polemical wit married to a calm, analytical rigor.
A good, non-EA-affiliated example of this style is Alex J. O'Connor, better known as Cosmic Skeptic on YouTube, a student of philosophy at Oxford whose confrontational yet nuanced content on atheism and veganism is rather popular now (a good example is his speech on veganism and animal rights). On his podcast, he has interviewed Peter Singer, and he frequently cites Hitchens as an inspiration.
I suspect that many EAs would be skeptical of and cautious about this approach, for various reasons. Some versions of it would appear to cut against EA features commonly regarded as virtues: considerateness and cooperation (and their encouragement), epistemic modesty (e.g. focusing heavily on uncertainties), compassion in disagreement, respecting norms of agreeable conduct, etc.
Similarly, it seems to carry reputational risks, including a risk of doing accidental harm to EA's public image. In this sense, it risks being a hard-to-reverse decision, resulting in more costs than benefits (William MacAskill discusses this here). Maybe this is reason enough for an advocate of this kind not to wish to be publicly associated with EA even while supporting and highlighting its cause-areas.
On the other hand, perhaps at least some of this style can attract and/or sustain more positive, public attention than milder outreach approaches, and perhaps even shape public opinion more effectively.
There's much more to say, but this is already much longer than I intended. I'd love to read any thoughts on this, and/or to be pointed in the direction of previous, related discussion.
I feel like the main role of a bulldog is to fend off the fiery, polemical enemies of a movement. Atheism and veganism (and even AI safety, kind of) have clear opponents; I don't think the same is especially true of EA (as a collection of causes).
There are people who argue for localism, or the impracticality of measuring impact, but I can't think of the last time I've seen one of those people have a bad influence on EA. The meat industry wants to kill animals; theists want to promote religion; ineffective charities want to... raise funds? Not as directly opposed to what we're doing.
I suppose we did have the Will MacAskill/Giles Fraser debate at one point, though. MacAskill also took on Peter Buffett in an op-ed column. I don't know how he feels about those efforts in retrospect.
We could certainly use more eloquent/impassioned public speakers on EA topics (assuming they are scrupulous, as you say), but I wouldn't think of them as "bulldogs" -- just regular advocates.
This Letter made me feel like there can be organized opposition from ineffective charities.
Thank you, Aaron, these are great points!