EAGx and Summit events are coming up, and we're looking for organizers for more!
Applications for EAGxCDMX (Mexico City, 20–22 March), EAGxNordics (Stockholm, 24–26 April), and EAGxDC (Washington DC, 2–3 May) are all open! These will be the largest regionally focused events in their respective areas, and are aimed at serving those already engaged with EA or doing related professional work. EAGx events are networking-focused conferences designed to foster strong connections within their regional communities.
If you’d like to apply to join the organizing team for a 2026 Bay Area EAGx (date and venue to be confirmed, targeting August–September), please apply via this form. Full details can be found here.
We also have applications or direct registrations open for EA Summits in Helsinki (28 Feb), Hong Kong (7 March), and Jakarta (19 April), with more to be announced soon. Summits welcome existing EA community members, but they also include more introductory content, making them a great way for newer, EA-curious professionals to learn about EA and explore potential opportunities. Please keep them in mind to recommend to friends and colleagues who you think could benefit from in-person exposure to EA ideas and the real people behind them.
If you are interested in hosting an EAGx or Summit in your city, or want to nominate an area for consideration, please fill out this form!
Why don’t EA chapters exist at very prestigious high schools (e.g., Stuyvesant, Exeter, etc.)?
It seems like a relatively low-cost intervention (especially compared to something like Atlas), and these schools produce unusually strong outcomes. There’s also probably less competition than at universities for building genuinely high-quality intellectual clubs (this could totally be wrong).
As a university organizer at a very STEM-focused state school, I suspect that students getting liberal arts degrees are more easily convinced to pursue a career in direct work. If this is the case, it could be because direct work compares more favorably with the other career options of those with liberal arts degrees, or because the clearer career outcomes of STEM majors create more path dependence and friction when they consider switching careers. This is potentially another thing to keep in mind when trying to compare the successes of EA uni groups.
Dwarkesh (of the famed podcast) recently posted a call for new guest scouts. Given how influential his podcast is likely to be in shaping discourse around transformative AI (among other important things), this seems worth flagging and applying for (at least, for students or early-career researchers in bio, AI, history, econ, math, or physics who have a few extra hours a week).
The role is remote, pays ~$100/hour, and expects ~5–10 hours/week. He’s looking for people who are deeply plugged into a field (e.g. grad students, postdocs, or practitioners) with high taste. Beyond scouting guests, the role also involves helping assemble curricula so he can rapidly get up to speed before interviews.
More details are in the blog post; link to apply (due Jan 23 at 11:59pm PST).
Not sure who needs to hear this, but Hank Green has published two very good videos about AI safety this week: an interview with Nate Soares and a SciShow explainer on AI safety and superintelligence.
Incidentally, he appears to have also come up with the ITN framework from first principles (h/t @Mjreard).
Hopefully this is auspicious for things to come?
After hanging out with the local Moral Ambition group (sadly there's only one in Malmö), I've found a shorthand to express the difference in methodology compared to EA. Both movements aim to find people who already have the "A," and cultivate the other component in them.
Many effective altruism communities target people who already wish to help the world (Altruism), then guide and encourage them to reach further (be more Effective).
Moral Ambition, meanwhile, targets high-achieving professionals and Ivy Leaguers (Ambition), then reminds them that the world is burning and they should help put out the fire (be more Moral).
Hey y'all,
My TikTok algorithm recently presented me with this video about effective altruism, with over 100k likes and (TikTok claims) almost 1 million views. This isn't a ridiculous amount, but it's a pretty broad audience to reach with one video, and it's not a particularly kind framing of EA. As far as criticisms go, it's not the worst: it starts with Peter Singer's thought experiment and takes the moral imperative seriously as a concept. But it also frames several EA and EA-adjacent activities negatively, saying EA, quote, "has an enormously well funded branch ... that is spending millions on hosting AI safety conferences."
I think there's a lot to take from it. The first lesson relates to @Bella's recent argument that EA should be doing more to actively define itself. This is what happens when it doesn't. EA is legitimately an interesting topic to learn about because it asks an interesting question; that's what I assume drew many of us here to begin with. It's interesting enough that when outsiders make videos like this, even when they don't paint the picture we'd prefer,[1] they will capture the attention of many. This video is a significant impression, but it's not the end-all-be-all, and we should seek to define ourselves lest we be defined by videos like it.
The second is about zero-sum attitudes and leftism's relation to EA. In the comments, many views like this were presented:
@LennoxJohnson really thoughtfully grappled with this a few months ago, when he described his journey from a zero-sum form of leftism and a focus on structural change towards greater sympathy for the orthodox EA approach. But I don't think we can depend on similar reckonings happening to everyone, all at the same time. With this, I think there's a much less clear solution than the PR problem, as I think on the one hand that EA sometimes doesn't grapple enough with systemic change, but on the other hand that society would be
Oscar Wilde once wrote that "people nowadays know the price of everything and the value of nothing." I can see a particular type of uncharitable EA critic say the same about our movement, grossed out by how we try to put a price tag on human (or animal) lives. This is wrong.
What they should be appalled by is if a life were truly worth only $3500, but that's not what we are claiming. The claim is that a life is invaluable; the world just happens to be arranged such that we can buy this incredibly precious thing for the meager cost of a few thousand dollars.
Nate Soares has written about this in greater detail here.