Building effective altruism
Growing, shaping, or otherwise improving effective altruism as a practical and intellectual project

Quick takes

The Forum should normalize public red-teaming for people considering new jobs, roles, or project ideas. If someone is seriously thinking about a position, they should feel comfortable posting the key info (org, scope, uncertainties, concerns, arguments for) and explicitly inviting others to stress-test the decision. Some of the best red-teaming I've gotten hasn't come from my closest collaborators (whose takes I can often predict), but from semi-random thoughtful EAs who notice failure modes I wouldn't have caught alone, or who think differently enough to instantly spot things that would have taken me much longer to figure out.

Right now, a lot of this only happens at EAGs or in private docs, which feels like an information bottleneck. If many thoughtful EAs are already reading the Forum, why not use it as a default venue for structured red-teaming? Public red-teaming could:

* reduce unilateralist mistakes,
* prevent coordination failures (I've almost spent serious time on things multiple people were already doing; reinventing the wheel is common and costly).

Obviously there are tradeoffs (confidentiality, social risk, signaling concerns), but I'd be excited to see norms shift toward "post early, get red-teamed, iterate publicly," rather than waiting for a handful of coffee chats.
Why don’t EA chapters exist at very prestigious high schools (e.g., Stuyvesant, Exeter, etc.)? It seems like a relatively low-cost intervention (especially compared to something like Atlas), and these schools produce unusually strong outcomes. There’s also probably less competition than at universities for building genuinely high-quality intellectual clubs (this could totally be wrong).
Dwarkesh (of the famed podcast) recently posted a call for new guest scouts. Given how influential his podcast is likely to be in shaping discourse around transformative AI (among other important things), this seems worth flagging and applying for — at least for students or early-career researchers in bio, AI, history, econ, math, or physics who have a few extra hours a week. The role is remote, pays ~$100/hour, and expects ~5–10 hours/week. He's looking for people who are deeply plugged into a field (e.g. grad students, postdocs, or practitioners) with high taste. Beyond scouting guests, the role also involves helping assemble curricula so he can rapidly get up to speed before interviews. More details are in the blog post; link to apply (due Jan 23 at 11:59pm PST).
EAGx and Summit events are coming up, and we're looking for organizers for more!

Applications for EAGxCDMX (Mexico City, 20–22 March), EAGxNordics (Stockholm, 24–26 April), and EAGxDC (Washington DC, 2–3 May) are all open. These will be the largest regional-focused events in their respective areas, and are aimed at serving those already engaged with EA or doing related professional work. EAGx events are networking-focused conferences designed to foster strong connections within their regional communities.

If you'd like to apply to join the organizing team for a 2026 Bay Area EAGx (date and venue to be confirmed, targeting August–September), please apply via this form. Full details can be found here.

We also have applications or direct registrations open for EA Summits in Helsinki (28 Feb), Hong Kong (7 March), and Jakarta (19 April), with more to be announced soon. Summits welcome existing EA community members, but they also include more introductory content, making them a great way for newer, EA-curious professionals to learn about EA and explore potential opportunities. Please keep them in mind to recommend to friends and colleagues who you think could benefit from in-person exposure to EA ideas and the real people behind them.

If you are interested in hosting an EAGx or Summit in your city, or want to nominate an area for consideration, please fill out this form!
Not sure who needs to hear this, but Hank Green has published two very good videos about AI safety this week: an interview with Nate Soares and a SciShow explainer on AI safety and superintelligence. Incidentally, he appears to have also come up with the ITN framework from first principles (h/t @Mjreard). Hopefully this is auspicious for things to come?
Hey y'all,

My TikTok algorithm recently presented me with this video about effective altruism, with over 100k likes and (TikTok claims) almost 1 million views. That isn't a ridiculous amount, but it's a pretty broad audience to reach with one video, and the framing isn't particularly kind to EA. As far as criticisms go, it's not the worst: it starts with Peter Singer's thought experiment and takes the moral imperative seriously as a concept. But it also frames several EA and EA-adjacent activities negatively, saying EA "has an enormously well funded branch ... that is spending millions on hosting AI safety conferences."

I think there's a lot to take from it. The first point relates to @Bella's recent argument that EA should be doing more to actively define itself. This is what happens when it doesn't. EA is legitimately an interesting topic to learn about because it asks an interesting question; that's what I assume drew many of us here to begin with. It's interesting enough that when outsiders make videos like this, even when they're not the picture that we'd prefer, they will capture the attention of many. This video is a significant impression, but it's not the end-all-be-all, and we should seek to define ourselves lest we be defined by videos like it.

The second point is about zero-sum attitudes and leftism's relation to EA. In the comments, many views like this were presented. @LennoxJohnson grappled with this really thoughtfully a few months ago, when he described his journey from a zero-sum form of leftism and a focus on structural change toward becoming more sympathetic to the orthodox EA approach. But I don't think we can necessarily depend on similar reckonings happening to everyone, all at the same time. Here, I think there's a much less clear solution than for the PR problem, as I think on the one hand that EA sometimes doesn't grapple enough with systemic change, but on the other hand that society would be
As a university organizer at a very STEM focused state school, I suspect that students getting liberal arts degrees are more easily convinced to pursue a career in direct work. If this is the case, it could be because direct work compares more favorably with the other career options of those with liberal arts degrees, or because the clearer career outcomes of STEM majors create more path dependence and friction when they consider switching careers. This is potentially another thing to keep in mind when trying to compare the successes of EA uni groups.
After hanging out with the local Moral Ambition group (sadly there's only one in Malmö), I've found a shorthand to express the difference in methodology compared to EA. Both movements aim to find people who already have the "A," and cultivate the other component in them. Many effective altruism communities target people who already wish to help the world (Altruism), then guide and encourage them to reach further (be more Effective). Moral Ambition meanwhile targets high-achieving professionals and Ivy Leaguers (Ambition), then reminds them that the world is burning and they should help put out the fire (be more Moral).