Community
Posts about the EA community and projects that focus on the EA community

Quick takes

1
1h
The Straw and the Camel's Back

I recently had a colleague complain that oat milk was a 'luxury' the work coffee machines didn't need. And this tiny little comment kind of broke me. I feel like I am so careful not to judge or lecture everyone around me for their insanely massive moral failings around animal welfare or donating - yet apparently people can't even let me have my suffering-free milk in peace.

Which prompted me to re-evaluate something I hadn't really thought about in a long time: being EA (or EA-adjacent, or however people wanna identify) is just really hard sometimes. I used to be more actively advocatey about things, but it can be exhausting, and at some point I just kinda stopped. Now I feel very motivated to figure out how to start being a lil more vocal again, because it turns out that pretending I don't have strong opinions on these things is also exhausting!

Which is all to get to the point: there are a lot of posts on here about EA being hard and about how to talk about EA, and reading those posts helps give a feeling of support, but knowing this doesn't magically make it all easier. I am just really grateful for this awesome community, and want to normalise sharing when it gets hard a bit more, because that's ok. We are doing a hard thing.

(Note: while this one colleague clearly pushed my buttons, further reflection left me very happy that a bunch of other people had clearly been advocating to get the oat milk at LUMC - I'm very glad they exist and that they succeeded.)
15
9d
A quick reminder that applications for EA Global: London 2026 close this Sunday (May 10)! We already have more applications than last year, and this looks set to be our biggest EAG yet (again)! If you've been meaning to apply but haven't gotten around to it, this is your sign. The admissions bar is more accessible than people often assume. If you're working on or seriously exploring a high-impact problem, you should apply. This is the EAG I've been most excited to put together yet. I'd love to see you all there. 📍 InterContinental London, The O2 · 29-31 May 2026 ⏰ Applications close: Sunday, May 10 🔗 Apply here
32
1mo
I am currently the only Fund Manager at the EA Infrastructure Fund... and that needs to change! I work full-time on something else within the Centre for Effective Altruism, and the EAIF needs a dedicated owner who will drive it forwards.

I think we're sitting on a big opportunity here. There's so much that the EA movement could achieve, and so much great work that could be enabled by EAIF. Some indicators of promise:

* CEA is growing, but there's only so much that CEA can work on in-house. We need to fund and nurture great work that's happening elsewhere, too!
* There are potential new sources of funding that EAIF could tap into; building a strong product here that donors are excited about is essential.
* We have a mini roadmap laid out by recent successes within EA Funds.

Let me say more on that last one. I've been extremely impressed by what another EA Fund, the Animal Welfare Fund, has achieved over the past year or two, improving its evaluation quality, its staffing, and its available pool of resources. I think the EAIF has the potential for a similar rocketship trajectory; it needs the right person to come in and make that happen.

CEA is hiring for a new Head of the EA Infrastructure Fund: full job description and application form here; apply by 4th May. Let me know if you have questions! I can't promise deep engagement with all potential candidates, but I'll help out with key/quick uncertainties if I can. Some additional thoughts from Loic, the new Head of EA Funds, here.
123
1y
2
In light of recent discourse on EA adjacency, this seems like a good time to publicly note that I still identify as an effective altruist, not EA-adjacent. I am extremely against embezzling people out of billions of dollars, and FTX was a good reminder of the importance of "don't do evil things for galaxy-brained altruistic reasons". But this has nothing to do with whether or not I endorse the philosophy that "it is correct to try to think about the most effective and leveraged ways to do good, and then actually act on them". And there are many people in or influenced by the EA community whom I respect and who I think do good and important work.
5
10d
4
At what level of compute spending (if any) does AI safety research stop being considered effective altruism? Of course, saving humanity from misaligned AI could be argued to be close to priceless. But how many experiments have a direct theory of change (ToC) for how they will mitigate existential risk?

Perhaps a general one is fine at low compute ("it only costs $10, and 'control research' is generally thought to be a good research agenda"). But what about $5,000? What about $10,000? These numbers start to compare to, or surpass, what organizations like Giving What We Can receive from someone who donates for a whole year. They also start to compete with saving a human life via programmes like those in GiveWell's top charities.

What about $20,000? $30,000? $50,000? Over what time frame are we comfortable spending that much money on compute and still considering it money well (effectively) spent? A year? A month? A single experiment? What kind of discovery is worth $50,000 in AIS research? Should we expect a clear ToC?

I'm very pro AI safety, but I'm worried about some of the compute budget numbers being thrown around (compared to the information gained). I'm wondering: is anyone else worried about a movement (famously) concerned with cost-effectiveness continuing on this path? Should we encourage more accountability?
199
3y
6
I'm going to be leaving 80,000 Hours and joining Charity Entrepreneurship's incubator programme this summer! The summer 2023 incubator round is focused on biosecurity and scalable global health charities, and I'm really excited to see what's the best fit for me and hopefully launch a new charity. The ideas that the research team have written up look really exciting, and I'm trepidatious about the challenge of being a founder but psyched to get started. Watch this space! <3

I've been at 80,000 Hours for the last 3 years. I'm very proud of the 800+ advising calls I did and feel very privileged I got to talk to so many people and try to help them along in their careers! I've learned so much during my time at 80k. And the team at 80k has been wonderful to work with - so thoughtful, committed to working out what is the right thing to do, kind, and fun - I'll for sure be sad to leave them.

There are a few main reasons why I'm leaving now:

1. New career challenge - I want to try out something that stretches my skills beyond what I've done before. I think I could be a good fit for being a founder and running something big, complicated, and valuable that wouldn't exist without me - I'd like to give it a try sooner rather than later.
2. Stepping away from EA community building a bit after the recent crises - Events over the last few months in EA made me re-evaluate how valuable I think the EA community and EA community building are, as well as re-evaluate my personal relationship with EA. I haven't gone to the last few EAGs and have switched my work away from doing advising calls for the last few months while processing all this. I have been somewhat sad that there hasn't been more discussion and change by now, though I have been glad to see more EA leaders share things more recently (e.g. this from Ben Todd). I do still believe there are some really important ideas that EA prioritises, but I'm more circumspect about some of the things I think we're not doing as well as we could (
55
10mo
5
I am sure someone has mentioned this before, but… For the longest time, and to a certain extent still, I have found myself deeply blocked from publicly sharing anything that wasn’t significantly original. Whenever I found an idea already existing anywhere, even as a footnote on an underrated 5-karma post, I would be hesitant to write about it, since I thought I wouldn’t add value to the “marketplace of ideas.” In this abstract conception, the “idea is already out there” - so the job is done, the impact is set in place. I have talked to several people who feel similarly; people with brilliant thoughts and ideas who claim to have “nothing original to write about” and therefore refrain from writing.

I have come to realize that some of the most worldview-shaping and actionable content I have read and seen was not the presentation of a uniquely original idea, but often a better-presented, better-connected, or even just better-timed presentation of existing ideas. I now think of idea-sharing as a much more concrete but messy contributor to impact, one that requires the right people to read the right content in the right way at the right time; maybe even often enough, sometimes even from the right person on the right platform, etc.

All of that to say: the impact of your idea-sharing goes far beyond the originality of your idea. If you have talked to several cool people in your network about something and they found it interesting and valuable to hear, consider publishing it! Relatedly, there are many more reasons to write other than sharing original ideas and saving the world :)
55
1y
1. If you have social capital, identify as an EA.
2. Stop saying Effective Altruism is "weird", "cringe", and full of problems so often.

And yes, "weird" has negative connotations to most people. Self-flagellation once helped highlight areas needing improvement. Now overcorrection has created hesitation among responsible, cautious, and credible people who might otherwise publicly identify as effective altruists. As a result, the label increasingly belongs to those willing to accept high reputational risks or use it opportunistically, weakening the movement’s overall credibility.

If you're aligned with EA’s core principles, thoughtful in your actions, and have no significant reputational risks, then identifying openly as an EA is especially important. Normalising the term matters. When credible and responsible people embrace the label, they anchor it positively and prevent misuse.

Offline, I was early to criticise Effective Altruism’s branding and messaging. Admittedly, the name itself is imperfect. Yet at this point it is established and carries public recognition. We can't discard it without losing valuable continuity and trust. If you genuinely believe in the core ideas and engage thoughtfully with EA’s work, openly identifying as an effective altruist is the logical next step.

Specifically, if you already have a strong public image, align privately with EA values, and have no significant hidden issues, then you're precisely the person who should step forward and put skin in the game. Quiet alignment isn’t enough. The movement’s strength and reputation depend on credible voices publicly standing behind it.
