EA Global NYC will be taking place 16-18 Oct 2026 at the Sheraton Times Square. Applications for NYC, and all 2026 EAGs, are open now!
After the success of last year's event, our first EAG in NYC and our largest US EAG in years, we're excited to return and build on that momentum. For more information, visit our website and contact hello@eaglobal.org with any questions.
EA Global: Bay Area 2026 will be taking place February 13–15 at the Hilton Union Square. Applications will open later this year.
We chose to delay our 2026 announcement to give ourselves more time to search for the best venue. We're now excited to announce our new venue and location, and we look forward to seeing many people in February!
I'd be doing less good with my life if I hadn't heard of effective altruism
I think I was pretty close to getting stuck in a semi-random corporate career path. I was always sympathetic to EA-like ideas. But I think seeing EAs actually 'do the thing' made me think I could and should do the thing too. I remember some feeling of "Well, I think that's 100% the correct approach, and I guess if these people are doing it I have no excuse but to do it too"
We’re excited to officially announce dates for EA Global: London 2026! From May 29–31, we’ll be hosting attendees at the InterContinental London – The O2 again.
Applications will open later this year.
We’re also aiming to confirm EA Global: Bay Area 2026 for February and will share more details as soon as possible.
As we look ahead, I wanted to highlight that EAG London 2025 was our biggest EAG ever! But more than that, it was our highest-rated event on record and one of our most cost-effective, based on net cost per attendee.
Thanks to e...
The percentage of EAs earning to give is too low
I'd be excited for more people to consider it as an option - particularly given how the funding landscape continues to evolve, and especially when thinking of Earning to Give as 'a career to maximise donations' rather than 'a normal career while donating 10%'
Well done on a great event! I’m really excited about low-cost, small events to help connect the community. We’ve made some progress in bringing down EAG costs since 2023 (EAG London this year will hopefully cost roughly 40% less per attendee than in 2023!), but I’m excited for events like this (and the EA Summits program our CEA colleagues are running) to keep happening, and I'm encouraging my team to reflect on our spending!
(I'm the EA Global Program Lead at CEA)
Thanks for your feedback (I lead the EAG team)! We value EAG referrals very highly and are really grateful for anyone who refers someone to us. As discussed in the post, rewards are intended "as small tokens of appreciation, not as financial incentives". We hope they're fun ways to show our appreciation and draw people's attention to the fact that they could be referring people.
We want to make sure we're not trivialising referrals though, and we'll bear this feedback in mind. Are you suggesting it would be better to have no incentive, or a more substantial monetary incentive?
I think there's a nice hidden theme in the EAG Bay Area content, which is about how EA is still important in the age of AI (disclaimer: I lead the EAG team, so I'm biased). It's not just a technical AI safety conference, but it's also not ignoring the importance of AI. Instead, it's showing how the EA framework can help prioritise AI issues, and bring attention to neglected topics.
For example, our sessions on digital minds with Jeff Sebo and the Rethink team, and our fireside chat with Forethought on post-AGI futures, demonstrate how there's important AI r...
EAG Bay Area Application Deadline extended to Feb 9th – apply now!
We've decided to extend the application deadline by one week from the previous deadline of Feb 2nd. We're receiving more applications than in the past two years, and we have a goal of increasing attendance at EAGs, which we think this extension will help with. If you've already applied, tell your friends! If you haven't, apply now! Don't leave it till the deadline!
You can find more information on our website.
Hi Niklas, Thanks for your comment. I’m the program lead for EAGs. I’ve put a few of my thoughts below:
Thanks! Yes you're correct that EAG Bay Area this year won't be GCR-focused and will be the same as other EAGs. Briefly, we're dropping the GCR-focus as CEA is aiming to focus on principles-first community building, and because a large majority of attendees last year said they would have attended a non-GCR focused event anyway.
EA Oxford and Cambridge are looking for new full-time organisers!
We’re looking for motivated, self-driven individuals with excellent communication and interpersonal skills, the ability to manage multiple projects, and the capacity to think deeply about community strategy.
ERA is hiring for an Ops Manager and multiple AI Technical and Governance Research Managers - Remote or in Cambridge, Part- and Full-time, ideally starting in March, apply by Feb 21.
The Existential Risk Alliance (ERA) is hiring for various roles for our flagship Summer Research Programme. This year, we will have a special focus on AI Safety and AI Governance. With the support of our networks, we will host ~30 ERA fellows, and you could be a part of the team making this happen!
Over the past 3 years, we have supported over 60 early career researchers fro...
TL;DR: A 'risky' career “failing” to have an impact doesn’t mean your career has “failed” in the conventional sense, and probably isn’t as bad as it intuitively feels.
Thanks for this post! I think I have a different intuition that there are important practical ways where longtermism and x-risk views can come apart. I’m not really thinking about this from an outreach perspective, more from an internal prioritisation view. (Some of these points have been made in other comments also, and the cases I present are probably not as thoroughly argued as they could be).
To the extent that a short-termist framing views going from 80% to 81% population loss as equally bad as going from 99% to 100%, it seems plausible to care less about e.g. refuges to evade pandemics. Other approaches like ALLFED and civilisational resilience work might also look less effective on the short-termist framing. Even if you also place some intrinsic weight on preventing extinction, this might not be enough to make these approaches look cost-effective.
ALLFED-type work is likely highly cost-effective from the short-term perspective; see global and country...
Thanks for this interesting analysis! Do you have a link to Foster's analysis of MindEase's impact?
How do you think the research on MindEase's impact compares to that of GiveWell's top charities? Based on your description of Hildebrandt's analysis, for example, it seems less strong than e.g. the several randomized controlled trials supporting distributing bed nets. Do you think discounting based on this could substantially affect the cost-effectiveness? (Given how much lower Foster's estimate of impact is, though, and that this is more heavily used in the overall cost-effectiveness, I would be interested to see whether this has a stronger evidence base.)
Thanks for this post Jack, I found it really useful as I haven't yet got round to reading the updated paper. This breakdown in the cluelessness section was a new arrangement to me. Does anyone know if this breakdown has been used elsewhere? If not, this seems like useful progress in better defining the cluelessness objections to longtermism.
Thanks very much for your post! I think this is a really interesting idea and it's really useful to learn from your experience in this area.
What would you think of the concern that these types of ads would be a "low fidelity" way of spreading EA that could risk misinforming people about EA? I think from my experience community building, it's really useful to be able to describe and discuss EA ideas in detail, and that there are risks to giving someone an incorrect view of EA. These risks include someone being critical of what they believe EA is, ...
Thanks so much for your thoughts Robert!
"What would you think of the concern that these types of ads would be a "low fidelity" way of spreading EA that could risk misinforming people about EA? I think from my experience community building, it's really useful to be able to describe and discuss EA ideas in detail, and that there are risks to giving someone an incorrect view of EA. These risks include someone being critical of what they believe EA is, and spreading this critique, as well as discouraging them from getting involved when they may ha...
I think I would have some worry that if external evaluations of individual grant recipients became common, this could discourage people from applying for grants in future, for fear of being negatively judged should the project not work out.
Potential grant recipients might worry that external evaluators may not have all the information about their project or the grant maker's reasoning for awarding the grant. This lack of information could then lead to unfair or incorrect evaluations. This would be more of a risk if it becomes common for people to write ...
Thanks for your comment Jack, that's a really great point. I suppose that we would seek to influence AI slightly differently for each reason:
e.g. you could reduce the chance of AI risk by stopping all AI development but then lose the other two benefits, or you could create a practically useful AI but not one that would guide humanity towards an optimal future. That being said, I reckon in practice a lot of work to improve the development of AI would hit all three. Though maybe if you view one reason as much more important than the others, then you focus on a specific type of AI work.
Thank you very much for this post, I found it very interesting. I remember reading the original paper and feeling a bit confused by it. It's not too fresh in my mind so I don't feel too able to try to defend it. I appreciate you highlighting how the method they use to estimate f_l is unique and drives their main result.
A range of 0.01 to 1 for f_l in your preferred model seems surprisingly high to me, though I don't understand the Lineweaver and Davis paper well enough to really comment on its result, which I think your range is based on. I think they ment...
Thanks for your comment athowes. I appreciate your point that I could have done more in the post to justify this "binary" of good and optimal.
Though the simulated minds scenario I described seems at first to be pretty much optimal, its value could be much larger if you thought it would last for many more years. Given large enough uncertainty about future technology, maybe seeking to identify the optimal future is impossible.
I think your resources, value and efficiency model is really interesting. My intuition is that values is the limiting factor. I can bel...
Thanks again for creating this post Neel. I can confirm I managed to write and publish my post in time!
I think that without committing to writing it here, my post would either have been made a few months later, or perhaps not have been published at all.
Thanks for your comment!
I hadn't thought to think about selection effects, thanks for pointing that out. I suppose Bostrom actually describes black balls as technologies that cause catastrophe but doesn't set the bar as high as extinction. Then drawing a black ball doesn't affect future populations drastically, so perhaps selection effects don't apply?
Also, I think in The Precipice Toby Ord makes some inferences about natural extinction risk given the length of time humanity has existed? Though I may not be remembering correctly. I think the logic was so...
Thanks for this post Akash, I found it really interesting to read. I definitely agree with your point about how friendly EAs can be when you reach out to them. I think this is something I've been aware of for a while, but it still takes me time to internalise and make myself more willing to reach out to people. But it's definitely something I want to push myself to do more, and encourage other people to do. No one is going to be unhappy about someone showing an interest in their work and ideas!
This is a really interesting idea. I think I instinctively have a couple of concerns about such an idea:
1) What is the benefit of such statements? Can we expect the opinion of the EA community to really carry much weight beyond relatively niche areas?
2) Can the EA community be sufficiently well defined to collect opinion? It is quite hard to work out who identifies as an EA, not least because some people are unsure themselves. I would worry any attempt to define the EA community too strictly (such as when surveying the community's opinion) could come across as exclusionary and discourage some people from getting involved.
Thanks for your response!
I definitely see your point on the value of information to the future civilisation. The technology required to reach the moon and find the cache is likely quite different to the level required to resurrect humanity from the cache, so the information could still be very valuable.
An interesting consideration may be how we value a planet being under human control vs control of this new civilisation. We may think we cannot assume that the new civilisation would be doing valuable things but that a human planet would be quite valuable. ...
Thanks for your comment, I found that paper really interesting and it was definitely an idea I'd not considered before.
My main two questions would be:
1) What is the main value of humanity being resurrected? - We could inherently value the preservation of humanity and its culture. However, my intuition would be that humanity would be resurrected in small numbers and these humans might not even have very pleasant lives if they're being analysed or experimented on. Furthermore, the resurrected humans are likely to have very little agency, being...
Considering evolutionary timelines is definitely very hard because it's such a chaotic process. I don't have too much knowledge about evolutionary history and am hoping to research this more. I think after most human existential events, the complexity of the life that remains would be much greater than that for most of the history of the Earth. So although it took humans 4.6 billion years to evolve "from scratch", it could take significantly less time for intelligent life to re-evolve after an existential event as a lot of the hard evol...
I believe it is the probability that a nuclear war occurs AND leads to human extinction, as described in The Precipice. I think I would agree that if it was just the probability of nuclear war, this would be too low; a large reason the number is small is the difficulty of a nuclear war causing human extinction.
Thanks for the elaboration. I haven't given much consideration to "desired dystopias" before and they are really interesting to consider.
Another dystopian scenario to consider could be one in which humanity "strands" itself on Earth through resource depletion. This could also prevent future life from achieving a grand future.
Hi Michael, thanks for this comment!
This is a really good point and something I was briefly aware of when writing but did not take the time to consider fully. I've definitely conflated extinction risk with existential risk. I hope that when restricting everything I said just to extinction risk, the conclusion still holds.
A scenario where humanity establishes its own dystopia definitely seems comparable to the misaligned AGI scenario. Any "locked-in" totalitarian regime would probably prevent the evolution of other intelligent life. This could cause us to increase our estimate of the risk posed by such dystopian scenarios and weigh these risks more highly.
Thanks for your comment Matthew. This is definitely an interesting effect which I had not considered. I wonder whether, even though the absolute AI risk may increase, it would not affect our actions, as we would have no way to influence the development of AI by future intelligent life once we are extinct. The only way I could think of to affect the risk of AI from future life would be to create an aligned AGI ourselves before humanity goes extinct!
Hi Michael, thank you very much for your comment.
I was not aware of some of these posts and will definitely look into them, thanks for sharing! I also eagerly await a compilation of crucial questions for longtermists which sounds very interesting and useful.
I definitely agree that I have not given consideration to what moral views re-evolved life would have. This is a big question. One assumption I may have implicitly used but not discussed is that
"While the probability of intelligent life re-evolving may be somewhat soluble and differ b...
Hi, thanks for your questions!
(1) I definitely agree with P1. For P2, would it not be the case that the risk of extinction of humans is at least as great as the risk of extinction of humans and future possible intelligent life, as the latter event is a conjunction that includes the former? Perhaps a second premise could instead be
P2 The best approaches for reducing human existential risk are not necessarily the best approaches for reducing existential risk to humans and all future possible intelligent life
With a conclusion
C We should focus on the best methods of preve...
Hi Carl,
Thank you very much for your comment! I agree with your point on the human extinction risks that 99% is probably not high enough to cause extinction. I think I wanted to provide examples of human extinction events, but should have been more careful about the exact values and situations I described.
On re-evolution after an asteroid impact, my understanding is that although species such as humans eventually evolved after the impact, had humanity existed at the time of the impact it would not have survived, as nearly all land mammals over 25kg went ext...
Hi Eevee, thank you for flagging! There was an error in our system. Applications for EAG SF should be open again now and will close on Feb 1 11:59pm PT as listed on the website. Please contact hello@eaglobal.org if you run into any more issues!