We may be running multiple smaller cohorts rather than one big one, if that's what maximizes the ability of strong candidates to participate.
The single most important factor in deciding the timing is the window in which strong candidates are available, and the target size for the cohort is small enough (5-20 depending on strength of applicants) that the availability of a single applicant is enough to sway the decision. It's specifically cases like yours that we're intending to accommodate. Please apply!
A small update on each of these project ideas, for the end of 2025:
That makes sense; I don't want to be overly fussy if it was getting most things right. The issue is that it's not very helpful if the tool mostly recognizes true facts as true but mistakes some true facts for false ones, while failing to flag a significant number of genuinely incorrect facts. Clicking through a bunch of flags, I saw almost none that I thought necessitated an edit.
I saw so many people who wanted a “job in EA”. They wanted to do the good directly. Have they really thought through the bitter truth? Why do you believe you are uniquely good at an EA job? Why ignore the simple premise of earning to give?
I think there are a large number of EAs who earn to give and spend their time focusing on their career rather than reading another 5,000-word forum article on shrimp or going to EA meetups. This is probably the right move if the goal is to earn as much as possible.
People who want "EA jobs" are more likely to be involved in the forum and in community events.
I'm looking now at the Fact Check. It did verify most of the claims it investigated on your post as correct, but not all (almost no posts get everything verified, especially since the checker's error rate is significant).
It seems like with chickens/shrimp it got a bit confused by numbers killed vs. numbers alive at any one time or something.
In the case of ICAWs, it looked like it did a short search via Perplexity and didn't find anything interesting. The official sources claim they don't use aggressive tactics, but a smart agent would have realized it needed to search more. I think getting this one right would have involved a few more searches, meaning increased costs. There's definitely some tinkering/improvement to do here.
Thanks for the feedback!
I took a quick look at this. I largely agree there were some incorrect checks.
It seems like these specific issues were mostly from the Fallacy Check? That one is definitely too aggressive (in addition to having limited context), so I'll work on tuning it down. Note that you can choose which evaluators to run on each post, so for now you might want to just skip that one.
Interesting idea.
As we switch to wind/solar, we can get the same energy services with less primary energy, by something like a factor of 2.
We’re a factor ~500 too small to be type I.
- Today: 0.3 VPP
- Type I: 40 VPP
But 40 is only ~130× 0.3.
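Making that arithmetic explicit (a back-of-the-envelope sketch using the post's own figures, taking "VPP" as whatever unit the post defines, and assuming the factor-of-2 efficiency gain applies across the whole gap):

\[
\frac{40\ \text{VPP}}{0.3\ \text{VPP}} \approx 133,
\qquad
\frac{133}{2} \approx 67
\]

So on these numbers the gap to Type I looks like roughly 130× in primary-energy terms, or around 70× in energy-service terms, rather than ~500×.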
There is some related discussion here about distribution.
Incidentally, ‘flipping non-EA jobs into EA jobs’ and ‘creating EA jobs’ both seem much more impactful than ‘taking EA jobs’. That could be e.g. taking an academic position that otherwise wouldn’t have been doing much and using it to do awesome research / outreach that others can build on, or starting an EA-aligned org with funding from non-EA sources, like VCs.
(excerpt from https://lydianottingham.substack.com/p/a-rapid-response-to-celeste-re-e2g)
Some good news since this post was written a few years ago: the usage of cleaner fish in Norway has declined from a peak of 60 million in 2019 to 24 million in 2024.[1] From what I've read this seems to be due to both pressure from the media and Norwegian authorities[2] and also growing use of other methods like laser delousing, which showed positive results in a recent study.[3] (I wasn't able to tell how important each factor was in causing this decline.)
More good news is that the country with the second largest salmon industry, Chile, has...
I love this idea! I just took it for a spin, and the quality of the feedback isn't yet at a point where I would find it very useful. My sense is that it's limited by the quality of the agents rather than anything about the design of the app, though maybe changes in the scaffold could help.
Most of the critiques were myopic, such as:
[sorry I’m late to this thread]
@William_MacAskill, I’m curious which (if any) of the following is your position?
1.
“I agree with Wei that an approach of ‘point AI towards these problems’ and ‘listen to the AI-results that are being produced’ has a real (>10%? >50%?) chance of ending in moral catastrophe (because ‘aligned’ AIs will end up (unintentionally) corrupting human values or otherwise leading us into incorrect conclusions).
And if we were living in a sane world, then we’d pause AI development for decades, alongside probably engagi...
Wanted to bring this comment thread out to ask whether there's a good list of AI safety papers/blog posts/URLs anywhere for this?
(I think local digital storage in many locations probably makes more sense than paper but also why not both)
Perhaps the main downside is that people may overuse the feature, and it encourages spending time on small comments, whereas the current system nudges people towards leaving fewer, more substantive, less nit-picky comments? Not sure if this has been an issue on LW; I don't read it as much.
Executive summary: The author argues that donating to the Berkeley Genomics Project is justified because accelerating safe, beneficial reprogenetics could substantially reduce disease, amplify human intelligence, and lower AI existential risk, and the project targets neglected medium-term technical and social gaps with early field-building traction despite high uncertainty.
Key points:
Executive summary: Drawing on personal experience as a London-based Research Manager at MATS in 2025, the author reflects on research management as a generalist, service-oriented role combining scholar support, mentoring enablement, people management, and internal projects, concluding that it is highly rewarding and impactful despite trade-offs that ultimately motivated a transition to AISI.
Key points:
Executive summary: This report maps the current landscape of AI innovation in aquaculture, finding that commercially available AI tools are already widespread, concentrated in stock and growth management for high-value species like salmon and shrimp, and likely to become increasingly embedded in farm operations despite unclear implications for animal welfare.
Key points:
Executive summary: The post argues, confidently and polemically, that earning to give is an underrated and often superior way for most people to do good, because large, sustained donations typically outweigh the impact of personal lifestyle changes or pursuing “sexy” direct-impact jobs.
Key points:
I'd say it's a medium-sized deal. Academics can often propose ideas and show that they work on smaller (e.g. 7B) models. However, it then requires someone with a larger compute budget to like the idea and results and implement it at a larger scale.
There are some areas where access to compute is less important, like mech interp, red-teaming, creating benchmarks, or more theoretical areas of AI research. Areas are more amenable to academic research if they don't require training frontier models. E.g. inference or small fine-tuning runs on frontier models are actua...
Question: Has anyone here applied to a role they found on the EA Opportunities board? (Attaching an image of what it used to look like + what it looks like now).
I’m curious if you ended up getting the role or not, and would really appreciate hearing either way. I’m trying to get a sense of how many applications and placements the board is leading to. Happy to DM if that’s easier!
Love this!
I'm a big proponent of using love of humanity as a motivator. It's true that guilt and/or rationalism can be motivating, but I've found that helping people because people are what make your life worth living is a much healthier approach, and an even more motivating (more effective) one.
You nail the sentiment in your post on Life in a Day. Of course it would be nice if our biological evolution had led us to be highly motivated by numbers on a spreadsheet, but operating on the hardware we have, the feeling Life in a Day gives is massively more motivatin...
This is a good take. 80k are good at it, BlueDot too, and GWWC have started doing good things as well.
I think national orgs like EA Netherlands are well-positioned to do more, but we're only just waking up to this and are learning how best to allocate a portion of the EUR 30-40k in unrestricted funding we get from CEA. At EAN we've started working with Amplify and a marketing agency and have had great results (3x'd our intro programme completions and increased our EAGx attendance by 35%). Would like to do more of this in the future if we can find the money/re-allocate more of our funds.
Wrapped sort of feels like a roundabout way to give myself a compliment lol.
I didn't know who my most-read authors would be though - thanks for all the great posts @Vasco Grilo🔸, @Bentham's Bulldog, @Lizka !
I'm also a top 1% @Lizka reader, in part because I read the Forum norms doc so often. Lizka's great work on the Forum is still paying dividends - nice one!
...Up until recently, there was no name for the cluster of views that involved concern about ensuring the long-run future goes as well as possible. The most common language to refer to this cluster of views was just to say something like ‘people interested in x-risk reduction’. There are a few reasons why this terminology isn’t ideal [...]
For these reasons, and with Toby Ord’s in-progress book on existential risk providing urgency, Toby and Joe Carlsmith started leading discussions about whether there were better terms to use. In October 2017, I pro
EA Global NYC will be taking place 16-18 Oct 2026 at the Sheraton Times Square. Applications for NYC, and all 2026 EAGs, are open now!
After the success of last year's event, our first EAG in NYC and our largest US EAG in years, we're excited to return and build on that momentum. For more information visit our website and contact hello@eaglobal.org with any questions.
Just to clarify, is the 8-week period the same for all participants? And if so, will you still accept some applications after the date has been decided?
I might apply, but I could only participate if the program were organized in July-August. Given that it could occur any time between February and August, I probably won't apply, since there's only about a 1/7 chance it will start in July.
Thanks for sharing this. It honestly makes me a bit sad to read, but in a thoughtful way. I still want to hope there is room to influence this over time, even if it's slow and uneven.
I really appreciate that you found a way to keep having impact “through the side door,” and to stay engaged with the community rather than fully disengaging. That feels important.
I’d genuinely love to connect, compare notes, and trade ideas or intros 🙏
Agree entirely (and I have MORE doubts, even though I have been a vegan for almost 30 years).
It is indeed a very narrow and demanding identity (and even more so when other progressive issues are presented as part and parcel of the vegan lifestyle).
It's noteworthy that the founders of the Vegan Society in the 1940s welcomed everyone who was looking in the same direction, even if they weren't "practitioners".
Strong agree. I think some of that resistance comes from past comms “dramas” — for example around earning to give. It was pushed quite hard at one point, and that ended up shaping the public perception as if that’s the EA message, which understandably made people more cautious afterward.
At the same time, I find it interesting that initiatives like School for Moral Ambition are now communicating very similar underlying ideas, but in a way that feels much more accessible to “normal” people — and they haven’t faced anything like the same backlash.
To me that suggests it’s not that these ideas can’t be communicated broadly, but that how we frame and translate them really matters.
Simón, thank you for opening this much-needed conversation.
I participate in EA Madrid and am studying with BlueDot Impact, and one of the first barriers I encountered was exactly this: the near-total absence of resources in Spanish for those who want to go deeper than the basics.
I completely agree that it's not just about translating, but about generating original content that speaks to the realities of our own contexts. When I try to explain AI safety or cost-effectiveness to colleagues in Madrid or to my network in Colombia, I constantly find myself transla...
This resonates deeply, especially the line: "Organizations without clear stories hit friction, even when doing excellent work."
I've seen this in my own career transition into EA: I had the skills and the commitment, but until I could articulate why my background in international partnerships and data operations connected to AI safety and global health work, I struggled to make others see the fit.
Your framework around Mission → ToC → OKRs → KPIs → Team is brilliant because it shows that organizational storytelling isn't just "marketing" – it's strategic cla...
Agreed.
One data point: at a recent EA community retreat I organized for 65 people in France in 2025 (not a "premium" retreat), the cost per participant was 156€. This includes my time as well as the financial contributions from participants.
I tend to see these types of events as complementary. I think we should not treat their various outcomes as fungible. You get results of different, non-tradeable kinds. In particular:
I'm not sure exactly, but ALLFED and GCRI have had to shrink, and ORCG, Good Ancestors, Global Shield, EA Hotel, Institute for Law & AI (name change from Legal Priorities Project), etc have had to pivot to approximately all AI work. SFF is now almost all AI.
I hope that moral progress on animal rights/animal welfare will take much less than 1,000 years to achieve a transformative change, but I empathize with your disheartened feeling about how slow progress has been. Something taking centuries to happen is slow by human (or animal) standards but relatively fast within the timescales that longtermism often thinks about.
The only intervention discussed in relation to the far future at that first link is existential risk mitigation, which indeed has been a topic discussed within the EA community for a long time. My point is that if such discussions were happening as early as 2013 and, indeed, even earlier than that, and even before effective altruism existed, then that part of longtermism is not a new idea. (And none of the longtermist interventions that have been proposed other than those relating to existential risk are novel, realistic, important, and genuinely motivated by longtermism.) Whether people care if longtermism is a new idea or not is, I guess, another matter.
Super cool! A bit hectic, and I substantively disagree with one of the "fallacies" the fallacy evaluator flagged on this post, but I'll definitely be using this going forward.
Thanks for the highlight! Yeah, I would love better infrastructure for trying to really figure out what the best uses of money are. I don't think it has to be as formal/quantitative as GiveWell. To quote myself from a recent comment (bolding added):
...At some level, implicitly ranking charities [eg by donating to one and not another] is kind of an insane thing for an individual to do - not in an anti-EA way (you can do way better than vibes/guessing randomly) but in a "there must be better mechanisms/institutions for outsourcing donation advice than GiveWell an
I agree with your first paragraph (and I think we probably agree on a lot!), but in your second paragraph, you link to a Nick Bostrom paper from 2003, which is 14 years before the term "longtermism" was coined.
I think, independently from anything to do with the term "longtermism", there is plenty you could criticize in Bostrom's work, such as being overly complicated or outlandish, despite there being a core of truth in there somewhere.
But that's a point about Bostrom's work that long predates the term "longtermism", not a point about whether coining and promoting that term was a good idea or not.
I think the fact that the term didn't add anything new is very bad, because it came with a great cost. When you create a new set of jargon for an old idea, you look naive and self-important. The EA community could have simply used framing that people already agreed with; instead, they created a new term and field that we had to sell people on.
Discussions of "the loss of potential human lives in our own galactic supercluster is at least ~10^46 per century of delayed colonization" were elaborate and off-putting, when their only conclusions were the same old obvi...
My biggest takeaway from the comments so far is that many/most of the commenters don't care whether longtermism is a novel idea, or at least care about that much less than I do. I never really thought about that before — I never really thought that would be the response.
I guess it's fine to not care about that. The novelty (or lack thereof) of longtermism matters to me because it sure seems like a lot of people in EA have been talking and acting like it's a novel idea. I care about "truth in advertising" even as I also care about whether something is a goo...
While I find much of this post to be plausible, I’m not sure Ollie’s post supports your conclusions.
Ollie’s post is evaluating a set of retreats which averaged a cost of $1,500 per person. As commenters on the post noted, this seems very high. (I recall reading that low-end EAG costs are in the same ballpark.) For the one retreat I’m aware of, costs were 6-7x less. (This doesn’t include CEA staff costs, but those shouldn’t be able to make up the gap.)
Additionally, you write about how retreats might have lower outcomes due to a lack of scale. While I’m s...
One of the more excellent comments I've ever read on the EA Forum. Perceptive and nimbly expressed. Thank you.
people 100 years ago that did boring things focused on the current world did more for us than people dreaming of post-work utopias.
Very well said!
To that extent, the focus on x-risk seems quite reasonable: still existing is something we actually can reasonably believe will be valued by humans in a million years' time.
I totally agree. To be clear, I support mitigation of existential risks, global catastrophic risks, and all sorts of low-probab...
Wow, this makes me feel old, haha! (Feeling old feels much better than I thought it would. It's good to be alive.)
There was a lot of scholarship on existential risks and global catastrophic risks going back to the 2000s. There was Nick Bostrom and the Future of Humanity Institute at Oxford, the Global Catastrophic Risks Conference (e.g. I love this talk from the 2008 conference), the Global Catastrophic Risks anthology published in 2008, and so on. So, existential risk/global catastrophic risk was an idea about which there had already been a lot of study e...
I mentioned that you often see journalists or other people not intimately acquainted with effective altruism conflate ideas like longtermism and transhumanism (or related ideas about futuristic technologies). This is a forgivable mistake because people in effective altruism often conflate them too.
If you think superhuman AGI is 90% likely within 30 years, or whatever, then obviously that will impact everyone alive on Earth today who is lucky (or unlucky) enough to live until it arrives, plus all the children who will be born between now and then. Longtermi...
I'm not especially familiar with the history - I came to EA after the term "longtermism" was coined so that's just always been the vocabulary for me. But you seem to be equating an idea being chronologically old with it already being well studied and explored and the low hanging fruit having been picked. You seem to think that old -> not neglected. And that does not follow. I don't know how old the idea of longtermism is. I don't particularly care. It is certainly older than the word. But it does seem to be pretty much completely neglected outside EA, as well as important and, at least with regard to x-risks, tractable. That makes it an important EA cause area.
Whether society ends up spending more money on asteroid defense or, possibly, more money on monitoring large volcanoes is orders of magnitude more important than whether people in the EA community (or outside of it) understand the intellectual lineage of these ideas and how novel or non-novel they are. I don't know if that's exactly what you were saying, but I'm happy to concede that point anyway.
To be clear, NASA's NEO Surveyor mission is one of the things I'm most excited about in the world. It makes me feel so happy thinking about it. And ...
I agree that the scholarship of Bostrom and others starting in the 2000s on existential risk and global catastrophic risk, particularly taking into account the moral value of the far future, does seem novel, and does also seem actionable and important, in that it might, for example, make us re-do a back-of-the-envelope calculation on the expected value of money spent on asteroid defense and motivate us to spend 2x more (or something like that).
As someone who was paying attention to this scholarship long before anyone was talking about "longtermism", I was ...
If you're saying that longtermism is not a novel idea, then I think we might agree.
Everything is relative to expectations. I tried to make that clear in the post, but let me try again. I think if something is pitched as a new idea, then it should be a new idea. If it's not a new idea, that should be made more clear. The kind of talk and activity I've observed around "longtermism" is incongruent with the notion that it's an idea that's at least decades and quite possibly many centuries old, about which much, if not most, if not all, the low-hanging fruit ha...
Yeah you literally wrote:
"Under my Christian worldview, nothing I have is really 'mine' anyway, and part of being a good human is to pass on what I've been handed, and even better multiply it if possible."
I think how I see it feels a bit different, because I see money more as a tool to use than as a resource to share. I think it should be used to help improve the lives of others, but it does importantly feel that it's my responsibility that mine gets used that way. Not sure if that makes sense.
Even to those otherwise sympathetic to SFE, its orientation toward subtraction can be demotivating.
It would not be wrong to assert that the entire process of civilization consists of controlling innate human aggression and, therefore, that all moral efforts to ultimately improve society have a subtractive structure: do not aggress, do not harm, do not tolerate suffering.
Compassionate religious philosophies have thus attempted to develop "positive" abstract concepts capable of emotionally engaging the believer in an ideology of altruism and benevolenc...
Yes. One of the Four Focus Areas of Effective Altruism (2013) was "The Long-Term Future", and "Far future-focused EAs" are on the map of Bay Area memespace (2013). This social and ideological cluster existed long before this exact name was coined to refer to it.
Strong agree from FarmKind’s perspective. An equal bugbear for me is that, to the extent EA orgs focus on comms, they’re insufficiently focused on how to communicate to non-EAs. There seems to be a resistance to confronting the fact that to grow we need to appeal to normal people, and that means speaking to them in the way that works for them, rather than what would work for us.
I believe we can apply these frameworks more because our country faces numerous competing needs, yet has limited public funds and capacity. Using scale, neglectedness, and solvability helps government and organisations prioritise programmes that deliver the greatest economic and social return, instead of spreading resources too thinly or relying only on intuition or political parties.
This reading relates to our economy because resources are limited in our country. If we choose the most effective programs, such as health and social support, we can help more people and reduce poverty more quickly. Some actions help many more people than others, so thinking carefully about where support goes can make a bigger difference.
Here's the Unjournal evaluation package
A version of this work has been published in the International Journal of Forecasting under the title "Subjective-probability forecasts of existential risk: Initial results from a hybrid persuasion-forecasting tournament"
We're working to track our impact on evaluated research (see coda.io/d/Unjournal-...). So we asked Claude 4.5 to consider the differences across paper versions, how they related to the Unjournal evaluators' suggestions, and whether this was likely to have been causal.
See Claude's report here ...
It does look like most studies suggested small or no effects after less than 10 meters away, but I wonder how much they focused on eggs, larvae and zooplankton, which are plausibly more sensitive. For example, from this study (discussion):
...Experimental air gun signal exposure decreased zooplankton abundance when compared with controls, as measured by sonar (~3–4 dB drop within 15–30 min) and net tows (median 64% decrease within 1 h), and caused a two- to threefold increase in dead adult and larval zooplankton. Impacts were observed out to the maximum 1.2 km
I've found it useful both for posts and for considering research and evaluations of research for Unjournal, with some limitations of course.
- The interface can be a little bit overwhelming, as it reports so many different outputs at the same time, some of them overlapping
+ but it's already pretty usable, and I expect this to improve.
+ it's an agent-based approach, so as LLMs improve you can swap in the new ones.
I'd love to see some experiments with directly integrating this into the EA forum or LessWrong in some ways, e.g. automatically doin...
For Inkhaven, I wrote 30 posts in 30 days. Most of them are not particularly related to EA, though a few of them were. I recently wrote some reflections that @Vasco Grilo🔸 thought it might be a good idea to share on the EA Forum; I don't want to be too self-promotional, so I'm splitting the difference and posting just a shortform link here:
https://linch.substack.com/p/30-posts-in-30-days
The most EA-relevant posts are probably
https://inchpin.substack.com/p/skip-phase-3
https://inchpin.substack.com/p/aging-has-no-root-cause
https://inchpin.substack.com/p/legi...