Have you read AIM's (formerly Charity Entrepreneurship) material? They have a book out on starting a non-profit. If you read that and then present this idea, either absorbing its lessons or clearly arguing why this idea is still good despite them, I think that might make it easier for readers here to assess your idea and potentially consider joining. Some more detail on your background would perhaps help too. I am a bit sad that you are getting downvoted as a newcomer - we want people to join and be agentic, which is exactly what you are doing.
This resonates a lot. I’m keen to connect with others who are actively thinking about when it becomes justified to hand off specific parts of their work to AI.
Reading this, it seems like the key discovery wasn’t “Claude is good at critique in general,” but that a particular epistemic function — identifying important conceptual mistakes in a text — crossed a reliability threshold. The significance, as I read it, is that you can now trust Claude roughly like a reasonable colleague for spotting such mistakes, both in your own drafts and in texts you rely on a...
Has anyone seen any of the following?
1 - EA orgs skipping tests/trials on candidates and instead using candidate performance on tests/trials from other EA orgs? The closest I can think of is the “top candidate” tag in the HIP database that some EA orgs send
2 - Have you seen hints of top talent applying to fewer positions due to “test/trial burn-out”? I think this might be especially severe for top talent, as they often get to test/trial stages and might be doing back-to-back tests and trials for weeks or months on end (and for mid-career professionals in ...
Ah, now I see - thanks for clarifying. Yes, historically I do not know how much each setback to nuclear mattered. I can see that, e.g., constantly changing regulation during builds (which I think Isabelle actually mentioned) could pose a significant hurdle for continuing build-out. Here I would defer to other experts like you and Isabelle.
Porting this over to “we might over-regulate AI too”, I am realizing it is actually unclear to me whether people who use the “nuclear is over-regulated” example mean the literal same “historical” thing could ...
Good question. I agree: people in EA who’ve actually worked on nuclear don’t usually claim over-regulation is the only or even dominant driver of the cost/buildout problem.
What I’m reacting to is more the “hot take” version that shows up in EA-adjacent podcasts — often as an analogy when people talk about AI policy: “look at nuclear, it got over-regulated and basically died, so don’t do that to AI.” In that context it’s not argued carefully, it’s just used as a rhetorical example, and (to me) it’s a pretty lossy / misleading compression of what’s going on.
I agree it's a bit lossy and sometimes reflexive (this is what I meant by relying on libertarian priors), but I am still confused about your argument.
Because the ar...
Good question — I think it’s mostly untrue as commonly used. It implies regulation is the main bottleneck, but as the podcast lays out, there are likely much better levers for driving down cost. So it’s both misleading and counterproductive as a talking point, even if you’re broadly pro-nuclear (which I and the podcast guest are).
This is genuinely incredibly impressive — a proof point that a small, dedicated team can create meaningful x-risk reduction impact through "policy" (e.g. if scientific consensus is a precursor to policy action). If so, subsequent progress here may also be relatively cost-effective: compared to stockpiles or hard infrastructure, the marginal public spend to adopt guidance and implement early measures could be low.
Also: I think this is extra impressive because my (anecdotal) experience is that many people in mainstream bio who hear “mirror bio” dismiss it as a non-issue — so shifting scientific consensus here seems like a significant achievement.
I’m pro-nuclear, but the commonly used EA framing of “nuclear is overregulated” seems net negative more often than not. Clearer Thinking’s new nuclear episode is one of the more epistemically rigorous discussions I’ve heard in EA-adjacent spaces (and Founders Pledge has also done nuanced work).
Nuclear is worth pursuing, but we should argue for it clear-eyed.
My read was that a major success was that they seem to have achieved broad initial agreement, even among previously bullish scientists, that we should be extremely cautious about developing the scaffolding of mirror bio, if we develop it at all. I think that is truly remarkable, borderline historic. This is agreement across national borders and scientific disciplines, and the argument they put forward was not watertight - there was no definitive proof that mirror bio would assuredly be catastrophic. So this consensus was built on plausible risk alone, about something skeptics might easily dismiss (and still do) as "sci-fi". It was extremely well pulled off.
I ran this very lightweight poll, and, very crudely (there is probably massive sampling bias), 4 out of 9 EAs residing in the US have considered moving abroad.
Naïve question: do you know if there is data on YouTube's potential to convert viewers into highly engaged EAs who would not otherwise have converted? I think YouTube is worth testing, but if there is little data already, I would be interested to see anything on conversion, or even proxies for it. I know 80k hrs is rigorous, so they probably have some hypothesis for why it can work out, or maybe they have hard evidence.
I would really recommend looking into pre-schools in the Nordics. They have high sickness rates and, importantly, the government pays parents to stay home with sick kids. Even a 5% reduction in absence is worth millions, and the government explicitly asks for solutions to this.
But there is more: anyone can set up a nursery, and the authorities track absence rates across pre-schools (I know, because kids who are immunocompromised get preference in pre-schools with the lowest absence rates). Setting up one's own pre-school is paid for by the state - they...
This warms my heart, thanks for writing Julia! A note from a dad trying to be supportive: I also want to acknowledge the mothers who let dads take care of the kids their own way. While it is not possible to generalize, from having observed dads with children, at least here in Scandinavia, they might do things differently. Letting fathers parent their own way and trusting them makes it much easier for dads to care for children. Someone mentioned interest in taking care of kids - this interest can be increased, in my experience drastically, by letting fathers ta...
To be clear, I think there is absolutely no intention of doing this. EA existed before AI became hot, and many EAs have expressed concerns about the recent, hard pivot towards AI. It seems in part, maybe mostly (?), to be a result of funding priorities. In fact, a feature of EA that hopefully makes it more immune than many impact-focused communities to donor influence (although far from total immunity!) is the value placed on epistemics - decisions and priorities should be argued for clearly and transparently, including why AI should take priority over other cause areas. Glad to have you engage skeptically on this!
Love this framing — in my own EA work I’ve found that leaning into boldness in marketing outperforms caution. Still, I’d be really curious if anyone has data on how coolness affects downstream outcomes — not just reach, but who we attract and how it shapes culture over time.
I sometimes do informal background or reference checks on "semi-influential" people in and around EA. A couple of times I decided not to get too close — nothing dramatic, just enough small signals that stepping back felt wiser. (And to be fair, I had solid alternatives; with fewer options, one might reasonably accept more risk.)
I typically don’t ask for curated references, partly because it feels out of place outside formal hiring and partly because I’m lazy — it’s much quicker to ask a trusted friend ...
Very good point on coming new to EA. Imagine hearing about different cause areas in an intro workshop, then landing here and wondering if it is the Alignment Forum. It might even feel a bit like a bait and switch? If this is a recurring theme for newcomers to EA, it is something that should be looked at. Not sure if anyone is tracking the funnel of onboarding into EA? If so, one might see people being interested initially, then dropping off when they meet a "wall of AI".
I’m skeptical that corporate AI safety commitments work like @Holden Karnofsky suggests. The “cage-free” analogy breaks: one temporary defector can erase ~all progress, unlike with chickens.
I'm less sure about corporate commitments to AI safety than Karnofsky. In the latest 80k hrs podcast episode, Karnofsky uses the cage-free example to argue why it might be effective to push frontier AI companies on safety. I feel the analogy might break in a potentially significant way: in how many companies need to be convinced (see the toy sketch below):
-For cage fre...
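To make the structural difference concrete, here is a toy sketch (the numbers and the 5-company setup are entirely made up, just to illustrate the shape of the two cases): cage-free gains plausibly accumulate company by company, whereas much of the safety benefit may hinge on every frontier lab staying committed.

```python
# Toy contrast, purely illustrative: how benefit might scale with the number
# of committed companies under two stylized assumptions.

def cage_free_benefit(committed: int, total: int) -> float:
    # Welfare gains roughly accumulate with each company that switches.
    return committed / total

def ai_safety_benefit(committed: int, total: int) -> float:
    # Stylized assumption: if even one frontier lab defects, most of the
    # risk reduction is lost; the full benefit needs full participation.
    return 1.0 if committed == total else 0.1 * (committed / total)

TOTAL = 5  # hypothetical number of relevant companies
for committed in range(TOTAL + 1):
    print(committed,
          round(cage_free_benefit(committed, TOTAL), 2),
          round(ai_safety_benefit(committed, TOTAL), 2))
```

Under these made-up assumptions, convincing 4 of 5 companies gets you 80% of the cage-free benefit but only a small fraction of the safety benefit, which is the asymmetry I am gesturing at.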
I like the idea of just accepting it as moral imperfection rather than rationalizing it as charity — thanks for challenging me! One benefit of framing it as imperfection is that it helps normalize moral imperfection, which might actually be net positive for the most dedicated altruists, since it could help prevent burnout or other mental strain.
Still, I’m not completely decided. I’m unclear about cases where someone needs to use their runway:
A. They might have chosen not to build runway and instead donated effectively, and then later, when needing runway, ...
Thanks for posting this — I came to similar conclusions during a recent strategy sprint for a small org transitioning off major-donor dependence.
One thing I tried to push further was: how can small orgs actually operationalize this tradeoff? A few concrete ideas that might help others:
Just to add my personal experience: if you might be planning direct work (especially entrepreneurship) and/or might want to have children, a personal runway has served me well. Not sure if this is stretching "giving 10%" too far, but you could mentally consider it donated, and if you don't end up needing it later, you can donate it then. I think at least 12 months of runway at your anticipated future expenses might be the right level (so not student-level expenses, but, if you might want children, accounting for all related expenses). Another situation that could ...
Yesssss!!!! I am trying it right away. I also think that for many here, timers are useful for setting limits. Like capping your work week at 50 or at most 60 hours (or less if you have caretaking responsibilities). That way you don't let guilt push you into unhealthy territory. That's how I use timers. They are also great for parents who are both ambitious, to make sure one does not get a career advantage just by feeling more nervous or something.
I agree. Reading your comment made me think that it might be interesting — even if just as a small experiment — to map out which historical figures we feel struck the ~right balance between ambition and caution.
I don’t know if it would reveal much, but perhaps reading about a few such people could help me (and maybe others) better calibrate our own mix of drive and risk aversion. I find it easier to internalize these balances through real people and stories than through abstract arguments. And that kind of reflection could, in perhaps only a small way, help prevent future crises of judgment like FTX.
Perhaps mentioned elsewhere here, but if we look for precedents of people doing an enormous amount of good (I can only think of Stanislav Petrov and people making big steps in curing disease), these people did not, I think, act recklessly. It seems more like they persistently applied themselves to a problem, without forcing an outcome, and aligned a lot with others (like those eradicating smallpox). So if one wants a hero mindset, it might be good to emulate actual heroes who we both think did a lot of good and who also reduced the risk of their actions.
I think there are examples supporting many different approaches, and it depends immensely on what you're trying to do, the levers available to you, and the surrounding context. E.g. in the more bold and audacious, less cooperative direction, Chiune Sugihara or Oskar Schindler come to mind. Petrov doesn't seem like a clear example in the "non-reckless" direction, and I'd put Arkhipov in a similar boat: they both acted rapidly under uncertainty in a way the people around them disagreed with, and took responsibility for a whole big situation when it probably would have been very easy to tell themselves it wasn't their job to do anything other than obey orders and go with the group.
I am really sorry to hear that it got this bad. I must admit I did not actually consider the diversity of our community's experiences when crafting this poll, and instead wrote it quickly and knee-jerk from a white, het-cis, male perspective. You point out that the situation might be much worse for people affected more directly by these aspects, and might also extend to reproductive rights and more. I really hope you will soon find a place where you are safe, and I feel a bit inadequate for not having capacity to do more than write these words.
A proposal for an "Anonymity Mediator" ("AM") in EA. This would be a person whose main role would be to strip identity from information. For example, if person A has information about an EA (person B) enabling dangerous work at a big AI lab, the AM would be someone person A could connect with, giving extremely minimal information in a highly secure way (ideally in person with no devices). The AM would then be able to alert the people who perhaps should know, with minimal chance of person A's identity being revealed. I would love to see a post proposing such a role and, if it seems helpful (for community issues, information security, etc.), maybe a way to make progress on funding and finding such a person.
A combined guide for EA talent to move to stable democracies, and a call to action for EA hubs in such countries to explore facilitating such moves. I know there are people working on making critical parts of the EA ecosystem less US-centric. It might be that I am missing other work in this direction, but I think this is a good time for EA hubs in e.g. Switzerland and the Nordics to see if they can help make EA more resilient when it might be needed in possibly rough times ahead. Perhaps also preparing for sudden influxes of people, or facilitating more rapid support in case things start to change quickly.
(Let me know if this is spamming) Non-US EA hubs might be interested in especially US talent considering moving out of the US - small, imperfect poll here: https://forum.effectivealtruism.org/posts/mEXfYXDFEhsEPy5yN/poll-have-your-views-on-moving-abroad-changed-in-the-last-12
Since you are pursuing E2G, you might actually want to let your job search dictate your choice of city - just an idea. There are several good contenders, and flights between cities in Europe are cheap. Berlin and Stockholm have good tech scenes if you are thinking of joining a start-up early. Otherwise you might just want to look for jobs across the top EA cities and pick the one where you find the highest wage. Depending on your AI timelines, you might or might not want to consider career progression - something like how many CS jobs there are in total in the city, and whether any large tech companies with high wages have HQs or large offices there.
U.S. citizens or green card/work permit holders – started considering moving abroad in the last 12–24 months
Community > Epistemics
Community is more important to EA than epistemics. What drives EA's greater impact isn’t just reasoning, but collaboration. Twenty “90% smart” people are much more likely to identify more impactful interventions than two “100% smart” people.
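A toy way to see this (all hit rates below are hypothetical, and assuming people search independently, which is itself a big simplification):

```python
# Toy model, made-up numbers: chance that at least one person in a group
# spots a given highly impactful intervention, assuming independent searches.
p_hit_90 = 0.30   # hypothetical hit rate for a "90% smart" person
p_hit_100 = 0.50  # hypothetical hit rate for a "100% smart" person

p_group_20 = 1 - (1 - p_hit_90) ** 20   # ~0.999
p_group_2 = 1 - (1 - p_hit_100) ** 2    # 0.750

print(f"20 people at 90%:  {p_group_20:.3f}")
print(f" 2 people at 100%: {p_group_2:.3f}")
```

Real collaboration is not independent parallel search, so treat this only as a gesture at why headcount plus coordination can beat marginal individual smarts.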
I may be biased by how I found EA—working alone on “finding the most impactful work” before stumbling into the EA community—but this is exactly the point: EA isn’t unique for asking, “How can I use reason to find the most impactful interventions?” Others ask that too. EA is unique because it gathers those people and facilitates funding and coordination, enabling far more careful and comprehensive work.
I'm not so sure: there are quite a lot of groups that gather together, but not as many that trade off the community side in favour of epistemics (I imagine EA could be much bigger if it focused more on climate or other less neglected areas).
I also wouldn't use the example of 20 vs 2; with 10,000 people with average epistemics vs 1,000 with better epistemics, I'd predict the better-reasoning group would have more impact.
I have not contemplated deeply the meaning of Rethink Priorities' findings on cross-cause prioritization, but my perhaps shallow understanding was that despite a somewhat high likelihood of AI catastrophe arriving quite soon, “traditional” animal welfare looked good in expectation. I think the point was something like: despite quite high chances of AI catastrophe, the even higher chance (but far from 100%) of survival means that in expectation animal welfare looks very good. So while it is not guaranteed animal welfare interventions will pay off due to an interven...
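A minimal sketch of that expected-value logic, with entirely made-up numbers (the actual Rethink Priorities model is far more involved):

```python
# Crude expected-value sketch, hypothetical numbers only - not RP's model.
p_ai_catastrophe = 0.35            # hypothetical chance a catastrophe preempts the payoff
p_survival = 1 - p_ai_catastrophe

value_if_world_continues = 100.0   # hypothetical units of good from an animal welfare grant

expected_value = p_survival * value_if_world_continues
print(expected_value)  # 65.0 - most of the value remains despite sizable catastrophe risk
```

So even a quite high catastrophe probability only discounts, rather than eliminates, the expected payoff of interventions that need the world to continue.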
One non-expert idea here is to assume that all the building blocks of mirror bacteria exist - what would it take then to create effective mirror phages? Is there any way we can make progress on this already, without those building blocks, but knowing roughly what they are? And in a defense-favoring way? Again, I would really align with other biosec folks on this at OP, Blueprint and MBDF, as I feel very hesitant about unilateral actions. But something like this might have legs, especially if some plausible work can be outlined that can be done with current techniques.
Hi Nnaemeka, yeah I totally agree about not doing anything that could potentially advance the creation of dangerous mirror organisms. I am commenting just to reiterate what I said about "defense-favoring" - I know little of microbiology, but thought I would mention it just in case there might be some way to very lightly modify an existing non-mirror phage to "hunt and kill" mirror microbes (e.g. just altering their "tracking" and "ingestion" systems). This is probably an incredibly naive idea, but I thought I would put it out there, as there is a whole chapter on phages ...
I know little of microbiology, but I know there is some focus on mirror bacteria. One possible pivot that could attract funding would be to look at whether phages can be made to track and consume mirror bacteria. This is a super speculative idea, but I think there might be some funding for defenses against mirror life. Perhaps you have already looked at the detailed report on mirror life published at the end of last year (my non-expert read was that phages were believed not to work - but maybe it is possible to make "mirror phages" in a defense-favoring way)?
One point I have raised earlier: if one is worried about neocolonialism, reducing the risk from powerful technology might look like a better option. It is clear that the global south is bearing a disproportionate burden from fossil fuel burning by rich nations. Similarly, misuse or accidents in nuclear, biotechnology and/or AI might also cause damage to people who had little say in how these technologies were rolled out. Nuclear winter especially seems like something that would disproportionately affect poor people, but I think AI safety and biosecurity are likely candidates for lowering the risk of perpetuating colonial dynamics as well.
As Brad points out, even now, and with some (high?) likelihood in the near future, EA will be begging for people to start new things. So please disregard downvotes. Instead, if you think you can pull this off and have credentials, just take tips like mine, Brad's and others', read downvotes as "not ready yet", and do not interpret them as "a project similar to this is not worthwhile". There are tons of people right now working on starting new things, and this will only accelerate as the need for it is large.
And criticism is on the EA community if we make p...