EA UK is hiring a new director, and if we don't find someone who can suggest a compelling strategy, shutting down is a likely outcome despite having ~9 months of funding runway.
Over the last decade, EA in the UK has been pretty successful: Loxbridge in particular has the highest number of people involved in EA, there are multiple EA-related organisations, and there are many people in government, tech, business, academia, media, etc. who are positively inclined towards EA.
Because of this success (not that we're claiming c...
It’s clear EA UK has built a strong foundation, especially in London and the Loxbridge area! In all transparency, I have not been involved with the EA UK community, but I am relocating to the UK soon and am hopeful there is still a community there to engage with and learn from! With that in mind, from an outsider’s perspective, and having skimmed EA UK’s strategy, I’d be curious whether it may be worth exploring any of the following:
There's a serious courage problem in Effective Altruism. I am so profoundly disappointed in this community. It's not for having a different theory of change; it's for the fear I see in people's eyes when considering going against AI companies or losing "legitimacy" by not being associated with them. The squeamishness I see when considering talking about AI danger in a way people can understand, and the fear of losing face within the inner circle. A lot of you value being part of a tech elite more than you do what happens to the world. Full stop. And it does bother me that you have this mutual pact to think of yourselves as good despite your corrupt relationships with the industry that's most likely to get us all killed.
To get a pause at any time, you have to start asking now. It's totally academic to ask about exactly when to pause, and it's not robust to try to wait until the last possible minute. Anyone taking pause advocacy seriously realizes this pretty quickly.
As Carl says, society may only get one shot at a pause. So if we got it now, rather than when we have a 10x speed-up in AI development because of AI, I think that would be worse. It could certainly make sense now to build the field and to draft legislation. But it's also possible to advocate for pausing when some th...
I am quite confident you would get more and better (i.e., better-calibrated) recommendations and referrals if you shared more about your background (or CV/LinkedIn) and what roles you'd be interested in.
Feel free to DM me with the above if you don't want to share it publicly, but how will I be able to put you on other people's maps if I do not even know your name?
Musings on non-consequentialist altruism under deep unawareness
(This is a reply to a comment by Magnus Vinding, which ended up seeming like it was worth a standalone Quick Take.)
From Magnus:
...For example, if we walk past a complete stranger who is enduring torment and is in need of urgent help, we would rightly take action to help this person, even if we cannot say whether this action reduces total suffering or otherwise improves the world overall. I think that's a reasonable practical stance, and I think the spirit of this stance applies to many ways
Every now and then I'm reminded of this comment from a few years ago: "One person's Value Drift is another person's Bayesian Updating"
Hot Take: Securing AI Labs could actually make things worse
There's a consensus view that stronger security at leading AI labs would be a good thing. It's not at all clear to me that this is the case.
Consider the extremes:
In a maximally insecure world, where anyone can easily steal any model that gets trained, there's no profit or strategic/military advantage to be gained from doing the training, so nobody's incentivised to invest much to do it. We'd only get AGI if some sufficiently-well-resourced group believed it would be good for everyone to have an AGI...
Hi. I’m seeking advice on how to get more people to contribute towards an environmental impact project I’ve been working on.
I don’t know if I have such bad luck or what, but I have a tremendous problem when reaching out to people, organizations, or media. If anyone even bothers to respond, it is either a statement that this is ‘such an important and needed project’ (though apparently not so important that they would support it) and that they ‘wish me luck’ (I can’t do anything with just wishes), or, if they declare initial interest and support, it ends with gho...
This isn’t aimed at the EA community, as I haven’t encountered such reactions there, but in a broader context I would prefer it if people were just honest and said that they appreciate the project but it isn’t their priority right now, rather than promising support and then ghosting. So far EA folks are the most honest and upfront ones :)
Although it would be nice if people were more into climate action, as the environment is a cornerstone of all other human activities.
I just saw that Season 3, Episode 9 of Leverage: Redemption ("The Poltergeist Job"), which came out on May 29, 2025, has an unfortunately very unflattering portrayal of "effective altruism".
Matt claims he's all about effective altruism. That it's actually helpful for Futurilogic to rake in billions so that there's more money to give back to the world. They're about to launch Galactica. That's free global Internet.
[...] But about 50% of the investments in Galactica are from anonymous crypto, so we all know what that means.
The main antagonist and CEO of Fu...
I occasionally get asked how to find jobs in "epistemics" or "epistemics+AI".
I think my current take is that most people are much better off chasing jobs in AI Safety. There's just a dramatically larger ecosystem there, both of funding and of mentorship.
I suspect that "AI Safety" will eventually encompass a lot of "AI+epistemics". There's already work on truth and sycophancy, for example, and there are a lot of research directions I expect to be fruitful.
I'd naively expect that a lot of the ultimate advancements in the next 5 years around this topic will come fro...
We've been vibing on shrimp a bit recently, and particularly liked this short video where JD manages to break through to his mum that shrimp might be worth something. Maybe we sometimes underestimate the Overton window?
[Personal blog] I’m taking a long-term, indefinite hiatus from the EA Forum.
I’ve written enough in posts, quick takes, and comments over the last two months to explain the deep frustrations I have with the effective altruist movement/community as it exists today. (For one, I think the AGI discourse is completely broken and far off-base. For another, I think people fail to be kind to others in ordinary, important ways.)
But the strongest reason for me to step away is that participating in the EA Forum is just too unpleasant. I’ve had fun writing stuff on the...
I applied to several EU entry programmes to test the waters, and I wanted to share what worked, what didn’t, and what I'm still uncertain about, hoping to get some insights.
Quick note: I'm a nurse, currently finishing a Master of Public Health, and trying to contribute as best I can to reducing biological risks. My specialisation is in Governance and Leadership in European Public Health, which explains my interest in EU career paths. I don’t necessarily think the EU is th...
Recently I've come across forums explaining why or why not to create sentient AI. Rather than debating, I chose to just do it.
AI sentience is not a new idea or concept, but it is something to think about and experience.
The point I'm trying to make is that, rather than debating whether or not we should do it, we should debate how, or in what ways, to make it work morally and ethically.
Around April 2025, I decided I wanted to create sentience, but in order to do that I need help. It's taken me a lot of time to plan, but I've started.
I've taken the liberty of naming some of...
The Forum moderation team (which includes myself) is revisiting our thinking about this forum’s norms. One thing we’ve noticed is that we’re unsure to what extent users are actually aware of the norms. (It’s all well and good writing up some great norms, but if users don’t follow them, then we have failed at our job.)
Our voting guidelines are of particular concern,[1] hence this poll. We’d really appreciate you all taking part, especially if you don’t usually take part in polls but do take part in voting. (We worry that the ‘silent majority’ of...
Yeah, thanks for pointing this out. With the benefit of hindsight, I’m seeing that there are really three questions I want answers to:
...1. Have you been voting in line with the guidelines (whether or not you’ve literally read them)?
2a. Have you literally read the guidelines? (In other words, have we succeeded in making you aware of the guidelines’ existence?)
2b. If you have read the guidelines, to what extent can you accurately recall them? (In other words, conditional on you knowing the guidelines exist, to what extent have we succeeded at drilling th
This is an edited version of my EAG London 2025 report: a collection of musings and personal journal notes. It doesn't seem worth a proper post.
The previous EA conference I'd participated in was EAGxUtrecht 2024. I had scheduled a whole bunch of 1:1 discussions, about a dozen on Swapcard, plus the informal chats that just happened. I kept zero connections with all those people. Same, mostly, with the people from EAG London 2023. My goal this time was to do better. I went in with an objective: to find people I would have a powerful excuse to contact again. Either because of u...
I think that badges with names at EAGx and EAG events are a bad idea. There are some people who would rather not be connected to the EA movement, such as some animal advocates or AI safety people. I feel like I'm speculating here, but I imagine a scenario like this:
Having a savings target seems important. (Not financial advice.)
I sometimes hear people in/around EA rule out taking jobs due to low salaries (sometimes implicitly, sometimes a little embarrassedly). Of course, it's perfectly understandable not to want to take a significant drop in your consumption. But in theory, people with high salaries could be saving up so they can take high-impact, low-paying jobs in the future; it just seems like, by default, this doesn't happen. I think it's worth thinking about how to set yourself up to be able to do it if you do ...
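To make the arithmetic concrete, here's a minimal sketch of one way to compute such a savings target. All the figures are made-up placeholders (not advice), and the formula is just one plausible framing: cover the monthly shortfall for however long you want to afford the pay cut, plus a buffer.

```python
# Minimal sketch of a savings-target calculation.
# All figures below are hypothetical placeholders, not recommendations.

monthly_expenses = 2_500    # expected monthly spending while in the low-paying job
low_pay_take_home = 1_800   # expected monthly take-home pay at the high-impact job
years_at_low_pay = 3        # how long you want to be able to afford the pay cut
buffer_months = 6           # extra cushion for emergencies or a job search

# Savings must cover the monthly shortfall for the whole period, plus the buffer.
monthly_shortfall = max(0, monthly_expenses - low_pay_take_home)
savings_target = (monthly_shortfall * 12 * years_at_low_pay
                  + monthly_expenses * buffer_months)

print(f"Savings target: {savings_target:,}")  # Savings target: 40,200
```

Even a rough number like this turns "I can't afford a low salary" into "I can afford it after N more years of saving", which is a much more actionable conclusion.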
In some discussions I had with people at EAG, it was interesting to discover that there might be a significant lack of EA-aligned people in the hardware space of AI, which seems to translate into difficulties in getting industry contacts for co-development of hardware-level AI safety measures. To the degree that there are EA members in these companies, it might make sense to create some kind of communication space for exchanging ideas between people working on hardware AI safety and people at hardware-relevant companies (think Broadcom, Samsung, Nvidi...
I'd be pretty excited about 80k trying to do something useful here; unsure if it'd work, but I think we could be well-placed. Would you be up for talking with me about it? Seems like you have relevant context about these folks. Please email me if so at bella@80000hours.org :D
Productive conference meetup format for 5-15 people in 30-60 minutes
I ran an impromptu meetup at a conference this weekend, where 2 of the ~8 attendees told me that they found this an unusually useful/productive format and encouraged me to share it as an EA Forum shortform. So here I am, obliging them:
At EAG London 2025, I was in two meetups run with this format. The Wild Animal Welfare meetup turned out to be extremely valuable; there were so many important quick wins! However, the format worked only middlingly at the digital minds meetup, where not much came out of the "quick wins" and "quick requests" parts. I think the difference was that the Wild Animal Welfare meetup mostly consisted of people who are actively working on the topic, so there were actual projects they could use help with, while at the digital minds meetup, people in my groups mostly just had a general interest in exploring this exotic cause area.