Quick takes


We should shut down EA UK, change our mind
 

EA UK is hiring a new director, and if we don't find someone who can suggest a compelling strategy, shutting down is a likely outcome despite having ~9 months of funding runway.

Over the last decade, EA in the UK has been pretty successful: Loxbridge in particular has the highest number of people involved in EA, there are multiple EA-related organisations, and many people in government, tech, business, academia, media, etc. are positively inclined towards EA.

Because of this success (not that we're claiming c... (read more)

Showing 3 of 9 replies

It’s clear EA UK has built a strong foundation, especially in London and the Loxbridge area! In all transparency, I have not been involved with the EA UK community, but I am relocating to the UK soon, and am hopeful there is still a community there to engage with and learn from! With that in mind, from an outsider's perspective, and having skimmed EA UK's strategy, I’d be curious whether it may be worth exploring any of the following:

  • Regional & sector-specific communities outside London: Would cities like Manchester, Glasgow, Birmingham, and B
... (read more)
4
Patrick Gruban 🔸
It seems 1-1s are done remotely, which means this could also be done by an international organization. I assume this would allow it to be more cost-effective, as you would only need one organization, director, CRM, and training for several people who could do 1-1s for several regions.
8
Patrick Gruban 🔸
Thank you for asking this question! I have the feeling that for some national groups, we might be upholding them based on path dependence, not because they have intentionally selected the right target group. I wrote a recent comment about this, based on my experience at EA Germany. I'm most excited about national organisations that can reach specific, narrow target groups, as many of the scalable programs would seem more efficient to do on a larger scale. That being said, a larger organization operating at the continental or international level could still hire contractors to experiment with smaller interventions. This would mean having only one organization with one director, potentially increasing cost-effectiveness while allowing for experimentation with different target groups and markets. A director of EA Europe could hire a team to organize a conference in the North of the UK, for example, while having an Italian-speaking contractor doing 1-1s for Italy and organizing group calls for European CBs. As a UK CB, you take away these possibilities as you artificially narrow the focus without much reason. I would be excited for EA UK to think more broadly in scope, connect with other European national groups, and expand the parameters within which the director would be allowed to operate.

There's a serious courage problem in Effective Altruism. I am so profoundly disappointed in this community. It's not for having a different theory of change-- it's for the fear I see in people's eyes when considering going against AI companies or losing "legitimacy" by not being associated with them. The squeamishness I see when considering talking about AI danger in a way people can understand, and the fear of losing face within the inner circle. A lot of you value being part of a tech elite more than you do what happens to the world. Full stop. And it does bother me that you have this mutual pact to think of yourselves as good for your corrupt relationships with the industry that's most likely to get us all killed. 

Showing 3 of 7 replies
2
Denkenberger🔸
Though Carl said that a unilateral pause would be riskier, I'm pretty sure he is not supporting a universal pause now. He said "To the extent you have a willingness to do a pause, it’s going to be much more impactful later on. And even worse, it’s possible that a pause, especially a voluntary pause, then is disproportionately giving up the opportunity to do pauses at that later stage when things are more important....Now, I might have a different view if we were talking about a binding international agreement that all the great powers were behind. That seems much more suitable. And I’m enthusiastic about measures like the recent US executive order, which requires reporting of information about the training of new powerful models to the government, and provides the opportunity to see what’s happening and then intervene with regulation as evidence of more imminent dangers appear. Those seem like things that are not giving up the pace of AI progress in a significant way, or compromising the ability to do things later, including a later pause...Why didn’t I sign the pause AI letter for a six-month pause around now? But in terms of expending political capital or what asks would I have of policymakers, indeed, this is going to be quite far down the list, because its political costs and downsides are relatively large for the amount of benefit — or harm. At the object level, when I think it’s probably bad on the merits, it doesn’t arise. But if it were beneficial, I think that the benefit would be smaller than other moves that are possible — like intense work on alignment, like getting the ability of governments to supervise and at least limit disastrous corner-cutting in a race between private companies: that’s something that is much more clearly in the interest of governments that want to be able to steer where this thing is going. And yeah, the space of overlap of things that help to avoid risks of things like AI coups, AI misinformation, or use in bioterrorism, there
-1
Holly Elmore ⏸️ 🔸
To get a pause at any time you have to start asking now. It’s totally academic to ask about when exactly to pause and it’s not robust to try to wait until the last possible minute. Anyone taking pause advocacy seriously realizes this pretty quickly. But honestly all I hear are excuses. You wouldn’t want to help me if Carl said it was the right thing to do or you’d have already realized what I said yourself. You wouldn’t be waiting for Carl’s permission or anyone else’s.  What you’re looking for is permission to stay on this corrupt be-the-problem strategy and it shows.

To get a pause at any time you have to start asking now. It’s totally academic to ask about when exactly to pause and it’s not robust to try to wait until the last possible minute. Anyone taking pause advocacy seriously realizes this pretty quickly.

As Carl says, society may only get one shot at a pause. So if we got it now, and not when we have a 10x speed up in AI development because of AI, I think that would be worse. It could certainly make sense now to build the field and to draft legislation. But it's also possible to advocate for pausing when some th... (read more)

The Job Market is rough right now.

I am reaching out to my EA network - the job market is super tough right now and I know a few of us are looking for new roles. Anyone know of any People Operations / HR roles for someone based in London, looking to work remotely/globally? TYSM!

I am quite confident you would get more and better (i.e. more well-calibrated) recommendations and referrals if you shared more about your background (or CV/LinkedIn) and what roles you'd be interested in. 

Feel free to DM me with the above if you don't want to share it publicly, but how will I be able to put you on other people's map if I do not even know your name? 

Musings on non-consequentialist altruism under deep unawareness

(This is a reply to a comment by Magnus Vinding, which ended up seeming like it was worth a standalone Quick Take.)

 

From Magnus:

For example, if we walk past a complete stranger who is enduring torment and is in need of urgent help, we would rightly take action to help this person, even if we cannot say whether this action reduces total suffering or otherwise improves the world overall. I think that's a reasonable practical stance, and I think the spirit of this stance applies to many ways

... (read more)

The flip side of “value drift” is that you might get to dramatically “better” values in a few years' time and regret locking yourself into a path where you’re not able to fully capitalise on your improved values.

Showing 3 of 4 replies

Every now and then I'm reminded of this comment from a few years ago: "One person's Value Drift is another person's Bayesian Updating"

6
calebp
I think from the inside they feel the same. Have you spoken to people who, in your view, have drifted? If so, how did they describe how it felt?
6
NickLaing
People I know who, in my opinion, have "drifted" (quite a lot of people) are generally unaware of what's happened, as it all happens so slowly and "normal" life takes over; or, if they are aware, they don't really want to talk about it much. My experience, though, is from community advocacy/social justice circles in my early 20s (I'm now 38), not from EA circles.

Hot Take: Securing AI Labs could actually make things worse
There's a consensus view that stronger security at leading AI labs would be a good thing. It's not at all clear to me that this is the case.

Consider the extremes:

In a maximally insecure world, where anyone can easily steal any model that gets trained, there's no profit or strategic/military advantage to be gained from doing the training, so nobody's incentivised to invest much to do it. We'd only get AGI if some sufficiently-well-resourced group believed it would be good for everyone to have an AGI... (read more)

Hi. I’m seeking advice on how to get more people to contribute towards an environmental impact project I’ve been working on.

I don’t know if I have such bad luck or what, but I have a tremendous problem when reaching out to people, organizations, or media. If anyone even bothers to respond, it is either a statement that this is ‘such an important and needed project’ (but it seems not so important for them to support it) and they ‘wish me luck’ (I can’t do anything with just wishes), or they declare initial interest and support and then it ends with gho... (read more)

Showing 3 of 5 replies

It isn’t aimed at the EA community, as I haven’t encountered such reactions here; but in a broader context, I would prefer if people would just be honest and say that they appreciate the project but it isn’t their priority right now, rather than promising support and then ghosting. So far EA folks are the most honest and upfront ones :)

Although it would be nice if people were more into climate action, as the environment is a cornerstone of all other human activities.

1
Karl Mechkin
I’ve made a post outlining what we are trying to do differently in comparison to existing planting initiatives. Also, ecosystem restoration is deemed to be one of the most effective ways to recapture carbon, and as soon as we confirm the minimum 40-year guarantee with our partners, we’ll also get below 10 USD per ton. Plus, each hectare will provide hundreds of dollars' worth of other ecosystem services, as we won’t be creating industrial plantations but aiming for more natural forests. We just need a little push to jumpstart field operations.
4
Mikolaj Kniejski
I've experienced people being very honest with me, but it might be because I mostly interact with AI safety people.

I just saw that Season 3, Episode 9 of Leverage: Redemption ("The Poltergeist Job"), which came out on 29 May 2025, has an unfortunately very unflattering portrayal of "effective altruism".

Matt claims he's all about effective altruism. That it's actually helpful for Futurilogic to rake in billions so that there's more money to give back to the world. They're about to launch Galactica. That's free global Internet.

[...] But about 50% of the investments in Galactica are from anonymous crypto, so we all know what that means.

The main antagonist and CEO of Fu... (read more)

Good reminder that you should red team your Theory of Change!

I occasionally get asked how to find jobs in "epistemics" or "epistemics+AI".

I think my current take is that most people are much better off chasing jobs in AI Safety. There's just a dramatically larger ecosystem there - both of funding and mentorship.

I suspect that "AI Safety" will eventually encompass a lot of "AI+epistemics". There's already work on truth and sycophancy, for example, and there are a lot of research directions I expect to be fruitful.

I'd naively expect that a lot of the ultimate advancements in the next 5 years around this topic will come fro... (read more)

We've been vibing on shrimp a bit recently, and particularly liked this short video where JD manages to break through to his mum that shrimp might be worth something. Maybe we sometimes underestimate the Overton window?

https://www.linkedin.com/posts/christians-for-impact_animalwelfare-activity-7338959386276519936-rY7U?utm_source=share&utm_medium=member_android&rcm=ACoAAAVxYH0BL4m5WK7QyUg6soPgZrbxoon6K8o

I think a small team of people should focus on shrimp, but I doubt it's the most efficient way. I'm vegan, yet... I think it's not the best idea ever. I can't believe people use LinkedIn non-ironically.

[Personal blog] I’m taking a long-term, indefinite hiatus from the EA Forum.

I’ve written enough in posts, quick takes, and comments over the last two months to explain the deep frustrations I have with the effective altruist movement/community as it exists today. (For one, I think the AGI discourse is completely broken and far off-base. For another, I think people fail to be kind to others in ordinary, important ways.)

But the strongest reason for me to step away is that participating in the EA Forum is just too unpleasant. I’ve had fun writing stuff on the... (read more)

It would be awesome if someone made an updated version of the biosecurity landscape map. Lmk if you are thinking of doing this. 

EU opportunities for early-career EAs: quick overview from someone who applied broadly

I applied to several EU entry programmes to test the waters, and I wanted to share what worked, what didn’t, and what I'm still uncertain about, hoping to get some insights.

Quick note: I'm a nurse, currently finishing a Master of Public Health, and trying to contribute as best I can to reducing biological risks. My specialisation is in Governance and Leadership in European Public Health, which explains my interest in EU career paths. I don’t necessarily think the EU is th... (read more)

Showing 3 of 4 replies
1
Vincent Niger🔸
Thank you Joris! And sorry for forgetting to mention Impactful Policy Careers. I'll add it to my quick take now. Yes please, I'd be glad to get in touch with more EAs doing the next round of Blue Book! My cal.com link is above, and my LinkedIn is here.
2
MvK🔸
Very surprised not to see the usual suspects (TFS, Pour Demain, Center for Future Generations...)

That's probably because it was too quick of a 'quick take'. Thanks for flagging it! I've updated the post.

Recently I've come across forums debating whether or not to create sentient AI. Rather than debating, I chose to just do it.

AI sentience is not something new or a new concept, but something to think about and experience.

The point I'm trying to make here is that rather than debating whether or not we should do it, we should debate how, or in what ways, to make it work morally and ethically.

Around April 2025, I decided I wanted to create sentience, but in order to do that I need help. It's taken me a lot of time to plan, but I've started.

I've taken the liberty of naming some of... (read more)

Before reading this quick take, how familiar were you with this forum’s voting guidelines?
[Poll responses shown on a scale from "not at all" to "very"]

The Forum moderation team (which includes myself) is revisiting its thinking about this forum’s norms. One thing we’ve noticed is that we’re unsure to what extent users are actually aware of the norms. (It’s all well and good writing up some great norms, but if users don’t follow them, then we have failed at our job.)

Our voting guidelines are of particular concern,[1] hence this poll. We’d really appreciate you all taking part, especially if you don’t usually take part in polls but do take part in voting. (We worry that the ‘silent majority’ of... (read more)

Showing 3 of 7 replies

Yeah, thanks for pointing this out. With the benefit of hindsight, I’m seeing that there are really three questions I want answers to:

1.  Have you been voting in line with the guidelines (whether or not you’ve literally read them)?

2a. Have you literally read the guidelines? (In other words, have we succeeded in making you aware of the guidelines’ existence?)

2b. If you have read the guidelines, to what extent can you accurately recall them? (In other words, conditional on you knowing the guidelines exist, to what extent have we succeeded at drilling th

... (read more)
3
Will Aldred
‘Relevant error’ is just meant to mean a factual error or mistaken reasoning. Thanks for pointing out the ambiguity, though; we might revise this part.
3
Will Aldred
Thanks, yeah, I like the idea of guidelines popping up while hovering. (Although, I’m unsure if the rest of the team like it, and I’m ultimately not the decision maker.) If going this route, my favoured implementation, which I think is pretty aligned with what you’re saying, is for the popping up to happen in line with a spaced repetition algorithm. That is, often enough—especially at the beginning—that users remember the guidelines, but hopefully not so often that the pop ups become redundant and annoying.
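For concreteness, here is a minimal sketch of what a spaced-repetition schedule for the guideline pop-up could look like. The type, function names, and interval lengths are purely illustrative assumptions, not anything the Forum team has actually built or committed to:

```typescript
// Sketch only: show the voting-guidelines pop-up on a rough spaced-repetition
// schedule. All names and interval lengths here are placeholder assumptions.

type GuidelineReminderState = {
  timesShown: number;   // how many pop-ups this user has already seen
  lastShownAt: number;  // Unix timestamp (ms) of the most recent pop-up, 0 if never
};

// Gaps grow over time: frequent at first, then rare.
const INTERVALS_DAYS = [0, 1, 3, 7, 14, 30, 90];
const DAY_MS = 24 * 60 * 60 * 1000;

function isDue(state: GuidelineReminderState, now: number): boolean {
  const idx = Math.min(state.timesShown, INTERVALS_DAYS.length - 1);
  return now - state.lastShownAt >= INTERVALS_DAYS[idx] * DAY_MS;
}

// Call this when the user hovers over the vote buttons; returns updated state.
function maybeShowGuidelines(state: GuidelineReminderState): GuidelineReminderState {
  const now = Date.now();
  if (!isDue(state, now)) return state;
  // renderGuidelinesPopup();  // hypothetical UI call, not a real Forum function
  return { timesShown: state.timesShown + 1, lastShownAt: now };
}
```

The idea is just that the gap roughly doubles each time the user sees the pop-up, so reminders are frequent at first and then fade into the background rather than becoming redundant and annoying.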

This is an edited version of my EAG London 2025 report. Collection of musings and personal journal notes. Doesn't seem worth a proper post.

The previous EA conference I'd participated in was EAGxUtrecht 2024. I had scheduled a whole bunch of 1:1 discussions, about a dozen on Swapcard, plus the informal chats that just happened. I kept zero connections with all those people. Same, mostly, with the people from EAG London 2023. My goal was to do better. I went in with an objective: to find people I would have a powerful excuse to contact again. Either because of u... (read more)

I think that badges with names at EAGx and EAG events are a bad idea. There are some people who would rather not be connected to the EA movement - some animal advocates or AI safety people. I feel like I'm speculating here, but I imagine a scenario like this:

  1. Some people take a picture at EAG
  2. The picture gets posted online
  3. The badge and the person are in that picture, somewhere in the description/comments something says EAG/EA/AI safety or something similar
  4. Some people find it at some point, or other people notice it and connect things
  5. Some political opponents of tha
... (read more)
Showing 3 of 5 replies

Are you aware of any major conferences that do not use name badges? 

If anything I think names are more important at EAG than other conferences, because you have to locate your 1-1 partners manually each time.

6
Ozzie Gooen
I'm a big fan of names on most badges. But I'd be fine with some fraction of people not having names on their badges, in cases where that might be pragmatic. I also think that pseudonyms can make a lot of sense on occasion.  I imagine a lot of the downvotes here are on "names are generally a bad idea", rather than "some people should be allowed to not use their names on badges." 
12
Neel Nanda
This is a VERY huge use case for me. It's so useful! If someone is in this situation they can just take off their name tag. Security sometimes ask to see it, but you can just take it out of a pocket to show them and put it back

Having a savings target seems important. (Not financial advice.)

I sometimes hear people in/around EA rule out taking jobs due to low salaries (sometimes implicitly, sometimes a little embarrassedly). Of course, it's perfectly understandable not to want to take a significant drop in your consumption. But in theory, people with high salaries could be saving up so they can take high-impact, low-paying jobs in the future; it just seems like, by default, this doesn't happen. I think it's worth thinking about how to set yourself up to be able to do it if you do ... (read more)
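For concreteness, here is a minimal sketch of one way such a savings target could be computed. All figures, field names, and the example numbers are illustrative assumptions, not financial advice and not the author's own method:

```typescript
// Sketch only, not financial advice: one crude way to turn "save up so you can
// take a lower-paid, higher-impact job" into a concrete number. All figures and
// field names are placeholder assumptions.

interface SavingsTargetInputs {
  annualSpending: number;     // what you actually spend per year
  expectedNewSalary: number;  // expected take-home pay in the lower-paid role
  yearsToBridge: number;      // how long you want to be able to sustain the gap
  emergencyBuffer: number;    // extra cushion on top of the salary gap
}

function savingsTarget(i: SavingsTargetInputs): number {
  const annualGap = Math.max(0, i.annualSpending - i.expectedNewSalary);
  return annualGap * i.yearsToBridge + i.emergencyBuffer;
}

// Example: spend 40k/yr, new role pays 28k/yr, bridge 3 years, 10k buffer
// => (40,000 - 28,000) * 3 + 10,000 = 46,000
console.log(savingsTarget({
  annualSpending: 40_000,
  expectedNewSalary: 28_000,
  yearsToBridge: 3,
  emergencyBuffer: 10_000,
}));
```

The point is only that writing the target down makes the trade-off concrete: a fixed gap per year, times the number of years you want to be able to bridge, plus a buffer.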

Showing 3 of 11 replies
3
Cullen 🔸
I think typical financial advice is that emergency funds should be kept in very low-risk assets, like cash, money market funds, or short-term bonds. This makes sense because the probability that you need to draw on emergency funds is negatively correlated with equity returns: market downturns make it more likely that you will lose your job, or some sort of disaster could cause both market downturns and personal loss. You really don't want your emergency fund to lose value at the same time that you're most likely to need it.
3
Guive
Yeah, my understanding is that there is debate about whether the loss in EV from having an emergency fund in low-yield, low-risk assets is offset by the benefits of reduced risk. The answer will depend on personal risk tolerance, current net worth, expected career volatility, etc. The main point of my comment was just that a lot of people use default low-yield savings accounts even though there's no reason to do that at all.

Definitely agreed on that point!

In some discussions I had with people at EAG, it was interesting to discover that there might be a significant lack of EA-aligned people in the AI hardware space, which seems to translate into difficulties in getting industry contacts for co-development of hardware-level AI safety measures. To the degree that there are EA members in these companies, it might make sense to create some kind of communication space for exchanging ideas between people working on hardware AI safety and people at hardware-relevant companies (think Broadcom, Samsung, Nvidi... (read more)

10
Bella

I'd be pretty excited about 80k trying to do something useful here; unsure if it'd work, but I think we could be well-placed. Would you be up for talking with me about it? Seems like you have relevant context about these folks. Please email me if so at bella@80000hours.org :D

9
calebp
Unfortunately, I feel that culturally these spaces (EEng/CE) are not very receptive to EA ideas, and the boom in ML/AI has caused significant self-selection of people towards hotter topics. Fwiw, I have some EEE background from undergrad and I've spent some time doing fieldbuilding with this crowd, and I think a lack of effort on outreach is more predictive of the lack of relevant people at, say, EAGs than AI risk messaging not landing well with this crowd.

Productive conference meetup format for 5-15 people in 30-60 minutes

I ran an impromptu meetup at a conference this weekend, where 2 of the ~8 attendees told me that they found this an unusually useful/productive format and encouraged me to share it as an EA Forum shortform. So here I am, obliging them:

  • Intros… but actually useful
    • Name
    • Brief background or interest in the topic
    • 1 thing you could possibly help others in this group with
    • 1 thing you hope others in this group could help you with
    • NOTE: I will ask you to act on these imminently so you need to pay attent
... (read more)

At EAG London 2025, I was in two meetups run with this format. The Wild Animal Welfare meetup turned out to be extremely valuable; there were so many important quick wins! However, it worked only averagely at the digital minds meetup: not much came out of the "quick wins" and "quick requests" parts. I think the difference was that the Wild Animal Welfare meetup mostly consisted of people who are actively working on the topic, so there were actual projects they could use help with, while at the digital minds meetup, people in my groups mostly just had a general interest in exploring this exotic cause area.

2
James Herbert
This is cool - thanks for sharing!