This is a special post for quick takes by Aaron_Scher. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Progressives might be turned off by the phrasing of EA as "helping others." Here's my understanding of why. Speaking anecdotally from my ongoing experience as a college student in the US, mutual aid is getting tons of support among progressives these days. Mutual aid involves members of a community asking for assistance (often monetary) from their community, and the community helping out. This is viewed as a reciprocal relationship in which different people need different kinds of help at different times, so you help out when you can and ask for assistance when you need it; it is also reciprocal because benefiting the community inherently benefits oneself. This model implies a level field of power among everybody in the community. Unlike charity, mutual aid relies on social relations and being in community to fight institutional and societal structures of oppression (https://ssw.uga.edu/news/article/what-is-mutual-aid-by-joel-izlar/).

"[Mutual Aid Funds] aim to create permanent systems of support and self-determination, whereas charity creates a relationship of dependency that fails to solve more permanent structural problems. Through mutual aid networks, everyone in a community can contribute their strengths, even the most vulnerable. Charity maintains the same relationships of power, while mutual aid is a system of reciprocal support." (https://williamsrecord.com/376583/opinions/mutual-aid-solidarity-not-charity/).

Within this framework, the idea of "helping people" often relies on people with power aiding the helpless, but doing so in a way that reinforces the power difference. To help somebody is to imply that they are lesser and in need of help, rather than an equal community member who is particularly hurt by the system right now. This idea also reminds people of the White Man's Burden and other examples of people claiming to help others while actually making things worse.

I could ask my more progressive friends if they think it is good to help people, and they would probably say yes – or at least I could demonstrate that they agree with me given a few minutes of conversation – but that doesn't mean they wouldn't be peeved at hearing "Effective Altruism is about using evidence and careful reasoning to help others the best we can."

I would briefly note that mutual aid is not incompatible with EA, to the extent that EA is a question; however, requiring that we be in community with people in order to help them means neglecting the world's poorest people, who do not have access to (for example) the communities of expensive private universities.

I think many progressives and others on the left value mutual aid because they see it as more sustainable and genuine, with fewer negative strings attached. I think they are generally fine with aid and helping others as long as they can be shown good evidence that 1) the aid will not be used to prevent other positive changes (things like exchanging humanitarian aid for continued resource extraction from a region that's worth more than the total aid contributed, or pressuring/requiring a housing justice org to stop organizing tenants to stand up for their rights in exchange for more funding for its shelter initiatives), 2) the aid is delivered competently, so that it doesn't get stolen by governments, wasted, or captured by other corrupt actors, and 3) the aid respects local wisdom and empowers people to have more of a say over the decisions that most affect them. Another example would be conservation efforts that kick indigenous people off their land versus ones that center their practical experience and respect their rights.

There's a big difference between donating to a food bank and creating the infrastructure for people to organize their own food bank and/or grow their own food of their choosing. The former is more narrowly focused on food security, whereas the latter fits a broader food justice or food sovereignty approach. I think both are important. Many people believe the latter kind of empowerment initiatives are more sustainable in the long run and less dependent on shifts in funding, even if they're harder to set up initially. The reason being that they redistribute power, not just resources. To sum it up: "Give a man a fish and he will eat for a day; teach a community to fish, and give them a place to do so, and they will eat for generations."

Thanks for your response! I don't think I disagree with anything you're saying, but I definitely think it's hard. That is, the burden of proof for 1, 2, and 3 is really high in progressive circles, because the starting assumption is that charity does none of them. To this end, simplified messages are easily misinterpreted.
I really like this: "The reason being that they redistribute power, not just resources."

Yeah, when I was reading it I was thinking "these are high bars to reach," but I think they cover all the concerns I've heard. Oh, glad you liked it! I probably could have said that from the start, now that I think about it.

A Simpler Version of Pascal's Mugging

Background: I found Bostrom’s original piece (https://www.nickbostrom.com/papers/pascal.pdf) unnecessarily confusing, and numerous Fellows in the EA VP Intro Fellowship have also been confused by it. I think we can make our ideas more accessible. I wrote this in about 30 minutes, though, so it's probably not very good; I would greatly appreciate feedback on how to improve it. I also can't decide whether it would be useful to end with a "possible solutions" section, because as far as I can tell these solutions are all subject to complicated philosophical debate that goes over my head, so including one might just add confusion. It might be easiest to provide comments on the Google Doc itself (https://docs.google.com/document/d/1NLfDK7YqPGdYocxBsTX1QMldLNB4B-BvbT7sevPmzMk/edit).

Pascal is going about his day when he is approached by a mugger demanding Pascal’s wallet. Pascal refuses to hand over his wallet, at which point the mugger offers the following deal: “Give me your wallet now and tomorrow I will give you twice as much money as is in the wallet now.”

Pascal: “I have $100 in my wallet, but I don’t think it’s very likely you’re going to keep your promise.”

Mugger: “What do you think is the probability that I keep my promise and give you the money?”

Pascal: “Hm, maybe 1 in a million, because you might be some elaborate YouTube prankster.”

Mugger: “Okay, then you give me your $100 now, and tomorrow I will give you $200 million.”

Let’s do the math. We can calculate expected value by multiplying the value of an outcome by the probability of that outcome. The expected value of taking the deal, based on Pascal’s stated belief that the mugger will keep their word, is $200,000,000 * 1/1,000,000 = $200, whereas the expected value of not taking the deal is $100 * 1 (certainty) = $100. If Pascal is an expected value maximizer, he should take the deal.

Maybe at this point Pascal realizes that the chance of the mugger actually having $200 million is extremely low. But this doesn’t change the conundrum, because the mugger will simply offer more money to compensate for the lower probability of following through. Suppose Pascal decides this drops the chance of the mugger following through to one in a trillion; the mugger then offers $200 trillion. The mugger is capitalizing on the fact that everything we know, we know with probability less than one. We cannot be 100% certain that the mugger won’t follow through on their promise, even though we intuitively know they won’t. Extremely unlikely outcomes are still possible.
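To make the arithmetic concrete, here is a minimal sketch in Python (the numbers are the ones from the dialogue above; the function and variable names are just for illustration):

```python
def expected_value(payoff, probability):
    """Expected value: an outcome's payoff weighted by its probability."""
    return payoff * probability

wallet = 100                      # dollars Pascal has now
p_pay = 1 / 1_000_000             # Pascal's credence that the mugger pays up
offer = 200_000_000               # dollars the mugger promises for tomorrow

ev_take = expected_value(offer, p_pay)   # 200.0
ev_keep = expected_value(wallet, 1.0)    # 100.0, with certainty
print(ev_take > ev_keep)                 # True: an EV maximizer takes the deal

# Lowering the probability doesn't escape the problem; the mugger just
# scales the offer up to match.
p_pay = 1 / 1_000_000_000_000            # one in a trillion
offer = 200 * 1_000_000_000_000          # 200 trillion dollars
print(expected_value(offer, p_pay))      # still 200.0
```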

Pascal: “200 trillion dollars is too much money; in fact, I don’t think I would benefit from having any more than 10 million dollars.”

Pascal is drawing a distinction between expected value (measured in units of money) and expected utility (measured in units of happiness, satisfaction, or other things we find intrinsically valuable), but the mugger is unfazed.
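To see Pascal's distinction in the same terms, here is a hedged sketch: the cap at $10 million is just Pascal's stated number, and the capped utility function is an illustrative assumption of mine, not something from Bostrom's paper:

```python
def utility(dollars):
    # Illustrative assumption: Pascal says money beyond $10 million adds
    # nothing for him, so cap the utility of money there.
    return min(dollars, 10_000_000)

p_pay = 1 / 1_000_000_000_000        # one in a trillion
offer = 200 * 1_000_000_000_000      # $200 trillion

print(offer * p_pay)                 # expected value: 200.0 dollars
print(utility(offer) * p_pay)        # expected utility: 1e-05, essentially nothing
print(utility(100))                  # keeping the wallet: 100, with certainty
```

This is why the mugger switches to offering happy days of life, which Pascal (by assumption) values without bound.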

Mugger: “Okay, but you do value happy days of life in such a way that more happy days are always better than fewer. It turns out that I’m a wizard, and I can grant you 200 trillion happy days of life in exchange for your wallet.”

Pascal: “It seems extremely unlikely that you’re a wizard, but I value 200 trillion happy days of life so highly that the expected utility is still positive, and greater than what I get from just keeping my $100.”

Pascal hands his wallet to the mugger but doesn’t feel very good about doing so.

So what’s the moral of this story?

- Expected value is not a perfect system for making decisions, because we all know Pascal is getting duped.
- We should be curious and careful about how to deal with low-probability events with extremely high or low expected value (like extinction risks). Relatedly, common sense seems to suggest that spending effort on sufficiently unlikely scenarios is irrational.

Random journaling and my predictions: Pre-Retrospective on the Campus Specialist role.
Applications for the Campus Specialist role at CEA close in about 5 days. Joan Gass's talk at EAG about the role was really good, and it has led many awesome, talented people to believe they should do uni group community building full time. 20-50 people are going to apply for this role, of which at least 20 would do an awesome job.

Because the role is new, CEA is going to hire maybe 8-12 people; these people are going to do great things for community building and will likely have large impacts on the EA community over the next 10 years. Many of the other applicants will feel extremely discouraged and led on. I'm not sure what they will do, but the ~10 (or more) who were great fits for the Campus Specialist program but didn't get it will do something much less impactful over the next 2 years.

I have no idea what the longer-term effects will be, but they are definitely not good. Probably some of these people will leave the EA community temporarily because they are confused, discouraged, and don't think their skill set fits what employers in the EA community care about right now.

This is avoidable if CEA expands the number of people it hires and the system for organizing this role. I think the strongest argument against doing so is that the role is fairly experimental and we don't know how it will work out, but the upside of having more people in the role totally overshadows the downsides, which seem to be mainly money (as long as you hire competent, agentic people). The role description suggests an impact of counterfactually moving ~10 people per year into high-impact careers. I think even if the number were only 5, the role would be well worth it, and my guess is that the next 10 best applicants would still have such an effect (even at less prestigious universities).

Disclaimer: I have no insider knowledge. I am applying for the Campus Specialist role (and therefore have a personal preference for more people getting the job). I think there is about a 2/3 chance of most of the above problem occurring, and I'm least confident about paragraph 3 (what the people who don't get the role do instead).

The other people who were good fits but weren't hired might do something less impactful over the next two years, but I think it's still unclear whether their career will be less impactful in the longer term. There are lots of jobs with quality training and management that could teach you a lot in the two years you would've been a campus specialist. I would encourage everyone who's applying to be a campus specialist to also apply to some of those jobs, and think carefully about which to pick if offered both.

Some things you could try:

- Testing your fit for a policy/politics career
- Learning the skills you'd need to help run a new EA megacharity
- Working or volunteering as a community organizer

Yes, I agree that this is unclear. Depending on AI timelines, the long-term might not matter too much. To add to your list:

- What do you or others view as talent/skill gaps in the EA community, and how can you build those skills/talents in a job that you're more likely to get? (I'm thinking person/project management, good mentoring, and marketing skills, as a couple of examples.)

Thanks for posting this, Aaron! I'm also applying to the role, and your thoughts are extremely well-put and on the mark. 

20-50 people are going to apply for this role, of which at least 20 would do an awesome job. 

I think we have two disagreements here. 

  1. My thought is that over 50 people are going to apply (my expectation is 65+); perhaps this doesn't matter too much (quite a few disappointed people regardless), and I don't think either of us has particularly good evidence for this.
  2. I'm uncertain whether 40% (20 of the 50 applications you predict) would do an "awesome" job. 'Awesome' needs to be defined further here, but, without going into the weeds, I think that a recently graduated person with fleshed-out entrepreneurial aptitude + charisma + a deep understanding of EA is extremely rare (see Alex HT's post).

More on the 2nd thought: I'd reckon (with high uncertainty) that CEA may struggle to find more than ~12 people like this among the applicants. This does not imply that there are not far more than 12 people qualified for the job. The primary reasons I think this: a) the short application timeline; b) my uncertainty about the degree of headhunting that's gone on; and c) the fact that a lot of the best community builders I know (a limited dataset, however) already have jobs lined up. All of this depends on who is graduating this year and who is applying, of course.

Hey Ed, thanks for your response. I have no disagreement on 1, because I have no clue what the upper end of applications is – simply that it's much higher than the number who will be accepted and the number of people who (I think) would do a good job.

2. I think we do disagree here. I think these qualities are relatively common among the community builders and group organizers I know (a small sample). I agree that the short application timeline will decrease the number of great applicants; I'm also unsure about b); c) seems like the biggest factor to me.

Probably the crux here is what proportion of applicants have the skills you mention. My guess is ⅓ to ⅔, but this is based on the people I know, who may be stronger than the actual applicant pool.

Awesome - thanks for the response. Yes, I agree with the crux (this may also come from different conceptions of the skills themselves). I'll message you!

Hey, I applied too! Hopefully at least one of us gets it. I think they probably got more than 50 applications, so it almost becomes a lottery at that point if they only have a few spots and everyone seems like they could do the job well. Or maybe that's just easier for me to think, haha.

I think conceptualizing job hunts for very competitive positions like this is often accurate and healthy, fwiw.
