Recent Discussion

 After recent events in Ukraine, Samotsvety convened to update our probabilities of nuclear war. In March 2022, at the beginning of the Ukraine war, we were at ~0.01% that London would be hit with a nuclear weapon in the next month. Now, we are at ~0.02% for the next 1-3 months, and at 16% that Russia uses any type of nuclear weapon in Ukraine. At the end of the post, we reflect on the size of our update, and what this means about our accuracy. 

Expected values are more finicky and more person-dependent than probabilities, and readers are encouraged to enter their own estimates, for which we provide a template. We’d guess that readers would lose 2 to 300 hours by staying in London in the next 1–3 months,...
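For readers who want to plug in their own numbers, the shape of the calculation is simple. The sketch below is illustrative only; every input is an assumption for demonstration, not the post's actual template values.

```python
# Illustrative expected-value arithmetic for staying in London.
# All inputs are assumptions chosen for demonstration, not the
# post's actual figures; substitute your own estimates.
p_london_hit = 0.0002               # ~0.02% over the next 1-3 months, per the post
p_die_if_hit = 0.5                  # assumed chance of dying conditional on a hit
remaining_life_hours = 40 * 365 * 24  # assumed ~40 years of remaining life

expected_hours_lost = p_london_hit * p_die_if_hit * remaining_life_hours
print(f"{expected_hours_lost:.0f} hours")  # prints "35 hours"
```

With these particular assumptions the answer lands inside the 2 to 300 hour range quoted above; different conditional-death probabilities and remaining-life estimates move it around within that range.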

There is internal disagreement within Samotsvety about the degree to which the magnitude of the difference between our current and former probabilities indicates a lack of accuracy. We at Samotsvety updated our endline monthly probability of London being hit with a nuclear weapon by ~5x (0.055% vs. 0.067% × 0.18 = 0.012%).
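The ~5x figure can be checked in a couple of lines. This is just the arithmetic; the meaning of the two multiplied factors is as quoted in the post.

```python
# Checking the ~5x update factor quoted above.
# Figures are monthly probabilities expressed as percentages.
old_p_london = 0.067 * 0.18   # product of the two factors quoted in the post = 0.01206%
new_p_london = 0.055          # updated endline monthly probability, in %

update_factor = new_p_london / old_p_london
print(f"{update_factor:.1f}x")  # prints "4.6x", i.e. roughly a 5x update
```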

This number is from before you made the correction, is that right? Can you edit this to highlight the fact that it (as I understand it) no longer applies?

Guy Raveh (10h):
Much more informative: in line with my other comment, I find the reasoning in the OP much more informative than the numbers. Feels honest rather than harsh. But thanks for the sympathy, and it's nice to know where the votes come from. Obviously, I also strongly disagree with you :)
So I agree that having a wide spread is worrying. At the same time, I'd expect the aggregate to be better if it incorporates different perspectives, even if they don't come to agree.

Hi everyone, 


I have been interested in EA for quite some time, but up until now I have just stuck to donating and haven't actively worked on a project.


Now I recently stumbled upon an idea based on the fact that more and more people in extreme poverty have access to their own smartphones as these become more affordable. The idea is basically as if GiveDirectly and DonorSee had a baby: an app that connects donors and people in need directly, lets them communicate via chat and audio/video calls so they can get to know each other and become friends, and enables donors to send money directly to their match through the app, using the mobile banking systems prevalent in many African...

This sounds interesting. One worry I have would be preventing any kind of exploitation of recipients in exchange for support. 

This is a neat idea. I would recommend testing it out by finding say 10 GiveDirectly donors who'd be willing to try this and matching them to donees, using spreadsheets as needed, without making the app first.

Cross-posted from Bessie O'Dell - Blog.

According to a flagship Effective Altruism (EA) organisation, you have 80,000 hours in your career over a lifetime: 40 hours per week, 50 weeks per year, for 40 years. But does this hold true for women? And if not, what are the implications of this (and related assumptions) for the EA research community and the practical EA community?


Executive Summary

  • Effective Altruism is a movement centred around helping people do the most good.
  • A number of organisations and groups are dedicated to offering careers advice guided by EA principles. This includes 80,000 Hours and Giving What We Can.
  • Women* are often not explicitly considered when EA-focused organisations provide life & careers advice. The default for earning potential (and contributing work hours) appears centred on

I can't figure out why this didn't get more traction. This post seems extremely relevant and brought up well considered points that I'm surprised I've never encountered before. This subject seems fundamental to life changing career decisions, and highly relevant to both EA earning to give and EA career impacts. I also can't spot any surface level presentation reasons it might have gotten overlooked or prematurely dismissed.

Edit: Ah, I think what happened is it was evaluated by the suggested actions when scrolling to see the outcomes/results. I am also much...

The "Career progression and earning potential" section was so difficult to read; I know the point is to raise awareness about expectation-setting and not viewing men as default, but the point that sticks in my mind is the old "having kids is a career killer." As a woman on the fence about having kids, the thought of literally making the world worse (not saving 6.8 lives) for something that would also damage my career is awful. Calling this information to others' attention needs to be done with care to avoid the sort of "women are less valuable EAs according to math" conclusion. I know this isn't the conclusion, and I know we care about the same problem (women doing well in EA). But I found this post profoundly discouraging.
Strong upvoted. This got me to ponder the thought experiment of imagining an EA community that assumed members and interested people were female by default. I do think that EA content would look somewhat different in that world, primarily in addressing questions about kids. I'd expect that advice and discussion about whether, when, and how to have and raise kids would be a very prominent topic. I might expect talent-constrained EA orgs to try and differentiate themselves through perks like on-site childcare. I might also expect more weird and out-there stuff targeted at related questions; maybe in that world, you'd see posts arguing that the most impactful career you can have is being a competent and value-aligned nanny for another EA. Imagining that world gives me a sense of how the current world looks somewhat male by default, and where we might look to change that.

epistemic status: I am fairly confident that the overall point is underrated right now, but am writing quickly and think it's reasonably likely the comments will identify a factual error somewhere in the post. 

Risk seems unusually elevated right now of a serious nuclear incident, as a result of Russia badly losing the war in Ukraine. Various markets put the risk at about 5-10%, and various forecasters seem to estimate something similar. The general consensus is that Russia, if they used a nuclear weapon, would probably deploy a tactical nuclear weapon on the battlefield in Ukraine, probably in a way with a small number of direct casualties but profoundly destabilizing effects. 

A lot of effective altruists have made plans to leave major cities if Russia uses a nuclear weapon,...

Re: COVID, the correct course of action (unless one was psychic) was to be extremely paranoid at the start (trying for total bubbling, sterilizing outside objects, etc) because the EV was very downside-skewed—but as more information came in, to stop worrying about surfaces, start being fine with spacious outdoor gatherings, get a good mask and be comfortable doing some things inside, etc.

That is, a good EA would have been faster than the experts on taking costly preventative acts and faster than the experts on relaxing those where warranted.

Some actual EAs...

I'd be a bit surprised if EAs were even good at surviving post-apocalypse. We've spent all this time learning how best to live in a civilization... we're not preppers, we're not experts in agriculture or building water wells or keeping raiders away from food stashes, I'm not sure how we'll communicate without the internet (but Starlink may well survive), and does ALLFED have any solutions to offer within the next year?
Beating the traffic perhaps; getting stuck in your car trying to leave SF is worse than sheltering in your SF basement.

I've found lots of examples of outstanding physical performance under a vegan diet, but I've been unable to find examples of bold theoretical breakthroughs being made under a vegan diet (the closest example I could find was Ramanujan, but there were other things about Ramanujan that suggest that there's absolutely no way most of us could live the same life and then end up in the same place), and my own experience has been really discouraging. After about 10 days on a vegan diet, regardless of my energy levels or legible performance metrics, I'll pretty reliably stop being able to, or stop being interested in progressing original ideas.

There are lots of possible exits here: Vegans and inventors were rare until very recent history, we shouldn't expect to...


I would like to thank Michael Plant, Matt Lerner and Rosie Bettle for their helpful comments and advice.


Understanding the relationship between wellbeing and economic growth is a topic of key importance to Effective Altruism (e.g. see Hillebrandt and Hallstead, Clare and Goth). In particular, a key disagreement regards the Easterlin Paradox: the finding that happiness[1] varies with income across countries and between individuals, but does not seem to vary significantly with a country's income as it changes over time. Michael Plant recently wrote an excellent post summarizing this research. He ends up mostly agreeing with Richard Easterlin's latest paper arguing that the Easterlin Paradox still holds, suggesting that we should look to approaches other than economic growth to boost happiness. I agree with Michael Plant...

Vadim Albinsky (2h):
Michael, thanks so much for really engaging with the post. I think we are now very close in our big-picture views on the subject, but would love to continue the discussion on the more interesting areas of disagreement (I will respond to those points below). I agree that we don't have enough data to say if the Easterlin paradox holds. I am also somewhat hesitant about prioritizing economic growth as an intervention, although my concerns are less about effect sizes directly, and more about whether generating growth is tractable, and whether potential interventions carry large risks.

  1. I agree with Stephen Clare's response that we can try to be more Bayesian here. I think it's reasonable to start with a prior based on the very statistically significant cross-sectional correlation between a country's GDP and its well-being. In order to believe that this correlation does not generalize to changes in one country across time, we would need to believe that Ethiopia could grow to have the current US GDP but remain as unhappy as a low-income country. That would make it an extreme outlier in the cross-sectional data, and would imply that there was some kind of idiosyncratic problem with the country (and I don't think the argument about people comparing themselves to peers deals with this problem). So I think there is some burden of proof on providing evidence that there actually is a paradox. If we start with a prior based on the cross-sectional data, we would initially expect a 0.5 life satisfaction point increase for an income doubling. Then we can update on HLI's meta-analysis results, suggesting that the impacts of cash transfers only have an impact that is a quarter of that. So now we would believe that the impact is somewhere between those two values. Then we get Easterlin and O'Connor's regression results, which are not in themselves statistically significant. However, they are pretty much the sa
  1. I agree that we have very little evidence so far about the tractability of economic growth interventions. I just think that Easterlin and O'Connor's work should not make us think that economic growth interventions are any less useful than we would have otherwise thought. Since these sorts of regressions seem to show smaller impacts for health and pollution than for GDP, maybe they should (very, very slightly) update us towards thinking more highly of economic growth interventions than we would have under our prior beliefs.

  2. I agree that all of the increases in

...
Stephen Clare (14h):
On 3., is it worth trying to be more Bayesian? Yes, we face data limitations because there are <200 countries in the world, and the data from most countries is pretty crap. But it feels intuitive (to me, at least) that growth should have some positive effect on happiness, and we have some data from areas, like cash transfers, suggesting more money makes people a bit happier. And then Vadim suggests that the data we do have shows a small but positive effect of growth on happiness. So my belief that the studies he refers to are picking up on a real effect, rather than pure chance, is higher than it would be based on the studies' error bars alone.

Personally, I find 7. a compelling response to 5. and 6. We don't need to imagine reductio scenarios of counterfactual effects lasting for 500 years or 1000x increases in world GDP, because even short-lived growth accelerations have large aggregate effects: they affect so many people. Relatedly, I think growth interventions in practice will look less like "increasing economic growth by 0.0001 percentage points" and more like an x% chance of sparking a growth acceleration lasting years or decades, a la Pritchett et al. 2016.

What kind of evidence of the sort you refer to in 8. would actually change your mind? Why does expected value reasoning not work here?
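The Bayesian combination being discussed, starting from the cross-sectional prior and updating on the cash-transfer evidence, can be sketched as a precision-weighted average of two estimates. The means below come from the thread (0.5 life-satisfaction points per income doubling cross-sectionally, roughly a quarter of that from cash transfers); the standard errors are hypothetical, chosen only to illustrate the mechanics.

```python
def combine(mean_a: float, se_a: float, mean_b: float, se_b: float):
    """Precision-weighted average of two independent normal estimates."""
    w_a, w_b = 1 / se_a**2, 1 / se_b**2
    mean = (w_a * mean_a + w_b * mean_b) / (w_a + w_b)
    se = (w_a + w_b) ** -0.5
    return mean, se

# Prior from the cross-sectional correlation: 0.5 points per doubling.
# Evidence from cash transfers: ~0.125 points per doubling.
# The standard errors (0.2 and 0.1) are assumed, not from the thread.
posterior_mean, posterior_se = combine(0.5, 0.2, 0.125, 0.1)
print(f"posterior: {posterior_mean:.3f} ± {posterior_se:.3f}")
```

The posterior lands between the two estimates, pulled toward the more precise one, which is the "somewhere between those two values" behaviour described above.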

I'd like to thank John Mori and Mckay Jensen for their helpful feedback. Any errors are my own.

Note: I am not an expert on this topic, and I have little background knowledge on corporate governance. Please be skeptical and point out mistakes.

This post is about doing good by changing the behavior of for-profit companies. The main examples of this within EA are corporate animal welfare improvements. For popular EA causes, plenty of other possibilities exist, such as improving the labor standards of companies operating in poor countries or improving the safety standards of AI and biotechnology companies[1].

One possible way to bring about corporate behavior change is through shareholder activism. Shareholder activism has been covered in a few recent EA articles. Relative to those articles, the contribution of this...

This is a really good article, thanks sbehmer.

I have actually been working on an article for the EA forum on shareholder activism, which I expect to be able to post within the next week or so. I work in a related field, so I have seen and heard of various examples of corporate engagement happening.

I think I am much more bullish on shareholder activism than you are. Specifically, I believe the "return loss" on share prices due to proxy campaigns can, in a significant number of cases, be negative, as it was in the Engine No. 1 proxy fight (the share price a...

Hi friends,

I am part of a newly formed lab/think-tank whose purpose is to come up with impactful EA ideas, source funding for them, and execute them. Here is one of the ideas we have that needs funding:


IDEA: A Question & Answer Website for Effective Altruism

Think: Stack Exchange, but for EA



  • Will be especially helpful for people new to the community and EAs needing help and guidance
  • It will act as a crowdsourced FAQ, Support hub and repository for EA knowledge to complement this forum and other resources
  • Will further deepen conversations surrounding EA topics
  • Can become a valuable EA infrastructure resource, i.e. a living, perpetually self-updating body of knowledge on EA
  • Nothing similar currently exists

Funding needed: 21,000 USD
Delivery (beta version): 3 Months

I will be happy to provide more details on request. Any interested donor may DM me or, for a quicker response, email me on 

This is a submission for the Future Fund worldview prize addressing two of its concerns:
1. Loss of control to AI systems
2. Concentration of power (with the help of AI)

Now I understand why we would be genuinely concerned about the possibilities above, or why we would think that "with the help of advanced AI, we could make enormous progress toward ending global poverty, animal suffering, early death and debilitating disease." Our hopes and our worries are different sides of the same coin, and I would suggest that in the case of AI, both our hopes and worries might be, for the most part, misplaced. The reasons for that, however, have more to do with our own human nature than with AI itself.

See, the problem lies with us, humans. As things are,...