Cause prioritization
Identifying and comparing promising focus areas for doing good

Quick takes

A super-sceptical, probably highly intractable thought that I haven't done any research on: there seem to be a lot of reasons to think we might be living in a simulation besides Nick Bostrom's simulation argument, such as:

* All the fundamental constants and properties of the universe are perfectly suited to the emergence of sentient life. This could be explained by the anthropic principle, or it could be explained by us living in a simulation that has been designed for us.
* The Fermi Paradox: there don't seem to be any other civilizations in the observable universe. There are many explanations for the Fermi Paradox, but one additional explanation might be that whoever is simulating the universe created it for us, or doesn't care about other civilizations and so hasn't simulated them.
* We seem to be really early in human history. Only about 60 billion people have ever lived (IIRC), but we expect many trillions to live in the future. This can be explained by the Doomsday argument, namely that we are in the period of human history when most people will live, because we will soon go extinct. However, it can also be explained by us living in a simulation (see next point).
* Not only are we really early, but we seem to be living at a pivotal, unusually interesting moment in human history: we are about to create intelligence greater than ourselves, expand into space, or probably all die. If any time in history were to be simulated, I think there's a high likelihood it would be now.

If I were pushed into a corner, I might say the probability we are living in a simulation is about 60%, with most evidence seeming to point towards us being in a simulation. However, the doubt comes from the high probability that I'm just thinking about this all wrong: like, of course I can come up with a motivation for a simulation to explain any feature of the universe... it would be hard to find something that doesn't line up with an explanation that the simu
Is there a running list of small, impactful & very capacity-constrained giving opportunities somewhere?
Gavi's investment opportunity for 2026-2030 says they expect to save 8 to 9 million lives, for which they would require a budget of at least $11.9 billion[1]. Unfortunately, Gavi only raised $9 billion, so they have to make some cuts to their plans[2]. And you really can't reduce spending by $3 billion without making some life-or-death decisions. Gavi's CEO has said that "for every $1.5 billion less, your ability to save 1.1 million lives is compromised"[3]. This works out to a marginal cost of about $1,364 per life saved, which seems a bit low to me. But I think there is a good chance Gavi's marginal cost per life saved is still cheap enough to clear GiveWell's cost-effectiveness bar. GiveWell hasn't made grants to Gavi, though. Why?

1. https://www.gavi.org/sites/default/files/investing/funding/resource-mobilisation/Gavi-Investment-Opportunity-2026-2030.pdf, pp. 20 & 43
2. https://www.devex.com/news/gavi-s-board-tasked-with-strategy-shift-in-light-of-3b-funding-gap-110595
3. https://www.nature.com/articles/d41586-025-02270-x
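The implied figures above are easy to sanity-check. A minimal back-of-the-envelope sketch, assuming the midpoint (8.5 million) of Gavi's 8-9 million lives estimate:

```python
# Back-of-the-envelope check of Gavi's cost-per-life figures,
# using the numbers quoted from the cited documents above.

total_budget = 11.9e9    # requested 2026-2030 budget, in dollars
lives_expected = 8.5e6   # midpoint of the 8-9 million lives estimate
avg_cost = total_budget / lives_expected

cut = 1.5e9              # "for every $1.5 billion less..."
lives_lost = 1.1e6       # "...ability to save 1.1 million lives is compromised"
marginal_cost = cut / lives_lost

print(f"average cost per life:  ${avg_cost:,.0f}")       # $1,400
print(f"marginal cost per life: ${marginal_cost:,.0f}")  # $1,364
```

Note the oddity this surfaces: the quoted marginal cost comes out slightly *below* the average cost, whereas you would normally expect marginal opportunities to be more expensive than the average, which is one reason the figure may seem a bit low.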
Indoor tanning is really bad for people's health; it significantly increases one's risk of getting skin cancer.[1] Many countries already bar minors from visiting indoor tanning salons. However, surprisingly, only two countries, Australia and Brazil, have banned indoor tanning for adults as well. I think that doing policy advocacy for a complete ban on indoor tanning in countries around the world has the potential to be a highly cost-effective global health intervention. Indoor tanning ban policy advocacy seems to check all three boxes of the ITN framework: it is highly neglected; it affects many people (indoor tanning is surprisingly popular: over 10 percent of adults around the world have tanned indoors[2]), and thus has the potential for a big impact; and it could be quite tractable (passing laws is never easy, but it should be doable, because the indoor tanning lobby appears to be much less powerful than, say, the tobacco or alcohol lobbies).

1. https://www.aad.org/public/diseases/skin-cancer/surprising-facts-about-indoor-tanning
2. https://www.aad.org/media/stats-indoor-tanning
FarmKind is openly hostile towards veganism, which makes no sense. See this stunt: https://www.gbnews.com/news/veganuary-actvist-meat-eating-campaign and this social media video in which they refer to people being "tricked into going vegan": https://www.instagram.com/p/DQuPg0VjMJf/ Discouraging veganism is completely antithetical to reducing animal suffering, because: the vegan movement is the best pool we have for effective animal advocates; opposing veganism while ostensibly advocating for animals sends a weak moral message that reduces moral pressure on industrial farming; and being non-vegan means funding industrial farming. What is the point of this?
Much of the community's focus is rightly on technical alignment and governance. However, there seems to be a significant blind spot regarding societal adaptation, specifically how we raise and educate the next generation. Our current education model is predicated on a "learn skills to provide economic value" loop. When transformative AI disrupts this model, we risk creating a generation that is not only economically displaced but fundamentally disenfranchised and without a clear sense of purpose. Historically, large populations of disenfranchised young people have been a primary driver of societal collapse and political volatility. If the transition to a post-AGI world is chaotic due to human unrest, our ability to manage technical safety drops significantly. Is anyone seriously funding or working on how education and child-rearing need to change to fit an AGI era? Ensuring the next generation is psychologically and philosophically prepared for a world of transformative AI seems like a necessary prerequisite for a stable transition.
Incidentally, ‘flipping non-EA jobs into EA jobs’ and ‘creating EA jobs’ both seem much more impactful than ‘taking EA jobs’. That could be e.g. taking an academic position that otherwise wouldn’t have been doing much and using it to do awesome research / outreach that others can build on, or starting an EA-aligned org with funding from non-EA sources, like VCs. (excerpt from https://lydianottingham.substack.com/p/a-rapid-response-to-celeste-re-e2g) 
* Re the new 2024 Rethink Cause Prio survey: "The EA community should defer to mainstream experts on most topics, rather than embrace contrarian views. [“Defer to experts”]" got 3% strongly agree, 18% somewhat agree, 35% somewhat disagree, 15% strongly disagree.
* This seems pretty bad to me, especially for a group that frames itself as valuing intellectual humility and recognizing that we (at the base rate for an intellectual movement) are so often wrong.
* (Charitable interpretation) It's also just the case that EAs tend to hold lots of contrarian views because they're trying to maximize the expected value of information (often justified with something like: "usually contrarians are wrong, but when they are right, they provide more valuable information than the average person who just agrees").
* If this is the case, though, I fear that some of us are confusing being contrarian for instrumental reasons with being contrarian for "being correct" reasons. Tho lmk if you disagree.