Quick takes


A semi-regular reminder: anybody who wants to join EA (or EA-adjacent) online book clubs, I'm your guy.

Copying from a previous post:

I run some online book clubs, some of which are explicitly EA and some of which are EA-adjacent: one on China as it relates to EA, one on professional development for EAs, and one on animal rights/welfare/advocacy. I don't like self-promoting, but I figure I should post this at least once on the EA Forum so that people can find it if they search for "book club" or "reading group." Details, including links for joining each

…

I came to one, it was great! Thanks Joseph for your tireless organizing. 

In Development, a global development-focused magazine founded by Lauren Gilbert, has just opened its first call for pitches. They are looking for 2,000-4,000-word stories about things happening in the developing world. They pay 2,000 USD per article; submissions close January 12. More info here.

Reading Will's post about the future of EA (here), I think there is also an option to "hang around and see what happens". It seems valuable to have multiple similar communities. For a while I was more involved in EA, then more in rationalism. I can imagine being more involved in EA again.

A better Earth would build a second Suez Canal, to ensure that we don't suffer trillions in damage if the first one gets blocked. Likewise, having two "think carefully about things" movements seems fine.

I haven't always had this "two is better than one" feeling…

Showing 3 of 6 replies

As I've said in the past, there are multiple adjacent cults.

What do you think the base rate for cult formation is for a town or community of that size? Seems like LessWrong is far, far above the base rate, maybe even by orders of magnitude.

They were also early to crypto, early to AI, early to Covid.

I don’t think any of these are particularly good or strong examples. A very large number of people were as early or earlier to all of these things as the LessWrong community.

For instance, many people were worried about and preparing for Covid in early 2020 be…

David Mathers🔸
I think on the racism front, Yarrow is referring to the perception that the reason Moskovitz won't fund rationalist stuff is that either he thinks a lot of rationalists believe Black people have lower average IQs than whites for genetic reasons, or he thinks that other people believe that and doesn't want the hassle. I think that belief genuinely is quite common among rationalists, no? Although there are clearly rationalists who don't believe it, and most rationalists are not right-wing extremists as far as I can tell.
Yarrow Bouchard 🔸
The philosopher David Thorstad has extensively documented racism in the LessWrong community. See these two posts on his blog Reflective Altruism: "Human Biodiversity (Part 2: Manifest)" (June 27, 2024) and "Human Biodiversity (Part 7: LessWrong)" (April 18, 2025). My impression is that Dustin Moskovitz filed for divorce from the LessWrong community due to its racism, because Moskovitz announced the decision in the wake of the infamous Manifest conference in 2024, and when he discussed the decision on the EA Forum, he seemed to refer or allude to the conference as the straw that broke the camel's back.

Londoners!
@Gemma 🔸 is hosting a co-writing session this Sunday for people who would like to write "Why I Donate" posts. The plan is to work in pomodoros ("poms") and publish something during the session.

Idea for someone with a bit of free time: 

While I don't have the bandwidth for this atm, someone should make a public (or private for, say, policy/reputation reasons) list of people working in (one or multiple of) the very neglected cause areas — e.g., digital minds (this is a good start), insect welfare, space governance, AI-enabled coups, and even AI safety (more for the second reason than others). Optional but nice-to-have(s): notes on what they’re working on, time contributed, background, sub-area, and the rough rate of growth in the field (you pr…

What a wonderful idea! Mayank referred me over to this post, and I think EA at UIUC might have to hop on this project. I'll see about starting something in the next month or so and sharing a link to where I'm compiling things in case anyone else is interested in collaborating on this. Or, it's possible an initiative like it already exists that I'll stumble upon while investigating (though such a thing may well be outdated).

The mental health EA cause space should explore more experimental, scalable interventions, such as promoting anti-inflammatory diets at school/college cafeterias to reduce depression in young people, or using lighting design to reduce seasonal depression. What I've seen of this cause area so far seems focused on psychotherapy in low-income countries. I feel like we're missing some more out-of-the-box interventions here. Does anyone know of any relevant work along these lines? 

A few points:

  1. There is still a lot of progress to be made in low-income country psychotherapy, which I think many EAs find counterintuitive. StrongMinds and Friendship Bench could both be about 5× cheaper, and have found ways to get substantially cheaper every year for the past half decade or so. At Kaya Guides, we’re exploring further improvements and should share more soon.
  2. Plausibly, you could double cost-effectiveness again if it were possible to replace human counsellors with AI in a way that maintained retention (the jury is still out here).
  3. The Hap…
Yarrow Bouchard 🔸
My gut feeling, based on knowledge, reasoning, and experience, is that low-hanging fruit like diet and lighting is quite low-impact and probably has low to middling cost-effectiveness — but I haven't done any math, nor any experiments. If I had research bucks to spend on experimental larks, I would try to push the psychotherapeutic frontier. For example, I might fund grounded theory research into depression. Or I might fund a clinical trial on the efficacy of schema therapy for depression — there have been some promising results, but not many studies.

I think Johann Hari's core point is correct — or at least a correct core point can be extracted from what he's saying. Anti-depressants are very helpful for some people and moderately helpful for most people. Medical clinics that give ketamine to patients with treatment-resistant depression are helpful. Treatments that stimulate the brain with magnets and electricity are helpful. Neurofeedback may be helpful. But what all these approaches have in common is that they're trying to treat the brain like the engine in a car.

This kind of argument often gets mixed in with people who say that anti-depressants don't work or are against them for some reason, or people who advocate for non-evidence-based, woo woo "treatments". But that's not what I'm saying. Everyone who's depressed should talk to a doctor about anti-depressants, because the evidence for their efficacy is good and, even better, the side-effects for most people most of the time are fairly minor (provided they don't mix them with the wrong drugs or substances), so the risk of trying them is low. And if one anti-depressant doesn't work, the standard approach doctors take is to try 3-5 (over time, not all at once) to maximize the chance of one of them working.

Other treatments like medical ketamine may be helpful or even life-changing for some people. But I also think pharmacological and other biologistic approaches only take us so far. Depression is…
NickLaing
I think this is a good idea, but perhaps better executed by "non mental health" people. If your expertise is in psychotherapy, why ditch that enormous competitive advantage? I also think the evidence base on this stuff isn't quite there yet? But I'm not up to date...

I live in Australia, and am interested in donating to the fundraising efforts of MIRI and Lightcone Infrastructure, to the tune of $2,000 USD for MIRI and $1,000 USD for Lightcone. Neither of these is tax-advantaged for me. Lightcone is tax-advantaged in the US, and MIRI is tax-advantaged in a few countries according to their website.

Anyone want to make a trade, where I donate the money to a tax-advantaged charity in Australia that you would otherwise donate to, and you make these donations? As I understand it, anything in Effective Altruism Austral…

Can confirm, and happy to vouch.

Tax-effective Australian charities and funds:

  • Against Malaria Foundation
  • Deworm the World Initiative (led by Evidence Action)
  • Effective Altruism Australia
  • GiveDirectly
  • Giving What We Can
  • Helen Keller International
  • Malaria Consortium
  • New Incentives
  • One Acre Fund
  • StrongMinds
  • Unlimit Health (formerly SCI)
  • All Grants Fund by GiveWell
  • Top Charities Fund by GiveWell
  • Environment Fund by Giving Green

What are some resources for doing one's own GPR (global priorities research) that is longer than the couple of months recommended in this 80k article, but shorter than a lifetime's worth of work as a GP researcher?

This is an annoying feature of search:

[screenshot of search results]

EAs are trying to win the "attention arms race" by not playing. I think this could be a mistake.

  • The founding ideas and culture of EA were created and “grew up” in the early 2010s, when online content consumption looked very different.
    • We’ve overall underreacted to shifts in the landscape of where people get ideas and how they engage with them.
    • As a result, we’ve fallen behind, and should consider making a push to bring our messaging and content delivery mechanisms in line with 2020s consumption.
  • Also, EA culture is dispositionally calm, rational, and dry.
    • Th…
Showing 3 of 10 replies

My much belated reply! On why I think short-form social media like Twitter and TikTok are throwing good money after bad: the medium is so broken and ill-designed in these cases that I think the best option is to just quit these platforms and focus on long-form content like YouTube, podcasts, blogs/newsletters (e.g. Medium, Substack), or what-have-you.

The most eloquent critic of Twitter is Ezra Klein. Here is an excerpt from a transcript of his podcast, from an episode recorded in December 2022:

OK, Elon Musk and Twitter. Elon Musk — let me start with the part of this that I kn…
QuantumForest
I do not understand why so many disagree with your take. I think it can both be true that EA conferences, the Forum, long-form YouTube, books, etc. are great and keep the community active and interesting, but that we also miss out on reaching new young people over the long term. I think there are actually two reasons to be active in more channels:

1. Reach new people and keep the community alive over the long term. People need to find out about the ideas and cause areas somewhere, right? If they are not easily discoverable where people get their information and interests, then that will be harder. Before people decide to read a whole book about something or attend a conference, they need to be interested or curious about it in the first place.

2. Get EA ideas and cause areas out there in the general conversation, to hopefully have some effect on policy and societal norms etc. I think there are lots of EA ideas and cause areas that are "memeable" and interesting. It is just a matter of how one frames it. Sure, it will most likely not be the most popular stuff, but it could influence relevant discussions on economics or politics or whatever.

I think graphs like this one can be pretty mind-blowing[1], and making more people aware of these effects could have a good effect on the margin. I think the idea portrayed by the graph is very powerful: that one's effort or giving could help 100 times more people if applied to the right target. I'm sure this concept could be shown in many other ways too. To take another random example, a nerdy "non-EA" meme I have seen a few times is this graph. It sometimes appears in discussions regarding fusion power. I have no idea how accurate[2] it is, but the main point is that it is a pretty complex "meme", yet in the right contexts it is relevant and could influence discussions.

Some other examples I can think of are the web comics xkcd or SMBC, where particular strips get widely shared many years after they were published. There…
Yarrow Bouchard 🔸
Is there any high-quality evidence, or even good anecdotes, about how successful creators are at getting people off the platform? I only know anecdotes, like Hank Green complaining about the algorithm aggressively downranking his posts about his charity store. I also feel like I’ve heard comedians say that Twitter is fine with their jokes, but when they want to promote a show — for many of them, the main purpose of being on Twitter — their followers barely see those tweets. Also, when I used TikTok, I noticed a few sketch comedy creators who had large followings on TikTok but barely any conversions to YouTube.

I think the algorithm is probably behind a lot of this, but I also think most users probably don’t want the friction of clicking through to another platform. My cynical take is that people scroll Twitter and TikTok to numb out and engage their limbic system, not their prefrontal cortex, so it’s a losing game for all involved.

@Bella that’s part of the answer I owe you. I will give the other part soon.

Rate limiting on the EA Forum is too strict. Given that people karma downvote because of disagreement, rather than because of quality or civility — or they judge quality and/or civility largely on the basis of what they agree or disagree with — there is a huge disincentive against expressing unpopular or controversial opinions (relative to the views of active EA Forum users, not necessarily relative to the general public or relevant expert communities) on certain topics.

This is a message I saw recently:

You aren't just rate limited for 24 hours once you fal…

Showing 3 of 15 replies
Thomas Kwa
Claude thinks possible outgroups include the following, which is similar to what I had in mind.
Yarrow Bouchard 🔸
a) I’m not sure all of those count as someone who would necessarily be an outsider to EA (e.g. Will MacAskill only assigns a 50% probability to consequentialism being correct, and he and others in EA have long emphasized pluralism about normative ethical theories; there’s been an EA system change group on Facebook since 2015, and discourse around systemic change has been happening in EA since before then).

b) Even if you do consider people in all those categories to be outsiders to EA or part of "the out-group", us/them or in-group/out-group thinking seems like a bad idea, possibly leading to insularity, incuriosity, and overconfidence in wrong views.

c) It’s especially a bad idea to not only think in in-group/out-group terms and seek to shut down perspectives of "the out-group", but also to cast suspicion on the in-group/out-group status of anyone in an EA context who you happen to disagree with about something, even something minor — that seems like a morally, subculturally, and epistemically bankrupt approach.
  • You're shooting the messenger. I'm not advocating for downvoting posts that smell of "the outgroup", just saying that this happens in most communities centered around an ideological or even methodological framework. It's a way you can be downvoted while still being correct, especially by the LEAST thoughtful 25% of EA Forum voters.
  • Please read the quote from Claude more carefully. MacAskill is not an "anti-utilitarian" who thinks consequentialism is "fundamentally misguided", he's the moral uncertainty guy. The moral parliament usually recommends actions similar to consequentialism with side constraints in practice.

I probably won't engage more with this conversation.

Here are some quick takes on what you can do if you want to contribute to AI safety or governance (they may generalise, but no guarantees). Paraphrased from a longer talk I gave; transcript here.

  • First, there’s still tons of alpha left in having good takes.
    • (Matt Reardon originally said this to me and I was like, “what, no way”, but now I think he was right and this is still true – thanks Matt!)
    • You might be surprised, because there are many people doing AI safety and governance work, but I think there's still plenty of demand for good takes, and…

EA Connect 2025: Personal Takeaways

Background

I'm Ondřej Kubů, a postdoctoral researcher in mathematical physics at ICMAT Madrid, working on integrable Hamiltonian systems. I've engaged with EA ideas since around 2020—initially through reading and podcasts, then ACX meetups, and from 2023 more regularly with Prague EA (now EA Madrid after moving here). I took the GWWC 10% pledge during the event.

My EA focus is longtermist, primarily AI risk. My mathematical background has led me to take seriously arguments that alignment of superintelligent AI may face fund…

Mo Putera
Aside: wow, the slide presentation you linked to above is both really aesthetically pleasing and has great content, thanks for sharing :) 

Remark: the slides are from Joey's presentation, not my own, but I don't think he would mind my sharing them.

Most of the talks had them available.

Joris 🔸
Congrats on taking the pledge Ondřej!

A rule of thumb that I follow for generating data visualizations: One story = one graph

  • The best visualizations are extremely simple and easy to read: e.g. a line or bar chart that tells you exactly what you care about
  • If you are struggling to figure out what to visualize, zoom out and ask yourself: what story are you trying to tell? Once you have clarity on that, figure out the simplest way to illustrate this.
  • If you have multiple stories to tell, make multiple graphs :)
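As a minimal sketch of the rule (all data and chart titles below are hypothetical, purely for illustration), each story gets its own simple chart with a title stating the takeaway, rather than one crowded figure:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

# Hypothetical data: two separate stories get two separate graphs.
months = ["Jan", "Feb", "Mar", "Apr"]
engagement_hours = [120, 105, 95, 80]  # story 1: engagement is declining
new_signups = [10, 14, 18, 25]         # story 2: signups are growing

# Story 1: a single line chart whose title states the takeaway.
fig1, ax1 = plt.subplots()
ax1.plot(months, engagement_hours, marker="o")
ax1.set_title("Engagement hours declined this year")
ax1.set_ylabel("Hours")
fig1.savefig("engagement.png")

# Story 2: a single bar chart, again one takeaway per figure.
fig2, ax2 = plt.subplots()
ax2.bar(months, new_signups)
ax2.set_title("New signups grew this year")
ax2.set_ylabel("Signups")
fig2.savefig("signups.png")
```

The temptation is to overlay both series on one chart with a legend; splitting them keeps each graph readable at a glance.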

Some made up stories and solutions:

  • Total engagement hours steadily went down over this ye…

Great rule of thumb :) I'm sometimes knee-deep in chartmaking before I realise I don't actually know exactly what I want to communicate.

Tangentially, this reminded me of Eugene Wei's suggestion to "remove the legend", in an essay that also attempted to illustrate how to implement Ed Tufte's advice from his cult bestseller The Visual Display of Quantitative Information.

I'd also like to signal-boost the excellent chart guides from storytelling with data.

They should call ALLFED's research "The Recipice".

I wrote a short intro to stealth (the radar evasion kind). I was irritated by how bad existing online introductions are, so I wrote my own!

I'm not going to pretend it has direct EA implications. But one thing that I've updated towards more in the last few years is how surprisingly limited and inefficient the information environment is. Obvious concepts known to humanity for decades or centuries don't have clear explanations online; obvious and very important trends have very few people drawing attention to them; you can just write the best book review…

  • Re the new 2024 Rethink Cause Prio survey: "The EA community should defer to mainstream experts on most topics, rather than embrace contrarian views. [“Defer to experts”]" 3% strongly agree, 18% somewhat agree, 35% somewhat disagree, 15% strongly disagree.
    • This seems pretty bad to me, especially for a group that frames itself as recognizing intellectual humility, i.e. that we (like any intellectual movement, per the base rate) are so often wrong.
    • (Charitable interpretation) It's also just the case that EAs tend to have lots of views that they're being contrarian about because the…
Showing 3 of 6 replies
Arepo
I agree with Yarrow's anti-'truth-seeking' sentiment here. That phrase seems to primarily serve as an epistemic deflection device, indicating 'someone whose views I don't want to take seriously and don't want to justify not taking seriously'. I agree we shouldn't defer to the CEO of PETA, but CEOs aren't - often by their own admission - subject matter experts so much as people who can move stuff forwards. In my book the set of actual experts is certainly murky, but it includes academics, researchers, sometimes forecasters, sometimes technical workers - sometimes CEOs, but only in particular cases - anyone who's spent several years researching the subject in question. Sometimes, as you say, they don't exist, but in such cases we don't need to worry about deferring to them. When they do, it seems foolish not to upweight their views relative to our own unless we've done the same, or unless we have very concrete reasons to think they're inept or systemically biased (and perhaps even then).

Yeah, while I think truth-seeking is a real thing I agree it's often hard to judge in practice and vulnerable to being a weasel word.

Basically I have two concerns with deferring to experts. First, when the world lacks people with true subject matter expertise, whoever has the most prestige (maybe not CEOs, but certainly mainstream researchers on slightly related questions) will be seen as the experts, and we will need to worry about deferring to them.

Second, because EA topics are selected for being too weird/unpopular to attract mainstream attention/fund…

Yarrow Bouchard 🔸
If I want to know what “utilitarianism” means, including any disagreements among scholars about the meaning of the term (I have a philosophy degree, I have studied ethics, and I don’t have the impression there are meaningful disagreements among philosophers on the definition of “utilitarianism”), I can find this information in many places, such as:

  • The Stanford Encyclopedia of Philosophy
  • Encyclopedia Britannica
  • Wikipedia
  • The book Utilitarianism: A Very Short Introduction, co-authored by Peter Singer and published by Oxford University Press
  • A textbook like Normative Ethics or an anthology like Ethical Theory
  • Philosophy journals
  • An academic philosophy podcast like Philosophy Bites
  • Academic lectures on YouTube and Crash Course (a high-quality educational resource)

So, it’s easy for me to find out what “utilitarianism” means. There is no shortage of information about that. Where do I go to find out what “truth-seeking” means? Even if some people disagree on the definition, can I go somewhere and read about, say, the top 3 most popular definitions of the term and why people prefer one definition over the other?

It seems like an important word. I notice people keep using it. So, what does it mean? Where has it been defined? Is there a source you can cite that attempts to define it? I have tried to find a definition for “truth-seeking” before, more than once. I’ve asked what the definition is before, more than once. I don’t know if there is a definition. I don’t know if the term means anything definite and specific. I imagine it probably doesn’t have a clear definition or meaning, and that different people who say “truth-seeking” mean different things when they say it — and so people are largely talking past each other when they use this term.

Incidentally, I think what I just said about “truth-seeking” probably also largely applies to “epistemics”. I suspect “epistemics” probably either means epistemic practices or epistemology, but it’s not…

I have the impression that the most effective interventions, especially in global health/poverty, are usually temporary, in the sense that you need to keep reinvesting regularly, usually because the intervention provides a consumable good. For example, malaria chemoprevention needs to be provided yearly. In contrast, solutions that seem more permanent in the long term (e.g. a hypothetical malaria vaccine, or building infrastructure) are typically much less cost-effective on the margin because of their high cost.

How do we balance pure marginal effec…

I disagree with your point that saving the child's life is something you need to continuously reinvest in[1]. But I do think that you're pointing at something adjacent more along the lines of:

  1. Giving out bed nets doesn't fundamentally solve global poverty
  2. Solving global poverty is better than continuously giving out bed nets
  3. Therefore, EAs should focus less on bed nets and more on solutions for global poverty.

I kind of agree with this. Imo the only real long-term solution is economic growth. But that said, two points:

  1. Saving a child's life has positive flow-through…
Mo Putera
I think you're conflating intervention durability with outcome durability? A child who survives cerebral malaria due to seasonal malaria chemoprevention gets to live the rest of their life; SMC programs are rerun because (mostly) new beneficiary cohorts are at highest risk, not because last year's cohort's survival expires somehow. Similarly with nets and child vaccinations and vitamin A deficiency prevention (i.e. the GW top charities), as well as salt iodisation and TaRL in education and many other top interventions recommended by the likes of TLYCS and FP and so on.

I'd also push back a bit on the "permanent solutions" phrasing. Infrastructure isn't that permanent and requires ongoing expenditures and has a shelf half-life (I used to work in ops in fluid resource-constrained environments so I feel this keenly), diseases can develop resistance to vaccines so you need boosters, etc. Ex-AIM CEO Joey Savoie has a great blog talking more about how Someone Always Pays: Why Nothing Is Really "Sustainable".

Phrasing nitpicking aside, some big funders are in fact funding more "permanent / sustainable" solutions. Open Phil Coefficient Giving's new $120M Abundance and Growth Fund aims to "accelerate economic growth and boost scientific and technological progress while lowering the cost of living", and Founders Pledge (which is almost OP-scale in giving) just launched a new Catalytic Impact Fund that targets "ecosystem leverage points" where small investments can build "sustainable, long-term solutions to global poverty and suffering".

Jason's comment above on timetable speedup is essentially how e.g. GiveWell models their grants for malaria vaccines. The model says their grant would need to speed up co-administration for all highest need children in all of subsaharan Africa by at least 9 months to clear their 10x bar, so you can interpret their grant as a bet that funding that clinical trial would in fact achieve at least 9 months' speedup. Notice how it's an uncertain b…
Jason
Given EA's small share of the total global health/poverty funding landscape, the most likely effect of its investment in an expensive-but-permanent project is to speed the timetable up. So, for instance, perhaps we would get a hypothetical vaccine a year or two earlier if there had been EA investment. So, in comparing the effects of a yearly intervention vs. an expensive-but-permanent one, we are still looking at near-term effects that are relatively similar in nature and thus can be compared.

I don't suggest that is true for all "permanent" interventions, though, so it isn't a complete answer. It also might not scale well to a field in which EA funding is a large piece of the total funding pie.
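This speedup framing can be turned into toy arithmetic. Every number below is made up for illustration (not drawn from GiveWell or any real grant): the benefit of the grant is roughly the burden averted during the window by which deployment is accelerated.

```python
# Toy model of funding that accelerates a "permanent" intervention.
# All numbers are hypothetical, for illustration only.

annual_cases_averted = 50_000   # cases averted per year once the intervention exists
speedup_years = 1.5             # how much earlier funding makes it arrive
grant_cost = 10_000_000         # cost of the accelerating grant, USD

# The grant's benefit is roughly the burden averted during the speedup window;
# after that window, the intervention would have existed anyway.
cases_averted_by_grant = annual_cases_averted * speedup_years
cost_per_case = grant_cost / cases_averted_by_grant

print(f"Cost per case averted: ${cost_per_case:,.0f}")  # → $133
```

This makes the point in the comment concrete: even a "permanent" project can be compared against a yearly intervention on near-term terms, via the cost per unit of burden averted during the speedup window.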
Lizka

When thinking about the impacts of AI, I’ve found it useful to distinguish between different reasons for why automation in some area might be slow. In brief: 

  1. raw performance issues
  2. trust bottlenecks
  3. intrinsic premiums for “the human factor”
  4. adoption lag
  5. motivated/active protectionism towards humans

I’m posting this mainly because I’ve wanted to link to this a few times now when discussing questions like "how should we update on the shape of AI diffusion based on...?". Not sure how helpful it will be on its own!


In a bit more detail:

(1) Raw performance issue…
