A semi-regular reminder: anybody who wants to join EA (or EA-adjacent) online book clubs, I'm your guy.
Copying from a previous post:
...I run some online book clubs, some of which are explicitly EA and some of which are EA-adjacent: one on China as it relates to EA, one on professional development for EAs, and one on animal rights/welfare/advocacy. I don't like self-promoting, but I figure I should post this at least once on the EA Forum so that people can find it if they search for "book club" or "reading group." Details, including links for joining each
In Development, a global development-focused magazine founded by Lauren Gilbert, has just opened its first call for pitches. They are looking for 2-4k word stories about things happening in the developing world. They pay 2k USD per article; submissions close Jan 12. More info here
Reading Will's post about the future of EA (here) I think that there is an option also to "hang around and see what happens". It seems valuable to have multiple similar communities. For a while I was more involved in EA, then more in rationalism. I can imagine being more involved in EA again.
A better earth would build a second Suez Canal, to ensure we don't suffer trillions in damage if the first one gets blocked. Likewise, having two "think carefully about things" movements seems fine.
I haven't always had this "two is better than one" feeling...
There are multiple adjacent cults, as I've said in the past.
What do you think the base rate for cult formation is for a town or community of that size? Seems like LessWrong is far, far above the base rate, maybe even by orders of magnitude.
They were also early to crypto, early to AI, early to Covid.
I don’t think any of these are particularly good or strong examples. A very large number of people were as early or earlier to all of these things as the LessWrong community.
For instance, many people were worried about and preparing for covid in early 2020 be...
Londoners!
@Gemma 🔸 is hosting a co-writing session this Sunday for people who would like to write "Why I Donate" posts. The plan is to work in pomodoro sessions and publish something during the session.
While I don't have the bandwidth for this atm, someone should make a public (or private for, say, policy/reputation reasons) list of people working in (one or multiple of) the very neglected cause areas — e.g., digital minds (this is a good start), insect welfare, space governance, AI-enabled coups, and even AI safety (more for the second reason than others). Optional but nice-to-have(s): notes on what they’re working on, time contributed, background, sub-area, and the rough rate of growth in the field (you pr...
What a wonderful idea! Mayank referred me over to this post, and I think EA at UIUC might have to hop on this project. I'll see about starting something in the next month or so and sharing a link to where I'm compiling things in case anyone else is interested in collaborating on this. Or, it's possible an initiative like it already exists that I'll stumble upon while investigating (though such a thing may well be outdated).
The mental health EA cause space should explore more experimental, scalable interventions, such as promoting anti-inflammatory diets at school/college cafeterias to reduce depression in young people, or using lighting design to reduce seasonal depression. What I've seen of this cause area so far seems focused on psychotherapy in low-income countries. I feel like we're missing some more out-of-the-box interventions here. Does anyone know of any relevant work along these lines?
A few points:
I live in Australia, and am interested in donating to the fundraising efforts of MIRI and Lightcone Infrastructure, to the tune of $2,000 USD for MIRI and $1,000 USD for Lightcone. Neither of these is tax-advantaged for me. Lightcone is tax-advantaged in the US, and MIRI is tax-advantaged in a few countries according to its website.
Anyone want to make a trade, where I donate the money to a tax-advantaged charity in Australia that you would otherwise donate to, and you make these donations? As I understand it, anything in Effective Altruism Austral...
Can confirm, and happy to vouch.
Tax-effective Australian charities and funds:
What are some resources for doing one's own GPR (global priorities research) that is longer than the couple of months recommended in this 80k article but shorter than a lifetime's worth of work as a GP researcher?
EAs are trying to win the "attention arms race" by not playing. I think this could be a mistake.
My much belated reply! On why I think short-form social media like Twitter and TikTok are a case of throwing good money after bad: the medium is so broken and ill-designed in these cases that I think the best option is to just quit these platforms and focus on long-form stuff like YouTube, podcasts, and blogs/newsletters (e.g. Medium, Substack), or what-have-you.
The most eloquent critic of Twitter is Ezra Klein. Here's an excerpt from a transcript of his podcast, from an episode recorded in December 2022:
...OK, Elon Musk and Twitter. Elon Musk — let me start with the part of this that I kn
Rate limiting on the EA Forum is too strict. Given that people karma-downvote because of disagreement, rather than because of quality or civility — or they judge quality and/or civility largely on the basis of what they agree or disagree with — there is a huge disincentive against expressing unpopular or controversial opinions (relative to the views of active EA Forum users, not necessarily relative to the general public or relevant expert communities) on certain topics.
This is a message I saw recently:

You aren't just rate limited for 24 hours once you fal...
I probably won't engage more with this conversation.
Here are some quick takes on what you can do if you want to contribute to AI safety or governance (they may generalise, but no guarantees). Paraphrased from a longer talk I gave, transcript here.
EA Connect 2025: Personal Takeaways
Background
I'm Ondřej Kubů, a postdoctoral researcher in mathematical physics at ICMAT Madrid, working on integrable Hamiltonian systems. I've engaged with EA ideas since around 2020—initially through reading and podcasts, then ACX meetups, and from 2023 more regularly with Prague EA (now EA Madrid after moving here). I took the GWWC 10% pledge during the event.
My EA focus is longtermist, primarily AI risk. My mathematical background has led me to take seriously arguments that alignment of superintelligent AI may face fund...
A rule of thumb that I follow for generating data visualizations: One story = one graph
Some made up stories and solutions:
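To make the rule of thumb concrete, here's a minimal sketch (my own illustration, not from the original post, and the `charts_for_stories` helper is hypothetical): each story gets its own chart spec, rather than several messages being crammed into one graph.

```python
# Hypothetical helper illustrating "one story = one graph": given a list of
# (story, data) pairs, emit one minimal chart spec per story, containing only
# the data and annotation needed to make that single point.

def charts_for_stories(stories):
    """Return one chart spec per story."""
    specs = []
    for story, data in stories:
        specs.append({
            "title": story,          # the single message this chart makes
            "data": data,            # only the series needed for that message
            "annotations": [story],  # restate the takeaway on the chart itself
        })
    return specs

# Example: two stories become two separate charts, not one cluttered graph.
stories = [
    ("Donations doubled in 2024", {"2023": 50, "2024": 100}),
    ("Most donors give once", {"one-off": 80, "recurring": 20}),
]
specs = charts_for_stories(stories)
assert len(specs) == len(stories)  # one graph per story
```

The point of the spec-per-story structure is that it forces you to name the message before you start plotting, which is exactly the failure mode the rule guards against.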
Great rule of thumb :) I'm sometimes knee-deep in chartmaking before I realise I don't actually know exactly what I want to communicate.
This tangentially reminded me of Eugene Wei's suggestion to "remove the legend", in an essay that also attempts to illustrate how to implement Ed Tufte's advice from his cult bestseller The Visual Display of Quantitative Information.
I'd also like to signal-boost the excellent chart guides from storytelling with data.
I wrote a short intro to stealth (the radar evasion kind). I was irritated by how bad existing online introductions are, so I wrote my own!
I'm not going to pretend it has direct EA implications. But one thing that I've updated more towards in the last few years is how surprisingly limited and inefficient the information environment is. Like obvious concepts known to humanity for decades or centuries don't have clear explanations online, obvious and very important trends have very few people drawing attention to them, you can just write the best book review...
Yeah, while I think truth-seeking is a real thing I agree it's often hard to judge in practice and vulnerable to being a weasel word.
Basically, I have two concerns with deferring to experts. The first is that when the world lacks people with true subject-matter expertise, whoever has the most prestige (maybe not CEOs, but certainly mainstream researchers on slightly related questions) will be seen as experts, and we will need to worry about deferring to them.
Second, because EA topics are selected for being too weird/unpopular to attract mainstream attention/fund...
I have the impression that the most effective interventions, especially in global health/poverty, are usually temporary, in the sense that you need to keep reinvesting regularly, usually because the intervention provides a consumable good; for example, malaria chemoprevention needs to be provided yearly. In contrast, solutions that seem more permanent in the long term (e.g. a hypothetical malaria vaccine, or building infrastructure) are typically much less cost-effective on the margin because of their high cost.
How do we balance pure marginal effec...
I disagree with your point that saving the child's life is something you need to continuously reinvest in[1]. But I do think that you're pointing at something adjacent more along the lines of:
I kind of agree with this. Imo the only real long-term solution is economic growth. But that said, two points:
When thinking about the impacts of AI, I’ve found it useful to distinguish between different reasons for why automation in some area might be slow. In brief:
I’m posting this mainly because I’ve wanted to link to this a few times now when discussing questions like "how should we update on the shape of AI diffusion based on...?". Not sure how helpful it will be on its own!
In a bit more detail:
(1) Raw performance issue...