A semi-regular reminder: if anybody wants to join EA (or EA-adjacent) online book clubs, I'm your guy.
Copying from a previous post:
...I run some online book clubs, some of which are explicitly EA and some of which are EA-adjacent: one on China as it relates to EA, one on professional development for EAs, and one on animal rights/welfare/advocacy. I don't like self-promoting, but I figure I should post this at least once on the EA Forum so that people can find it if they search for "book club" or "reading group." Details, including links for joining each
I live in Australia, and am interested in donating to the fundraising efforts of MIRI and Lightcone Infrastructure, to the tune of $2,000 USD for MIRI and $1,000 USD for Lightcone. Neither of these is tax-advantaged for me. Lightcone is tax-advantaged in the US, and MIRI is tax-advantaged in a few countries according to their website.
Anyone want to make a trade, where I donate the money to a tax-advantaged charity in Australia that you would otherwise donate to, and you make these donations? As I understand it, anything in Effective Altruism Austral...
Can confirm, and happy to vouch.
Tax-effective Australian charities and funds:
The mental health EA cause space should explore more experimental, scalable interventions, such as promoting anti-inflammatory diets at school/college cafeterias to reduce depression in young people, or using lighting design to reduce seasonal depression. What I've seen of this cause area so far seems focused on psychotherapy in low-income countries. I feel like we're missing some more out-of-the-box interventions here. Does anyone know of any relevant work along these lines?
I think this is a good idea, but perhaps better executed by "non mental health" people. If your expertise is in psychotherapy, why ditch that enormous competitive advantage?
I also think the evidence base on this stuff isn't quite there yet? But I'm not up to date...
What are some resources for doing one's own global priorities research that is longer than the couple of months recommended in this 80k article but shorter than a lifetime's worth of work as a GPI researcher?
EAs are trying to win the "attention arms race" by not playing. I think this could be a mistake.
My much belated reply! On why I think short-form social media like Twitter and TikTok are a case of throwing good money after bad: the medium is so broken and ill-designed in these cases that I think the best option is to just quit these platforms and focus on long-form stuff like YouTube, podcasts, blogs/newsletters (e.g. Medium, Substack), or what-have-you.
The most eloquent critic of Twitter is Ezra Klein. Here's an excerpt from a transcript of his podcast, an episode recorded in December 2022:
...OK, Elon Musk and Twitter. Elon Musk — let me start with the part of this that I kn
Reading Will's post about the future of EA (here), I think there is also an option to "hang around and see what happens". It seems valuable to have multiple similar communities. For a while I was more involved in EA, then more in rationalism. I can imagine being more involved in EA again.
A better Earth would build a second Suez Canal, to ensure that we don't suffer trillions in damage if the first one gets stuck. Likewise, having two "think carefully about things" movements seems fine.
I haven't always had this "two is better than one" feeling...
Rate limiting on the EA Forum is too strict. People downvote because of disagreement rather than because of quality or civility, or they judge quality and civility largely by whether they agree with a post. This creates a huge disincentive against expressing unpopular or controversial opinions (relative to the views of active EA Forum users, not necessarily relative to the general public or relevant expert communities) on certain topics.
This is a message I saw recently:

You aren't just rate limited for 24 hours once you fal...
I probably won't engage more with this conversation.
Here are some quick takes on what you can do if you want to contribute to AI safety or governance (they may generalise, but no guarantees). Paraphrased from a longer talk I gave, transcript here.
EA Connect 2025: Personal Takeaways
Background
I'm Ondřej Kubů, a postdoctoral researcher in mathematical physics at ICMAT Madrid, working on integrable Hamiltonian systems. I've engaged with EA ideas since around 2020—initially through reading and podcasts, then ACX meetups, and from 2023 more regularly with Prague EA (now EA Madrid after moving here). I took the GWWC 10% pledge during the event.
My EA focus is longtermist, primarily AI risk. My mathematical background has led me to take seriously arguments that alignment of superintelligent AI may face fund...
A rule of thumb that I follow for generating data visualizations: One story = one graph
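As a minimal sketch of the rule above (using hypothetical data and matplotlib, which is my assumption about tooling, not something the post specifies): instead of cramming two unrelated trends into one chart, each story gets its own graph whose title states the takeaway.

```python
# "One story = one graph": each figure tells exactly one story,
# stated plainly in its title. Data below is made up for illustration.
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

years = [2020, 2021, 2022, 2023]
donations = [10, 14, 13, 18]    # story 1: donations are trending up
volunteers = [50, 48, 30, 29]   # story 2: volunteering is declining

# Story 1 gets its own focused chart...
fig1, ax1 = plt.subplots()
ax1.plot(years, donations, marker="o")
ax1.set_title("Donations have grown ~80% since 2020")

# ...and story 2 gets a separate one, rather than a second y-axis
# or extra series cluttering the first chart.
fig2, ax2 = plt.subplots()
ax2.plot(years, volunteers, marker="o")
ax2.set_title("Volunteer numbers fell sharply in 2022")

fig1.savefig("story1.png")
fig2.savefig("story2.png")
```

The design choice is that the title carries the story; if you can't write a one-sentence title for a chart, that's a sign it's trying to tell more than one story.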
Some made up stories and solutions:
Great rule of thumb :) I'm sometimes knee-deep in chartmaking before I realise I don't actually know exactly what I want to communicate.
Tangentially reminded me of Eugene Wei's suggestion to "remove the legend", in an essay that also attempted to illustrate how to implement Ed Tufte's advice from his cult bestseller The Visual Display of Quantitative Information.
I'd also like to signal-boost the excellent chart guides from storytelling with data.
I wrote a short intro to stealth (the radar evasion kind). I was irritated by how bad existing online introductions are, so I wrote my own!
I'm not going to pretend it has direct EA implications. But one thing that I've updated more towards in the last few years is how surprisingly limited and inefficient the information environment is. Like obvious concepts known to humanity for decades or centuries don't have clear explanations online, obvious and very important trends have very few people drawing attention to them, you can just write the best book review...
Yeah, while I think truth-seeking is a real thing I agree it's often hard to judge in practice and vulnerable to being a weasel word.
Basically I have two concerns with deferring to experts. First is that when the world lacks people with true subject matter expertise, whoever has the most prestige (maybe not CEOs, but certainly mainstream researchers on slightly related questions) will be seen as experts, and we will need to worry about deferring to them.
Second, because EA topics are selected for being too weird/unpopular to attract mainstream attention/fund...
I have the impression that the most effective interventions, especially in global health/poverty, are usually temporary, in the sense that you need to keep reinvesting regularly, usually because the intervention provides a consumable good; for example malaria chemoprevention: it needs to be provided yearly. In contrast, solutions that seem more permanent in the long-term (e.g. a hypothetical malaria vaccination, or building infrastructure), are typically much less cost-effective on the margin because of their high cost.
How do we balance pure marginal effec...
I disagree with your point that saving the child's life is something you need to continuously reinvest in[1]. But I do think that you're pointing at something adjacent more along the lines of:
I kind of agree with this. Imo the only real long-term solution is economic growth. But that said, two points:
When thinking about the impacts of AI, I’ve found it useful to distinguish between different reasons for why automation in some area might be slow. In brief:
I’m posting this mainly because I’ve wanted to link to this a few times now when discussing questions like "how should we update on the shape of AI diffusion based on...?". Not sure how helpful it will be on its own!
In a bit more detail:
(1) Raw performance issue...
It looks like you might have to wait till tomorrow for a 'Donation Election Winners' post (though you can see the final votes on the banner). Until then - here are the anonymised votes to play with.
The "areas of expertise" and, to a lesser extent, the "areas of interest" features seem off on Swapcard. Many people put 5+ areas as their expertise. This is not only unlikely, but dilutes the filtering feature. Some people also don't put in anything (I think?), which means they will be left out of my search, even if they would be relevant to talk to.
Suggested improvement: make it compulsory to add at least one area of expertise, but cap it at 3, so people don't just put in everything.
Praise for Sentient Futures
By now, I have had the chance to meet most staff at Sentient Futures, and I think they really capture the best that EA has to offer, both in terms of their organisational goals and culture.
They are kind, compassionate, impartial, frugal - the things I feel the movement has compromised on in recent years in pursuit of trying to save us from AI.
I really hope this kind of culture becomes more prominent in the 4th wave of EA[1], with similar organisations popping up in the coming months and years.
PS.: I have frien...