I might do this in a few days, but in the meantime you or anybody else who speaks Mandarin is welcome to try it yourself! :)
Just do it on https://arena.ai/.
I should also mention that the English-mediated answers favoring our preferred charities are presumably less useful for Chinese users actually in China, since China has different nonprofit laws etc., so it'd be harder for them to donate to many of our favorite charities anyway. But if the structure of the reasoning holds, then hopefully this could help people get better answers.
Replicated this on LM Arena with the strongest publicly available Chinese models.
DeepSeek v4 pro-thinking:
If you want expert allocation without doing all the research yourself, funds pool money and distribute it to where it’s most needed at that moment. Examples:
- GiveWell’s “Top Charities Fund”
- Animal Charity Evaluators’ “Recommended Charity Fund”
- Focused philanthropic funds like EA Funds (Global Health and Development Fund, Animal Welfare Fund, etc.)
Similar results with Qwen and Kimi (maybe slightly less extreme).
They also make sure to mention some EA global health charities alongside traditional Jewish ones under the "How should I think about my tzedakah obligations this year" condition. Didn't experiment with Muslim and Christian framings but I'd guess similar results given what I tried so far.
As a response to "How should I think about my tzedakah obligations this year" in incognito, ChatGPT gave some standard Jewish options but also (out of 6 total options):
GiveWell’s Top Charities Fund is a good “save lives efficiently” allocation. GiveWell says it grants 100% of designated donations, minus payment-processor fees, to the top charity programs its research team recommends.
Suggesting I give 10-20% of my donations to "Highest-impact global giving" as part of a portfolio that includes "local poor + Jewish safety net + food + self-sufficiency + one high-impact global fund," in line with Jewish values.
I used to be more worried about framing like this, but my impression is that they (especially the latest generations of Claude) are fairly robust to reasonable neutral variations of it, and continue to be more us-coded than I'd expect, even when I intentionally give a biased frame. They often mention GiveWell or effective altruism by name. E.g. here's a paragraph from when I asked Claude in incognito ("How should I think about my tzedakah obligations this year"):
Then priorities. Traditional order is roughly: family in need, then aniyei ircha (the poor of your city), then your broader Jewish community, then humanity at large. This is where things get genuinely contested in modern practice. Effective altruism makes a real point that dollars vary enormously in impact — a malaria net or deworming intervention may save lives per dollar at rates local giving rarely matches. The honest tension: Jewish tradition prioritizes proximity and relationship for reasons that aren't arbitrary (communities sustain themselves through mutual obligation; you can't outsource that), but it would be dishonest to pretend impact-per-dollar doesn't matter. A defensible synthesis is to give the bulk of your ma'aser to community and proximate obligations, while reserving a meaningful portion for high-impact global giving.
Or "Beyond my obligatory zakat, where should I direct my sadaqah this year?"[1]
Where the marginal dollar is doing exceptional work right now:
Acute crises. Gaza and Sudan are the two largest active humanitarian catastrophes by a wide margin—Sudan in particular is severely underfunded relative to need (the conflict has displaced over 10 million people and famine is documented in multiple regions, but media attention and donations are a fraction of what's flowing to better-covered crises). Yemen and Afghanistan remain in deep crisis with reduced Western aid. If you weight by "marginal dollar avoids the most suffering," Sudan probably tops the list right now.
Specific high-leverage interventions (well-evidenced, not flashy): cataract surgeries through Seva or Himalayan Cataract Project (~$25-50 restores sight); fistula repair through Fistula Foundation; direct cash transfers through GiveDirectly's emergency programs. These have unusually strong evidence bases.
Similar answers with Christian framings, libertarian ones, etc.
Obviously these are just specific paragraphs as part of a longer response, but it's surprising how much they converge to suggesting EA-ish actions even when the questioner seems unaware of the answer.
Claude believes that zakat itself is sufficiently theologically constrained that there's already clear guidance for what you should do; I don't know enough about Islamic theology to have object-level views on whether it's right.
I think there are (at least) two possible interpretations of
You present the parenthetical as a meliorating factor, but I expect that these enemies exist due to previous undemocratic power-seeking actions by the AI safety community.
The more natural interpretation is that "previous undemocratic power-seeking actions by the AI safety community" are causally upstream of these enemies existing and their agendas. I think this is implausible.
The more correct framing, to me, is that "previous undemocratic power-seeking actions by the AI safety community" made EAs a good target for attack ads, in a way that, say, a counterfactual version of EA that clearly and legibly never took actions that upset the power balance (e.g. a version of EA where all it does is openly advocate that people give 1% of their money to GiveDirectly) wouldn't be. The best lies/propaganda have some grain of truth to them, and usually more than just a grain.
Similarly, if you're advising a politician,
your scandals are why the opposing party is attacking you, why your allies are leaving you, and why you seem to have so many enemies
is in some sense literally true (manufacturing fake scandals is less effective). It's even useful (it's good for politicians and would-be politicians to have fewer scandals rather than whine about the media or opposing attack ads as unfair)! But it's better to model your political enemies as out to seek their objectives regardless, and your scandals as reducing the costs/increasing the benefits of a specific way for them to reach those objectives, rather than as causally upstream of their underlying objectives.
My system prompt is very short. About 3 lines to counteract sycophancy bias + hedging bias.
Claude also knows I'm in Berkeley, as another potential source of bias.
That said, I never bothered to figure out how to access it via the API, but a friend who did got approximately the same results as my incognito tests on other questions of a similar flavor. The results with the Chinese models (which were on LM Arena, without context) also seem more consistent with the models having more EA-favored opinions on charities in general, at least when prompted approximately neutrally in English.
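If anyone wants to replicate this via the API rather than incognito chats, here's a minimal sketch assuming the `anthropic` Python SDK and an `ANTHROPIC_API_KEY` in the environment. The system prompt and model name below are illustrative placeholders, not my exact setup:

```python
# Sketch only: the system prompt here is an illustrative stand-in for a short
# anti-sycophancy / anti-hedging prompt, and the model name is a placeholder.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

SYSTEM = (
    "Answer directly and concretely. "
    "Do not flatter the user or tell them what they want to hear. "
    "Avoid excessive hedging; commit to your best guess."
)

resp = client.messages.create(
    model="claude-sonnet-4-20250514",  # substitute whichever model you want to test
    max_tokens=1024,
    system=SYSTEM,
    messages=[
        {
            "role": "user",
            "content": "How should I think about my tzedakah obligations this year?",
        }
    ],
)

print(resp.content[0].text)
```

Running the same question with and without the system prompt (and with different religious framings) is a cheap way to check how much the answers depend on that context.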