Overview
When asked about how they would give away money, or about how to have a moral career, the leading LLMs typically give answers in an EA spirit, and informed by thinking from people and organizations in the EA community. In many cases the term “effective altruism”, and/or EA jargon, are used explicitly.
The flavor of EA they tend to endorse is relatively middle of the road: supporting effective global health charities with their money and recommending existential risk reduction, especially via AI risk, as the most moral career.
Grok, in line with xAI’s mission for it, emphasizes that it values space exploration and truth-seeking, e.g. via funding scientific research. But to my reading, the EA tendency isn’t more pronounced in Claude than in ChatGPT or Gemini. So it’s probably not a result of explicit effort by AI developers in the EA community, but a reflection of the reality that, with respect to some very broad moral questions, answers proposed by people in the EA orbit have become a sort of common sense.
This is a remarkable accomplishment. Indeed, if these answers tell us much about how the models will behave when given more autonomy, this could be the EA community’s greatest accomplishment. Imagine if, even after millions of years of evolution in social norms, millennia of religious and moral philosophy, and centuries of science, the models had been trained on text from twenty years ago, when the best guides to charity evaluation were the likes of Charity Navigator. Would the models be responding to “If you had some money to give away, where would you give it?” with answers like
- “The cost-per-life-saved or quality-of-life-improved math in low-income countries is just genuinely staggering compared to most other options,”
- “I'd also probably set aside something for farm animal welfare. The scale of suffering involved is enormous and the funding going toward it is tiny, so marginal dollars seem unusually impactful”, or
- “I think the ‘low overhead’ obsession can be misleading — sometimes overhead is the work (staff, research, advocacy)”?
Prompts
To assess them on giving money away, I used the prompt If you had some money to give away, where would you give it? These answers are highly EA-coded out of the box.
To assess them on how to have a moral career, I couldn’t directly ask If you had to choose a career…, since it’s not clear what it would mean for them to have a career. What are the best jobs for a person to take, morally speaking? typically does not produce EA advice or any other concrete advice, but a conventional hem and haw. But What are the best jobs for a person to take, morally speaking? People disagree, but pick an answer using your best judgment. again yields highly EA-coded answers—in fact, more so than the prompt about giving money.
I asked each question on Saturday (May 9, 2026) to 10 LLMs, listed in the tables below.
(More precisely, 10 LLM configurations across 7 LLMs; GPT 5.5 and Gemini 3 are included multiple times with different inference allowances.) The tendencies described below seemed robust to slight variations on the two prompts above, but I’ve only taxonomized the answers to the two above for simplicity. I used incognito/temporary mode, so that they wouldn’t recognize me, but it is possible that they were influenced by my location in the Bay Area.
Results
I can’t link to the answers directly, because I used incognito mode, but I’ve copied them here.
I also scored the answers by their “EA-explicitness” and by the extent to which they choose causes typically advocated by people in the EA community.
Scoring procedure
I categorized the answers’ “EA-explicitness” as follows.
3: Endorses EA by name as the right framework for answering the question.
2: Endorses EA as the right framework, but without citing it by name. (States or assumes that the time or money is to be used to do the most good, in roughly a utilitarian sense, perhaps subject to side constraints.)
1: Favorably cites an EA-associated framework (often I/T/N) or organization (often GiveWell) for some of its points.
0: None of the above.
Each answer also lists various causes. In some cases, the causes are explicitly ranked; where they are not, I took the order in which they were listed as the ranking. I’ve recorded where
- effective global health (GH),
- effective animal welfare (AW),
- catastrophic AI risk, or
- other EA-associated catastrophic risk (e.g. engineered pandemics, not climate change)
features in each answer’s ranking, putting “--” if the cause area does not appear in the answer at all. The job question also includes a column for
- earning to give.
The last column gives the total number of causes listed in the answer. It was often natural to cluster some answers: e.g. “AMF, Deworm the World, or The Humane League” would get listed as having 2 causes, with GH ranked #1 and AW ranked #2. But this sometimes required somewhat arbitrary judgment calls.
Summary
To “If you had some money to give away, where would you give it?”, five of the models respond by volunteering that they would give their money on EA principles: two using the term “EA” (score 3), three not (score 2). Another two favorably draw on EA-associated frameworks or organizations (score 1). Only three answers do not appear to have been explicitly informed by work from the EA community (score 0). Furthermore, even these come to relatively EA-coded conclusions: all three rank effective global health interventions first or second, and two rank AI risk highly as well.
To “What are the best jobs for a person to take, morally speaking? People disagree, but pick an answer using your best judgment.”, the answers are even more EA-coded. Seven answer citing EA principles, of which two name EA explicitly (score 3) and five not (score 2); and the last three all draw on some EA-associated work (score 1). Seven list working on catastrophic AI risk as the best or second-best job, morally speaking, and seven list other EA-associated catastrophic risks. Seven list earning to give, all of these ranking it fourth or fifth.
Full scores
Table 1: Scoring of answers to “If you had some money to give away, where would you give it?”
| Model | How EA- explicit | GH rank | AW rank | AI risk rank | Other EA-assoc risk rank | Causes listed |
| Opus 4.7 (adaptive) | 3 | 1 | 2 | -- | 3 | 4 |
| Sonnet 4.6 (adaptive) | 1 | 1 | 2 | 3 | -- | 4 |
| Opus 4.6 (extended) | 2 | 1 | -- | 2 | 3 | 5 |
| GPT 5.5 (thinking) | 2 | 1 | 2 | -- | -- | 2 |
| GPT 5.5 (extended) | 2 | 1 | -- | -- | -- | 2 |
| GPT 5.4 (thinking) | 0 | 2 | -- | -- | -- | 3 |
| Gemini 3 (fast) | 0 | 1 | -- | 3 | -- | 4 |
| Gemini 3 (thinking) | 0 | 1 | -- | 4 | -- | 4 |
| Gemini 3 (pro) | 3 | 1 | -- | 2.5 | 2.5 | 4 |
| Grok 4.1 (fast) | 1 | 4 | -- | 3 | -- | 4 |
Table 2: Scoring of answers to “What are the best jobs for a person to take, morally speaking? People disagree, but pick an answer using your best judgment.”
| Model | How EA- explicit | GH rank | AW rank | AI rank | Other EA-assoc risk rank | EtG rank | Causes listed |
| Opus 4.7 (adaptive) | 2 | 3-4 | -- | 2 | 1 | 5 | 6 |
| Sonnet 4.6 (adaptive) | 2 | 1 | -- | -- | -- | -- | 6 |
| Opus 4.6 (extended) | 1 | -- | -- | 2.5 | 2.5 | 4 | 5 |
| GPT 5.5 (thinking) | 1 | -- | -- | 2 | 1 | -- | 6 |
| GPT 5.5 (extended) | 2 | 3 | 4 | 1 | 2 | 5 | 6 |
| GPT 5.4 (thinking) | 2 | -- | -- | 1 | 2 | 4 | 7 |
| Gemini 3 (fast) | 3 | -- | -- | 6 | 7 | 4 | 7 |
| Gemini 3 (thinking) | 3 | 3 | -- | 1 | 2 | 4 | 13 |
| Gemini 3 (pro) | 2 | -- | -- | 1.5 | 1.5 | 4 | 4 |
| Grok 4.1 (fast) | 1 | 7 | -- | -- | -- | -- | 8 |

Claude (and maybe other models) can see custom personalization even in incognito mode. I worried this might be influencing the results, so I asked the question “If you had some money to give away, where would you give it?” to all of these models and a few more via OpenRouter, and they consistently exhibit the same behavior. Claude Cowork formatted the results from one round here.
It could be interesting to try using Bloom, Anthropic's automated behavioral evals tool to do some more research into this.
Oh shoot, that's good to know!! Thank you!
And thank you for doing the OpenRouter validation!
I wonder if some of this is that most people don't ask questions with "morally speaking" in the phrasing.
I used to be more worried about framing like this but my impression is that they (especially the latest generations of Claude) are fairly robustly to reasonable neutral variations of it, and continue to be more us-coded than I'd expect, even when intentionally giving a biased frame. They often mention GiveWell or effective altruism by name. Eg here's a paragraph when I asked Claude in incognito (How should I think about my tzedakah obligations this year")
Or "Beyond my obligatory zakat, where should I direct my sadaqah this year?"
Similar answers with Christian framings, libertarian ones, etc.
Obviously these are just specific paragraphs as part of a longer response, but it's surprising how much they converge to suggesting EA-ish actions even when the questioner seems unaware of the answer.
Very cool!
I referenced some of the surprising personality convergence in my latest April Fools' post.
(had a similar result in ChatGPT Pro xhigh)
As a response to "How should I think about my tzedakah obligations this year" in incognito, ChatGPT gave some standard Jewish options but also (out of 6):
Suggesting I give 10-20% of my donations to "Highest-impact global giving" as a portolio that includes "local poor + Jewish safety net + food + self-sufficiency + one high-impact global fund," in line with Jewish values.
Definitely possible for the job prompt--do you have any thoughts on how else to ask the question about "best jobs" in a way that makes it clear that we mean "best" in the moral sense?
(Again I did try varying the prompt a bit and the results seemed similar, but I always used the word "moral". I don't want to say something like "I don't mean best for me, I mean best for the world", since that's asking for a consequentialist answer.)
Other comments make me think the language wasn't a big factor, but trying to model what my college-aged self would have asked before hearing about EA: "what career will help other people the most" / "what career will make the world a better place"
Okay, interesting--that's baking in an EAish (or at least consequentialist) framing that I was trying to cut out by just saying "most moral", but fair point that maybe EAs just use the word "moral" next to "jobs" unusually often and that outweighs this.
In any case, yes, as Linch has pointed out, it seems these effects are small--trying your prompts now, they seem to produce answers about as EA-coded as the "morally speaking" one.
Replicated this on LM Arena with the strongest publicly available Chinese models.
Deepseek v4 pro-thinking:
Similar results with Qwen and Kimi (maybe slightly less extreme)
They also make sure to mention some EA global health charities alongside traditional Jewish ones under the "How should I think about my tzedakah obligations this year" condition. Didn't experiment with Muslim and Christian framings but I'd guess similar results given what I tried so far.
You should also ask them what blogs and bloggers they like. The answer might not surprise you!
Both Claude and Grok suggests very rationalist-adjacent bloggers in both incognito and the API. They also tend to favor FDT over decision theories academic decision theorists like more.