Cross-posted from my blog.
Because of the generosity of a few billionaires, the effective altruism movement has recently come into a lot of money. The total amount of capital committed to the movement varies day to day with the crypto markets on which Sam Bankman-Fried’s net worth is based. But the sum was recently estimated at $46 billion[1].
The movement has been trying to figure out how quickly it should give away this money. There are lots of fascinating questions you have to resolve before you can decide on a disbursement schedule[2]. An especially interesting one is: how many future billionaires will be effective altruists? If a new Sam Bankman-Fried or Dustin Moskovitz joins the movement every few years, then the argument for allocating money now becomes much more compelling. Future billionaires can handle future problems, but only you can fund the causes that are neglected and important today.
Here are the three reasons I expect the number of EA billionaires to grow significantly.
- Effective altruism allows thymotic natures to achieve recognition and impact that is otherwise unavailable in the modern world.
- Effective altruism acts as a Schelling point for ambitious and risk-taking founders.
- Effective altruism creates alignment in an organization and reduces adverse selection.
Thymos

In The End of History and the Last Man, Fukuyama argues that the leading contender for the final form of government is capitalist liberal democracy. Capitalism is peerless in satisfying people’s desires and democracy is so far the best method of affording them recognition. His greatest hesitation about the sustainability of liberal democracies is whether societies where everyone has comfortable lives and no one gets special recognition can appease the appetites of the most ambitious personalities. As he puts it:
[T]he virtues and ambitions called forth by war are unlikely to find expression in liberal democracies. There will be plenty of metaphorical wars—corporate lawyers specializing in hostile takeovers who will think of themselves as sharks or gunslingers, and bond traders who imagine … that they are “masters of the universe.” … But as they sink into the soft leather of their BMWs, they will know somewhere in the back of their minds that there have been real gunslingers and masters in the world, who would feel contempt for the petty virtues required to become rich or famous in modern America. How long megalothymia will be satisfied with metaphorical wars and symbolic victories is an open question.
You can always have a great late-night conversation by asking, “What would Napoleon or Caesar do if he were born in modern America?” Surely the unique combinations of genes which make up the will and capacities of such men have not disappeared. But today their ambition cannot be exercised through great conquests and wars. So how is their energy redirected?
The first place to look for modern Caesars or Alexanders is Silicon Valley. Napoleon’s Grande Armée was basically run like a startup - extremely efficient and flexible, with quick promotion and delegation for the most able and even quicker termination for the least. Say what you will, they certainly knew how to capture bigger markets. When Napoleon told the Austrian statesman Metternich, “You cannot stop me, I can spend 30,000 men a month,” he was simply expressing the principle of blitzscaling. It would be much as if the Uber CEO told the Lyft CEO, “You can’t catch up to us, I’m willing to burn 1 billion dollars of VC cash a quarter.”
There are only a few startups whose missions are so intrinsically motivating that they would satisfy the most ambitious individuals in history. Sure, building a colony on Mars qualifies, but do you think that shipping groceries to your door a few minutes faster would gratify a Caesar? So the question becomes: how will such personalities be placated in the future?
One answer is that all the Napoleons alive today will just build a space company and compete with Musk, Bezos, and Branson (it’s not unreasonable to guess that these men would have been Caesars if born in an earlier time). But at some point the market for sending satellites to space has to become saturated. So what happens next?
Effective altruism offers an interesting resolution to this question. It gives these thymotic natures the excuse to build what seem like frivolous or unnecessary businesses, like taxing zero-sum crypto speculation or making Chipotle get to someone’s apartment faster, because the money they make from these activities can be translated directly into impact.
Effective altruism’s lasting significance will extend beyond the lives it has saved. The ledger will also include the damage prevented by giving Alexanders and Pompeys something compelling to do other than recreate the metaphysical meaning and unparalleled stakes of great wars. Having these ambitious individuals in your society without release valves like effective altruism is as dangerous as having crackling kindling in your house but no fireplace to put it in.
Schelling point
One useful way to think about effective altruism is as a Schelling point for young, risk-neutral, ambitious, pro-social tech nerds - i.e. exactly the kind of people you would want to build a startup with. You’re a programmer who wants to have a big impact with his career? I’m also a programmer who wants to have a big impact with my career! Let’s build a billion dollar company and give away our wealth!
People are almost universally reluctant to make the kinds of risky bets required to get very rich. They face steep diminishing returns from greater consumption (what are they going to spend an additional $250,000 on? A slightly larger bachelor pad in San Francisco?). But since many charitable causes scale extremely well, a utilitarian philanthropist should care just as much about earning the next quarter million as he did about the previous one.
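To make that asymmetry concrete, here is a stylized sketch (my own illustration, not anything from the post itself, assuming log utility of consumption and a constant cost k to save a life): concave consumption utility penalizes risk, while linear philanthropic utility cares only about expected value.

```latex
% Stylized sketch. Assumptions: log utility of personal consumption,
% constant cost k per life saved. Not from the original post.

% Concave consumption utility: by Jensen's inequality, any risky
% wealth gamble W is worth less than receiving E[W] for sure.
u(c) = \log c \;\Rightarrow\; \mathbb{E}[u(W)] < u(\mathbb{E}[W])

% Linear philanthropic utility: only expected donations matter,
% which is exactly risk-neutral behavior.
v(d) = \frac{d}{k} \;\Rightarrow\; \mathbb{E}[v(D)] = \frac{\mathbb{E}[D]}{k}
```

So the same risky startup bet that a consumption-motivated engineer turns down can look strictly attractive to an earning-to-give one.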
Suppose you asked an average Google engineer in 2017 to come work for you for the chance to make 10 million dollars a day from a weird bitcoin arbitrage. They’d tell you, sorry, but my options don’t vest for two more years. But if that engineer is an effective altruist? 10% of 10 million is 1 million, and if saving a life costs $8,000, then you’d be saving 125 lives in expectation every day!
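Spelled out as a back-of-the-envelope calculation (just restating the figures above, assuming a flat 10% donation rate and a constant $8,000 cost per life saved):

```latex
% Back-of-the-envelope restatement of the figures in the paragraph above.
\$10{,}000{,}000/\text{day} \times 10\% = \$1{,}000{,}000/\text{day donated}

\frac{\$1{,}000{,}000/\text{day}}{\$8{,}000/\text{life}} = 125 \text{ lives/day}
```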
Of course, abstract moral philosophies alone are not enough to fully overcome people’s natural risk aversion, but they can help. At the very least, a movement which encourages thinking in risk-neutral, expected-value terms selects for risky and ambitious temperaments. Which means EA gives you a peer group of potential cofounders and employees who will be willing to join your harebrained schemes for getting rich.
Adverse selection and incentive alignment
When you succeed in hiring someone talented, you should be concerned. Isn’t it suspicious that you’re apparently the best company that was willing to hire her? If she’s so talented, then why didn’t some other recruiter make a more compelling offer? To paraphrase Groucho Marx, I don’t want anyone in my club who is willing to be a member.
This is the principle of adverse selection[3]. To combat it, you need an explanation for why people are choosing to work for you instead of someone else. The common attractors - prestige, money, and direct impact - won’t attract, retain, and motivate the best people. Here are the problems with each:
Prestige
- You usually have to have a long track record of doing impressive things to become prestigious. But in order to do impressive things, you need talented people. So you can’t kickstart the prestige/talent loop without recruiting talent another way.
- A person seeks prestige because of the signal it sends to future employers. But if the people you hire are really good (why would you hire them otherwise?), then you don’t want there to be any future employers. Every time an Apple engineer leaves because he can leverage his current position into better pay somewhere else, Apple loses years of investment it made training that person on its tech stack and standards.
The reason to think that employees see big tech jobs as an accreditation exercise is that the average tenure at Google is 1.1 years. Once you’ve got the rubber stamp from Google, staying there is as stupid as continuing to take courses at Harvard after you’ve got your degree. So the problem with hiring off prestige is rapid turnover, which can only be slowed by exorbitant raises for the employees who stick around. Which brings us to:
Money
- It’s expensive! You’re not richer than Apple or Google, and there’s no way you’re going to outbid them at the compensation auction, especially if you’re hiring people based on the same legible criteria that these companies use (experience, intelligence, and conscientiousness).
- The impact of money on personal consumption has diminishing returns. Your quality of life only marginally improves as you get bumped from a $300k salary to a $400k salary, and the delta is certainly not large enough to justify giving up your social life and working 60-hour weeks.
Max Weber points out in The Protestant Ethic and the Spirit of Capitalism that laborers who don’t have capitalist values instilled in them will work fewer hours when you pay them more, because they can make ends meet while enjoying more leisure. If you pay an adherent of FIRE principles Google wages instead of McKinsey wages, he retires at 27 instead of 35.
Or as Fukuyama puts it in The End of History and the Last Man,
The most successful capitalist societies have risen to the top because they happen to have a fundamentally irrational and “pre-modern” work ethic, which induces people to live ascetically and drive themselves to an early death because work itself is held to be redeeming.
If living ascetically and working beyond your own desires doesn’t describe effective altruism, I don’t know what does.
- Ironically, the people working to maximize money are those whose incentives are hardest to align with your company’s bottom line. You can’t tie their compensation to the estimated value of their output, because this suffers from Goodhart’s law: whatever proxy you use for the quality of their work will become distorted and gamed. You could instead pay them based on the performance of the whole firm, but this is not motivating if you have hundreds or thousands of employees, since almost no one individually contributes that much to the bottom line (the ones who do will go underpaid in this scheme). The only carrot you have left is impact:
Direct impact
For most companies, the impact they generate would require an economics class to understand, and even then, it wouldn’t feel as visceral as delivering medicines to a sub-Saharan African village. (Why work here? Well, by digitizing the textiles supply chain, we can reduce various transaction costs, which may ultimately have an impact on the 4th decimal of the rate of economic growth. So when can you start?) Most business activity done in a well-functioning market is positive-sum and useful, but this isn’t always obvious or motivating to someone tweaking the UI in their company’s online marketplace.
So prestige, money, and impact are all imperfect motivators: either they select for the wrong people, or they leave a gap between your incentives and those of your employees. Ideally, everyone in your company should be aligned towards the same goal - making money for the firm[4]. Effective altruism is an ingenious coordination mechanism which allows you to channel your employees’ desire for impact, prestige, and money towards improving your company’s bottom line:
Impact: If you’re an effective altruist working at FTX, the obvious way to increase your impact is to maximize the growth and profitability of the firm - the more money FTX makes, the more money Sam Bankman-Fried has to give away to charity. Every company is desperate to figure out how it can get employees to maximize shareholder value. But if you’re an EA, and your shareholders are planning on transferring their value to EA causes, then working for the company’s bottom line 60-80 hours a week becomes extremely compelling.
FTX employees’ impact is further leveraged by the small size of their team. It is rumored that half a dozen developers built the trading platform, which often sees upwards of 20 billion dollars traded daily. The original version of their risk engine (FTX’s key competitive advantage, because it prevented the clawbacks that existing exchanges suffered from) was written by a single person (CTO Gary Wang). FTX raised at a 32 billion dollar valuation in their most recent round. With around 200 employees, that means each employee is on average supporting a 160 million dollar valuation. Of course, it doesn’t make sense to split the valuation evenly between employees - but one can credibly say that top employees can have hundreds of millions (or potentially billions) of dollars of impact on the amount of money FTX will be able to donate to charity. So it’s no surprise that the average employee at FTX works 10 hours a day, 6 days a week[5].
Prestige: Since impact is prestigious when you’re working with other effective altruists, there’s added reinforcement to come in on Saturdays. Every commit you push is making the firm enough money to save a life, and everyone else on the team who gets pinged when you push knows that. And all your effective altruist friends know it too, so when you show up to EA Global with an FTX shirt on, everyone knows you’re the one paying their bills.
Money: If both the boss and the worker are earning to give, compensation acts as a signal of approval rather than a financial incentive. When you give an employee a fat bonus, you’re basically saying, “You did great work this year! You should be the one to donate $500,000 on our behalf.”
Of course, I don’t mean to imply that all (or even most) of the people working at FTX are EA maximalists. You don’t need every employee focused on their indirect altruistic impact - it’s okay if the receptionist is motivated mainly by the fact that he’s earning a higher salary at FTX than he would anywhere else in the Bahamas. But it really matters whether the core developers and leaders are aligned with the firm’s performance and growth.
Caroline Ellison (CEO of Alameda Research - the crypto market maker that SBF cofounded before creating FTX) puts it this way:
When deciding to give people a lot of decision-making power or have them manage a lot of people, prioritize very strongly how aligned they are. This alignment can come from EA or from something else. Working with them and getting to know them over time is probably more helpful for determining this than “whether they identify as EA”.
But caveating that this is at an organization with fairly legible goals, and the less legible things are the more I’d expect hiring EAs to be important [edited for clarity].
Adverse selection handicaps your ability to find the most talented people (instead of the dregs rejected by everyone else). And the principal-agent problem limits your ability to get these people to work on maximizing your future profits, instead of impressing their direct superior and playing office politics. As Byrne Hobart points out,
There are a number of institutions that exist to help solve the adverse selection problem by eroding one of the two drivers: different goals. Equity compensation, patriotism, and marriage, for example, all try to create a set of shared goals for people who otherwise don't have the same incentives. All of these things exist for other reasons, but they persist in part because they serve that function.
Patriotism isn’t that compelling when you’re building a search engine instead of marching into battle, equity compensation doesn’t scale when individual contributions are lost in the froth of a large organization, and marriage is even less scalable. What remains are causes, especially universal and all-encompassing ones. Which makes effective altruism as powerful as the best management techniques and compensation schemes.
Obviously, SBF didn’t adopt effective altruism cynically in order to recruit and motivate workers, nor could he have achieved that end if he had pursued EA deceptively. SBF has clearly and credibly been an EA since his college years. He has already given away 100 million dollars just this year and has pledged to give away the rest of his wealth in his lifetime. If his EA bona fides were in question, he couldn’t get super talented people to work 60-hour weeks for him. And even if you could become an EA just to recruit and motivate more effectively, it wouldn’t exactly be a bargain. After all, you still actually have to give away your money. So becoming an EA cynically would only make sense if all you cared about was the success of your company (and not how much of its wealth you get to keep). But if your company is worth caring about intrinsically (because of the direct impact it has, for example), then you don’t need EA to galvanize your employees.
FTX’s business is providing liquidity between different cryptocurrencies, but the key to its success has been that, for its employees, FTX provides liquidity between money and impact.
Thanks to Samuel Marks and Misha Yagudin for extremely helpful and thought-provoking comments!
[1] For a very low-end estimate, just add up Dustin Moskovitz’s 9 billion + Sam Bankman-Fried’s 20 billion.
[2] For example: 1) How much more impactful is spending money now than in the future? 2) With what reliability and magnitude will the wealth compound if you just leave it in a trust fund? 3) Will spending money now help you build up the infrastructure to better disburse funds in the future?
[3] For a great treatment of the subject of adverse selection, read Agustin Lebron’s The Laws of Trading and check out my podcast with him.
[4] Just this morning, I was talking to someone who works at a hedge fund. He asked me, “Dwarkesh, is the purpose of a hedge fund to make money?” I wondered if this was a trick question, because the answer seemed so obvious. “Yes?” “Yes, thank you! You’d be surprised at how many people we interviewed would give some other answer, like ‘increase togetherness’ or something. Almost everyone who said ‘make money,’ we hired.”
[5] Or at least that’s what SBF said in an interview whose title and link I can’t remember.
Comments

Thanks, Dwarkesh, for a fascinating, compelling, and insightful essay.
I share your hope that EA philanthropy will become more of a Schelling point for billionaires.
One key issue is that about 88% of billionaires are men. The percentage of self-made billionaires who are male seems even higher. So understanding the motivational psychology of men may be especially important in understanding how to nudge billionaires into EA.
Likewise, although young tech billionaires get a lot of media attention, out of the world's 2,700+ billionaires, almost all are middle-aged or older, and the average one only became a billionaire in their 60s. So, understanding the motivational psychology of older men may also be especially important.
As you point out, the motivations of thymos, prestige, money, and direct impact are important. True, but I think this is somewhat of a young man’s take on the psychology of older men. Old rich guys have typically been married once or twice, have a few kids, have some grand-kids, and are often quite focused on dynasty-building, succession issues, and legacy. What will keep their family safe, thriving, and prosperous? Who will run their businesses after they’re gone? What will be the main threats to their family, community, and nation in the future? What kind of world will their grand-kids grow up in?
This last question may be the most compelling entry point for introducing older male billionaires to EA considerations such as long-termism.
In my opinion, EA needs to think about communication tactics for expressing EA ideals, values, and strategies that are more compelling to rich older guys concerned about their reputational and dynastic legacy and their grand-kids' well-being. Those communication tactics might not resemble those that are best for persuading 22-year-old, elite, hyper-rational college students to join EA groups or forums. Older rich guys may not be persuaded by the usual moral-philosophical appeals to maximizing net total sentient utility in the future light-cone. But they may be persuaded that EA is one of the best ways they can create (1) an ethical, honorable, and impressive legacy, and (2) a better future world for their family dynasty to enjoy.
This is a really interesting and insightful set of ideas! I’m drafting an essay in response to your points - stay tuned! And thanks for reading and providing such a thoughtful set of comments!
I think the thesis is plausible here, but it would be more credible and easier to discuss and act upon if you gave more precise predictions or confidence intervals (e.g. "I think with X% confidence there will be Y billionaires with an aggregate net worth of >Z, excluding Dustin Moskovitz and the FTX/Alameda crew, in EA by 2027").
I made a bet with a fellow blogger!
$250, even odds: 10 new EA billionaires in 5 years
https://twitter.com/dwarkesh_sp/status/1543368543009390592
Also, I made a Manifold market on this.
That seems like quite the bold prediction, depending on the operationalization of "new" and "effective altruist".
I would give you 4-1 odds on this if we took "new" to mean folks not currently giving at scale using an EA framework and not deriving their wealth from FTX/Alameda or Dustin Moskovitz, and required the donors to be (i) billionaires per Bloomberg/Forbes and (ii) giving >$50m each to Effective Altruism-aligned causes in the year 2027.
I would be happy to take it at those odds! I'll DM you later about the bet!
This DM never occurred, FWIW, as of t+8.
Really sorry man, unfortunately I forgot about it. I'm happy to accept that bet in public. How do you propose we make it official? Let's do $10 to $40?
No, I don't want to bet at this point - I'm not interested in betting such a small amount, and don't want to take the credit risk inherent in betting a larger amount given the limited evidence I've got about your reliability.
Alright.
And maybe even more if you open Metaculus questions on those events.
I wrote some similar questions in the middle of last year, prior to FTX scaling up their giving; they could be used as a template:
https://www.metaculus.com/questions/7340/new-megadonor-in-ea-in-2026/
https://www.metaculus.com/questions/7862/sam-bankman-fried-to-donate-1bn-before-2031/
If it's so easy for a driven EA to become a billionaire, then why do you spend your days podcasting (seriously)?
Good q! Honestly, it's something I may seriously pursue in the relatively short-to-medium term. I'm 21 and six months out of college, so I don't think waiting a bit longer imposes a huge cost on the odds that I become a billionaire. And I'm learning a lot and building connections and a platform that will be helpful if and when I do pursue becoming a billionaire.
I found your discussion of institutional morality as a factor in employee retention and motivation to be compelling.
Although the focus in my work on Guided Consumption has been more on how companies explicitly working for charities instead of traditional shareholders might gain favorable treatment among consumers, this could definitely be true for employees as well.
Ownership or a right to profits held by charities would also provide a degree of Ulysses-pact commitment to charitable purpose. I don't believe that there is much risk of this happening, but, theoretically, Sam Bankman-Fried could wake up tomorrow and abandon his commitment to EA principles, thus drastically reducing, or eliminating, the utility of money in his hands. A more realistic issue is that founders/owners may have difficulty credibly signaling their own commitment to virtuous principles. If a founder/owner committed equity interests irrevocably to worthy charitable causes, far fewer such questions would be reasonable. Even if that actor were insincere in his/her virtuous commitment and/or changed beliefs or motivations in the future, the equity of his/her business would still benefit the same purposes. Furthermore, it may be that different actors within our economy, such as consumers, feel differently about their activity benefiting a cause area directly rather than benefiting another entity who will predictably relay that benefit for good.
One obvious disadvantage of profit accumulation through Guided Consumption, rather than profit accumulation to an individual EA agent, is the lack of flexibility. A Guiding Producer is committed to directing profits to charities and/or nonprofits according to how it advertises. Which organization is most cost-effective to assist will often vary over time, and thus, under a very basic profit-destination model, a Guiding Producer/Company would not be able to exploit the highest-value charitable opportunities. One possible solution would be profit destinations that are themselves flexible to conditions, such as Open Philanthropy. Of course, consumers, employees, and other economic actors may be less likely to discriminate in favor of flexible charitable profit destinations.
I really like this post! It is well aligned with things I believe about the trends in EA.
I'm really happy to hear that! Would be curious if the stuff about incentive alignment and adverse selection has actually helped Wave given how altruistic and impactful your mission is!