Much has been said about SBF having been EA’s biggest blunder. That was an error of commission, i.e. one resulting from an action. But as seasoned investors will tell you: errors of omission, which result from a lack of action, can be far more consequential. For those who look, there’s a glaring divergence in the choices of Dustin Moskovitz and Mark Zuckerberg after each became a multi-billionaire. Among the co-founders of Facebook, Moskovitz seems to have been one of Zuckerberg's closest friends, if not the closest. This video shows that Moskovitz left Facebook on good terms and that the two had a great deal of respect for each other. Despite the widespread criticism of Meta over the past several years, Moskovitz recently defended Meta and Zuckerberg in an interview without equivocation.

After leaving Facebook, Moskovitz chose the road less traveled. He went on to be the youngest person to sign the Giving Pledge and, as most of us know, the largest financial backer of EA causes. Far more unusual – and admirable in my eyes – is the resistance to indulgence that he has paired with his altruistic ambitions. As of 2012, he not only flew commercial but did so in economy seating. I have no way of knowing whether his personal spending is still so restrained, but the relatively low profile he’s kept is indirect evidence that it is. Despite his wealth and influence, very few people would recognize Moskovitz in person; and those who do are likely not the type who'd pester him. This makes it easier for him to lead a more normal life. My assumption is that his lifestyle remains exceptionally modest relative to others in his position.

If his actions have a message, it’s that the ultra-wealthy should donate nearly all of their resources and take great care to do so effectively. Moskovitz believes himself to have benefited from incredible luck (both of the conventional kind and moral luck). At least partly as a result, he has a genuine desire to do the most good for others with his fortune. In his words:

I'm very fond of this quote from Louis C.K. (comedian) below and generally view the world through this lens: “I never viewed money as being my money, I always saw it as The money. It's a resource. If it pools up around me then it needs to be flushed back out into the system.” In other words, Cari and I are stewards of this capital. It's pooled up around us right now, but it belongs to the world.

Zuckerberg must be aware of Moskovitz’s worldview; it’d surprise me somewhat if Moskovitz had never gone out of his way to attempt to persuade Zuckerberg of it. I'd certainly be surprised if the two hadn’t had at least a couple of deep conversations about morality and altruism. But Moskovitz does not strike me as someone who would be at all aggressive in his approach to convincing others. This feels like a natural place to note that I suspect he would be uncomfortable, on some level, with parts of what I’ve already said and much of what I go on to say.

Despite Moskovitz’s example and (what I imagine to have been) his efforts to persuade, Zuckerberg has more or less shrugged at altruism. He does not appear particularly motivated or strategic. I should mention here that I’ll sometimes write “Zuckerberg” when referring to actions that his wife, Priscilla Chan, is also responsible for – perhaps even more so, given that she seems more involved in their philanthropic activity. My main reason for doing so is that my focus is on Moskovitz’s influence (or lack thereof), and I don’t know anything about Chan and Moskovitz’s relationship. Also, because she kept her maiden name, it’s simpler syntax-wise.

From what I can tell, the share of Zuckerberg’s wealth that he has parted with philanthropically is about average for the Forbes 400 richest people, despite him being 3rd on the list.[1] I should note that he, too, has signed the Giving Pledge and at one point announced he’d give away ~99% of his wealth. He did so through an LLC, however, which is unusual and led many to question his motives and the ultimate result of this commitment. For now, my take is that Zuckerberg is lukewarm on altruism.

If hands-on philanthropy is not of interest – but he does care about maximizing the benefit of his fortune – Zuckerberg could have followed in the footsteps of Warren Buffett. That is, he could have simply given large sums to existing organizations with an exceptional track record, like the Gates Foundation, GiveDirectly, and GiveWell-selected organizations.[2] Instead, he and Chan founded the Chan Zuckerberg Initiative (CZI). According to their website, CZI uses a ‘diversity, equity, and inclusion lens’ and works to:

  • support the science and technology that will make it possible to cure, prevent, or manage all diseases
  • ensure that every student – not just a lucky few – can get an education that’s tailored to their individual needs and supports every aspect of their development
  • support work to create greater opportunity for communities in the Bay Area and across California

There is virtually no sign that Moskovitz’s worldview has rubbed off on CZI, or that their cause selection was deeply thought through independently. And Zuckerberg seems at best moderately convinced of EA causes, e.g. biosecurity and global hunger. Further, his choices fly in the face of Moskovitz’s emphasis on issues like AI risk and animal welfare.[3]

So what is going on here?

As a general matter, I see a few reasons why those in exceptionally influential positions fall far short of their impact potential (by giving relatively little or not giving strategically):

  • Greed or self-absorption and/or lack of motivation
  • Ignorance or skepticism: I’m not aware of compelling reasons to give; no one around me is doing it; or I am distrustful about how my donations will actually get used
  • Worldview difference: I just fundamentally disagree about how the world works or should work (e.g. Oscar Wilde vs. Pope Francis)

In the case of giving to certain causes, like x-risk, there is another factor:

  • Risk aversion: It’s better to take a 95% chance at doing Y amount of good than a 1% chance of doing 1,000 x Y amount of good.

Note: There aren’t always clear dividing lines between these categories; they’re not mutually exclusive and can interact in complex ways. (This could be a separate post if people are interested; or if someone has already fleshed this out, please share it!)

I think it’s safe to say that ignorance is not the reason for Zuckerberg’s lack of altruistic ambition. His friendship with Moskovitz alone is enough to convince me of this. I also find it hard to believe that risk aversion is the issue. The man turned down a $1 billion offer for Facebook at a time when everyone around him thought taking it was a no-brainer, and he invested many billions into VR and AR research in the face of Meta’s 80% share-price collapse in 2022.

There was little evidence of self-indulgence being the explanation prior to 2020. That’s when he started on a spending spree that included a mega yacht, an upgraded private jet, massive land purchases in Hawaii, and mysterious construction projects – totaling around $600 million over a few years.[4] While Moskovitz is putting his resources toward avoiding civilization-ending catastrophes for the benefit of humanity as a whole, Zuckerberg is spending millions to build a personal apocalypse bunker. In any case, in his earlier years, I see no reason to think that Zuckerberg held back philanthropically because of personal lavishness.

I also believe worldview differences are at play here. He seems sincerely skeptical that AI could become capable of disempowering humanity. Then again, he evidently thinks that some apocalypse scenario is worth taking seriously (unless the bunker is just for funsies). Unlike early Facebook investor and former board member Peter Thiel – who has shared his heterodox beliefs in great detail – Zuckerberg has not given much insight into his deeply held beliefs. I’ve listened to a few interviews in which he’s explained his overarching motivation in life by saying things like ‘I just want to build cool things’. He certainly has wide-ranging interests, including history and language. But he’s consistently come across as more or less content with the way the world is, or skeptical that anything other than technology can improve the human condition. Further, his recent commitment to political and moral abstinence (both leading up to and after the 2024 election) suggests that he squarely rejects the way Moskovitz sees the world.


To be clear, I certainly did not expect Zuckerberg to simply copy and paste any particular person’s worldview and/or altruistic approach. (Although, again, this is essentially what Warren Buffett did for many years with Bill Gates.) Still, I find the extent to which Zuckerberg has veered from Moskovitz puzzling, and I assumed someone would eventually shed light on it. Yet this mystery remains more or less unremarked upon, as far as I can tell. So a large part of my motivation for writing this is sheer curiosity. If others have pondered this as well, I’m very interested in hearing your thoughts.

I do wonder if there are lessons to learn here, though. One that stands out is that reason and argumentation alone are far too unreliable – as both a direct and an indirect way to get people bought into EA. To reiterate the gist of my main argument so far: an intelligent, thoughtful person was not perceptibly moved by the reasons his trusted friend provided to do an epic amount of good with his vast resources. This does not bode well for the standard intellectualized EA approach. If we look beyond reason, there are two categories of emotional motivators at our disposal.

The first is negative emotion. The soft, non-judgemental approach EA has taken was a good instinct and may have been optimal, but whether it remains so is something we should question. In addition to its failure to sway Zuckerberg and others like him, there is evidence for the efficacy of an aggressive approach. The thought of following the example of President Trump and Elon Musk in any sense – no matter how narrow – is off-putting to me and most reading this. But let’s face it: their obliteration of diplomacy and even decency, at the opposite end of the spectrum, has not backfired in the way most expected. Some would argue that their forceful, unfiltered brand is a feature, not a bug. I am not suggesting we dispense with decency, and I have many reasons to doubt that aggressiveness is the answer. Also, although I have some rough ideas, I’m not confident that these tactics or any others hold much promise. I am, however, inclined to leave this option on the table.

A far less controversial tactic is to put much greater emphasis on positive emotion. Specifically, EA could do a lot more to increase its focus on storytelling through entertaining, emotionally potent media, like TV. “The Good Place” is an example of a TV show that achieved some influence and commercial success despite its moral content. I think many in EA have recognized this, and there’s been some movement in this direction; a recent 80,000 Hours episode explored this possibility, for instance. Such an approach does come with much ambiguity and, I suspect, is not in the wheelhouse of many EAs. Still, I think it’s worthwhile to explore the possibility seriously.

  1. ^
  2. ^

     Open Philanthropy did not exist at the time the Chan Zuckerberg Initiative was started.

  3. ^

     Meta is leading the charge in developing open source AI models, which many experts argue is the riskiest approach; and the head of the company’s AI research efforts, Yann LeCun, is famously dismissive of AI x-risk. Zuckerberg is very public about his enthusiasm for eating meat (sometimes hilariously so) and the family has a breeder dog.

Comments

I don't know to what extent Moskovitz could have influenced Zuckerberg, but I am somewhat intrigued by the potential power of negative emotion that you bring up.

Ironically, one of the emotions that reflection on effective altruism has brought me is rather intense anger. The vast majority of people in developed countries have the ability to use their resources to save lives, significantly mitigate the mass torture of animals, or otherwise make the world a much better place with the power they have. Yet, even when confronted squarely with this opportunity, most do not do it. 

I think about other mass injustices and the movements that have sought to address them, and I think we remember that there was a place for righteous fury – I think, for instance, of women's suffrage or the civil rights movement. Yet the attitude of EAs is often conciliatory, milquetoast, professorial... almost embarrassed to be holding beliefs in which the judgment of most humans is only a close corollary away.

I realize that in one-on-one interactions, a condemnatory approach is unlikely to gain us allies. But I wonder if a powerful engine for fighting global poverty, animal torture, and the continued existence of conscious life might be the activation of the emotion that such matters merit.

I can relate to a lot of what you said. Also, that's an astute point you make about a more extreme approach being unlikely to help in one-on-one situations but having potential for shaping broader culture or changing the discourse.

Here's one specific idea: a public record of how the ultra-rich use their money, which also converts the money they spend on themselves (e.g. on a yacht) into lives that could have been saved had they donated those funds. 

I disagree. I think the entire reason Dustin Moskovitz was able to begin liquidating and diversifying his Facebook/Meta stock is that Zuckerberg is holding down the fort as CEO and controlling shareholder with 58% voting rights. Zuckerberg assumed the fiduciary duty of actually running Meta to maximize shareholder return, and even strategically got married the day after the Facebook IPO – just to make it super clear who was going to be in charge of the company. Dustin Moskovitz is in a similar position at Asana.

Contrast that to FTX where Bankman-Fried was giving away company and shareholder money before FTX ever made any profit or even went public, and consequently EA suffered reputational damage and FTX fund recipients faced lawsuits and clawbacks from FTX shareholders and deposit holders.

Basically, I think it can be a slippery slope – fiduciary-duty-wise, at least – to ask billionaires who have an active role in running their companies to aggressively divest their shares for the sake of philanthropy.

I wasn't saying anything about the timing or financial/liquidity strategy Zuckerberg should use. Rather, my main point was that Zuckerberg shows no signs of being convinced either that he should take altruism seriously or that, in doing so, EA has a lot to offer.

I see where you're coming from and you're right. My point was, the actions of billionaire tech CEOs (and CEOs in general) tend to be put under a microscope by the public and news media merely because they are CEOs of influential companies. So most CEOs try not to do anything that invites more criticism or controversy. It's true that EA tries to be 'boringly effective' and 'verifiably high impact'. However, as of now EA is still a very niche social movement and it is emphatically not a 'plain and boring philanthropic choice' in the same way as donating to your local place of worship or the local YMCA is.

Zuckerberg has been mocked mercilessly for stupid things like 'looking like an alien' and 'sweating during congressional testimony'. He gave a $75 million donation to a bay area hospital and got a building named after his Harvard-educated doctor wife -and he still got criticized. The Chan Zuckerberg Initiative tried their best to toe the line and donate to 'woke' left wing causes and activists, and those activists once asked him to resign from his own charitable foundation. So now Zuckerberg is just doing zany-but-popular billionaire stuff like building a 7 foot tall statue of his wife or re-recording his favorite pop song. Of course I'm not saying that sort of thing is right or 'moral', but you can see Zuckerberg's point of view from his actions.

For that matter even Moskovitz sometimes gets the Zuckerberg treatment on this forum itself, and can't just be a donor giving hundreds of millions of dollars a year. Instead the guy has to politely tiptoe around people's sentiments while worrying about risks to both EA and his day job as CEO of another multi-billion dollar company. So I wonder if EA is really appealing as a philanthropic choice to other billionaires.

Basically I think EA should become a boring thing like the YMCA, to be more impactful. Philanthropic interest and funding tends to be a feast-or-famine thing (hah) for some reason. So I think EA needs to be popular first, and then the billionaire funding may very well follow afterwards.

I agree with the general point that Zuckerberg is too committed to being Facebook boss to give much of his stock away now, but he and his wife put $2b in Facebook shares into their own foundation, which isn't particularly EA-inclined (either explicitly or broadly). That's less than Moskovitz-Tuna have given from a bigger chunk of wealth, but it's non-trivial, and certainly enough to show he's not taking most of his cues from them.

I don't consider this to be any sort of failing on Dustin's part (I don't expect my bosses to listen to my donation philosophy if they 100x their current net worth either, even though they definitely have some points of agreement with me and trust my judgement on some things) and think the more salient question is "why have so few people who are not Mark Zuckerberg but are also vaguely in the orbit of EAers donated to EA causes compared with other causes?"

As for SBF, his "Future Fund" was less than FTX committed to stadium sponsorship, so I don't think the desire to top that up can be blamed for his recklessness (even if the broader conceit that everything he did was for the greater good was). It's absolutely possible to give significant amounts to philanthropic causes (EA or otherwise) and retain control of a business without being Sam.

I think you're asking the right question. I've tried to answer that in my reply to Trappist here.

Personal take: it feels to me that MZ is more about profits (given recent events as well), and if something does not offer a monetary return, I doubt it would attract him.
