An update in favor of trying to make tens of billions of dollars

by Mathieu Putz · 16 min read · 14th Oct 2021 · 44 comments


Earning to give · Entrepreneurship · Sam Bankman-Fried · Career choice

Key takeaways

  • Sam Bankman-Fried was able to make over $20 billion in just 4 years.
  • By staying in quant trading, he’d have made several orders of magnitude less money over that period.
  • Presumably, (way?) fewer than 100 EAs tried something similar.
  • If that’s our base rate, then maybe more EAs should try to found multi-billion dollar companies.

Note on this post:

This is my first EA Forum post. One stumbling block in the past has been having very high expectations for potential post ideas and then not writing them up at all. So to counteract that, here I've tried writing up my thoughts on one specific thing without having as broad a scope as might be ideal.

Who am I talking to?

This post focuses on the expected value of founding a start-up. Of course, there are many reasons not to found a start-up other than beliefs about its expected value. To start with, it requires high levels of sacrifice, and the most likely outcome is probably failure. Furthermore, the qualities required to make a good founder are quite rare, and if you don’t have them, that’s no reason to feel bad! In fact, there are probably lots of other ways you can do a ton of good!

So here I will focus on the people who think they might be a good fit and who would consider founding a start-up if they were convinced it was the highest-impact thing for them to do.

This is only one way to look at this

There are several approaches one could take to this question and this is only one of them. That’s why the post is titled “An update in favor of trying to ...” and not “You should probably try to …” (although that could still be true!). The most important factor is probably your inside view. But there are also other outside views! See these two posts from Brian Tomasik and Applied Divinity Studies for instance[1]. They both consider the success rate of companies that got into YCombinator among other things. A paragraph I particularly liked from the latter post was this:

I understand that the odds of becoming a billionaire are low, but it doesn’t matter if you only consider the conditional probabilities. What’s the cost/benefit of taking 6 months off your day job to work on a startup? Conditional on being successful there, what’s the cost/benefit of trying very hard to seek out venture capital? Given that you’ve raised money, what’s the cost/benefit of trying to make a billion dollars? The bet sounds insane to begin with, but at each step you’re taking on a very reasonable level of risk.

The argument

Sam Bankman-Fried is an example of a person motivated by EA to earn as much as he could in order to give it all away. It’s going pretty well. His net worth is estimated to be around $22.5 billion and he’s the world’s richest person under thirty. It took him about 4 years (!) to build most of that wealth, from founding Alameda Research in October 2017 to today.

Lincoln Quirk, co-founder of Wave, which is now valued at $1.7B, is also worth mentioning here. He too has been interested in EA for a long time. I think for the purposes of this post though, it’s fine to focus on Sam. Reasoning in this footnote[2].

Edit 18/10/2021: Stefan Schubert points out in the comments that it's still worth considering Wave as a datapoint confirming that Sam Bankman-Fried wasn't a total one-off. I agree!

Can we conclude anything from Sam’s story? I think so! Specifically, I want to lay out an argument that on current margins, more people should take bets to make tens of billions of dollars. That is, if you are in a reference class similar to Sam’s before he founded Alameda Research and you are currently considering founding a start-up, then this post aims to cause you to update in favor of that.

Let’s first ask how many EAs tried to found companies that would potentially be worth billions. Let’s call this number N.

David Moss estimates there are about 2300 highly engaged EAs. A little under 40% of them are earning to give. How many of those are trying to found multi-billion dollar companies? I don’t know. But founding a multi-billion dollar company probably demands sacrifices larger than even what most “highly engaged EAs” are doing. Based purely on personal impressions, even 10% of that 40% would seem surprisingly high.[3] So that would give us a not-very-confident upper bound of N = 10% * 40% * 2300 = 92; let’s make it 100. Keep in mind though that for all I know this number could be as low as 2, just Sam and Lincoln. Another person that might count would be Sam’s cofounder Gary Wang. Also maybe Emerson Spartz? I expect I’m missing a few.

So say N = 100 and for the time being let’s naively assume these people were all exactly identical to Sam prior to founding Alameda Research, and then the dice were thrown and everyone except real Sam failed. Then the expected value of setting out to found a multi-billion dollar company for this class of people would roughly be $22.5 billion / 100 = $225 million. That’s huge! And if N were much smaller, say 10, then we get $22.5 billion / 10 = $2.25 billion!

How much could he have donated by staying at Jane Street? I don’t know, but optimistically something like $10 million a year over those same 4 years? (He seems to be a very good trader, and this 80,000 Hours podcast episode suggests earnings can go that high.) That would make $40 million. That’s over 5 times lower than the conservative estimate above and 50 times lower than the aggressive one. For any N < 560, founding a start-up was higher expected value. And keep in mind that I’m using a very optimistic estimate for his earnings at Jane Street. $10 million total might be more realistic.
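The back-of-the-envelope comparison above can be sketched in a few lines of Python. All figures are the post's own rough estimates (the ~$22.5B outcome, the guesses for N, the $40M Jane Street counterfactual), not data:

```python
# Rough expected-value comparison using the post's own estimates.
sbf_outcome = 22.5e9   # Sam's approximate net worth after ~4 years, in dollars

# Naive EV per attempt under different guesses for N,
# the number of EAs who tried something similar.
for n_attempts in (100, 10):
    ev_startup = sbf_outcome / n_attempts
    print(f"N = {n_attempts}: EV of founding ~ ${ev_startup / 1e6:,.0f}M")

# Optimistic 4-year counterfactual earnings at Jane Street.
jane_street = 40e6
print(f"Founding wins in expectation for any N < {sbf_outcome / jane_street:.0f}")
```

With N = 100 this gives $225M per attempt, with N = 10 it gives $2.25B, and the break-even N against the $40M counterfactual comes out just above 560, matching the numbers in the text.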

A lot of bad assumptions have gone into this. Other EAs are in fact not identical copies of pre-success Sam. But the difference is so big (5x to 200x) that I think we’re still on to something.

What do we make of those numbers? Here’s one way to think about it.

Think of people as being on a spectrum from “very likely to succeed at building a billion-dollar company” to “very unlikely to succeed at this”. If you’re sufficiently high on that continuum, you should do it.

So now there are two scenarios.

One is that it was obvious a priori that Sam was very high on this spectrum, but that there aren’t many other EAs anywhere near as high. In this world, everything is right the way it is. A marginal person going into entrepreneurship wouldn’t have anywhere near even Sam’s a priori expected gains.

The other scenario is that there’s actually a bunch of EAs who are within one factor of ten from Sam in terms of a priori likelihood of succeeding. And they just don’t go for it. If we’re in this world, those marginal people are currently making a huge mistake! Their expected impact might not be as high as Sam’s expected impact was a priori, but it might still be their best option. Many more should try entrepreneurship.

So which of these worlds do we live in? I don’t know, but my guess would be that it’s close enough to the second for it to be true that on current margins, more people should try start-up entrepreneurship. The difference above was pretty big.

What does it mean to be in the same reference class?

I don’t know all the details, but from what I’ve read and heard in interviews, here are a few relevant points about Sam:

  • He studied Physics at MIT.

  • He decided to do earning to give and was able to get hired by Jane Street.

  • He liked his time there.

  • What motivated him to leave Jane Street was that he felt he could probably find higher expected value options.

  • At least from founding his own companies onward, he’s famously demonstrated a very strong work ethic. He claims to do almost nothing but work and gets most of his sleep in a beanbag at the office, so his colleagues can wake him up if they need something and so he can stay focused. He’s also talked about having experimented with a bunch of ADHD drugs like Adderall or Modafinil to improve productivity. (Source: last few minutes of this interview.) (I don’t want to endorse these meds here. I don’t have a strong opinion either way yet, but it seems relevant. I’m also not saying that you should feel bad about yourself if you’re not willing to work as hard as Sam; almost no one is, including the vast majority of EAs. But it would be dishonest to leave that part out here.)

  • He hadn’t done anything very entrepreneurial prior to founding Alameda at around age 25 (with one small exception[4]).

  • He’s motivated by EA; what drives him is to make money so he can give it all away.

  • This seems to make him more willing to take high-variance bets than other entrepreneurs. (E.g. about founding FTX he has said that at the time they thought it was high expected value, but also 80% likely to fail).

  • He doesn’t try to do good directly through his companies, saying it’s likely better to optimize either income or direct impact, but not both. However, he says it does matter to him that the direct impact of his work be net positive, even if small.

  • Edit 18/10/2021: Leon Lang adds in the comments that his parents are both Stanford professors.

Importantly this is not a checklist with boxes you have to fill! Rather it’s just meant to give you a feel for how you could have looked at this person prior to knowing they would turn out successful. Maybe listening to a bunch of interviews could also help with that. Here’s my top recommendation, a recent fireside chat he gave at Stanford EA.

And if, upon hearing this without knowing how it turned out, you’d estimate odds of success within one factor of 10 from what you would guess for a person more like you, you should go for it! At least that’s what this argument suggests. The point is somewhat weakened by the fact that Sam obviously knew a lot more about himself than just the list above.

One thing I want to highlight, that presumably puts him in a reference class closer to yours, is that he set out on this journey with the explicit goal of earning to give. I think that’s a notable difference and it motivated me to write this post. With other entrepreneurs who think about donations only after the fact it’s harder to find an upper bound on how many tried something similar.

You’d also expect that class of people to be more risk-averse, since altruistic returns to money are near-linear on relevant scales at least according to some worldviews, while selfish returns are sharply diminishing (perhaps logarithmic?).

Edit 18/10/2021: Tsunayoshi points out in the comments that Venture Capitalists can sometimes exert pressure such that "startup founders are often forced to aim for 1B+ companies because they lost control of the board, even if they themselves would prefer the higher chances of success with a <1B company". Read his full comment below. This weakens the argument above, though I expect a significant amount of risk-aversion to remain.

Why did I put “tens of” billions in the title? Isn’t one billion ambitious enough?

If you manage to make a billion dollars and give it all away, amazing!

Ex ante, I think it’s worth stressing that returns to money are near-linear according to many worldviews. Another way to say this is that altruistic returns to more money are only diminishing quite slowly and you should be nearly risk neutral on relevant scales[5].

If you buy that, then it seems to me that you should be shooting for the stars, since I would guess your chances of making ten billion dollars are more than 10% of your chances of making one billion if you’re actually trying. I’m not sure about this though.

By contrast, selfish returns to money are sharply diminishing. That’s why risk-neutrality often feels counter-intuitive! It’s strange to value making $100M one hundred times less than making $10B, but maybe you should. If you think so, are you actually acting accordingly?

I should emphasize though that how linear this stuff really is depends a lot on your worldview and other empirical beliefs and I can definitely understand if people feel returns are diminishing quickly enough for this argument not to go through. In that case, try to make $100M maybe?
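As a toy illustration of how the choice of utility function drives all of this (the numbers here are purely illustrative, not a claim about anyone's actual odds): compare a hypothetical 2% shot at $10B against a sure $100M, under linear versus logarithmic utility.

```python
import math

sure_thing = 100e6   # a guaranteed $100M
long_shot = 10e9     # $10B with small probability p, else (almost) nothing
p = 0.02             # hypothetical success probability, chosen for illustration
baseline = 1e5       # assumed fallback wealth so log-utility is defined on failure

# Linear (near-risk-neutral, "altruistic") utility: compare expected dollars.
prefers_shot_linear = p * long_shot > sure_thing

# Logarithmic ("selfish") utility: compare expected log-wealth.
u_sure = math.log(baseline + sure_thing)
u_shot = p * math.log(baseline + long_shot) + (1 - p) * math.log(baseline)
prefers_shot_log = u_shot > u_sure

print(prefers_shot_linear)  # True: linear utility favors the long shot
print(prefers_shot_log)     # False: log utility favors the sure thing
```

Under linear utility the 2% shot is worth $200M in expectation and beats the sure $100M; under log utility the sure thing wins decisively. That gap is exactly the risk-aversion the post is pointing at.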

Possible objections

Am I just trying to make the reference class as small as possible to then conclude high odds of success? Doesn’t that always apply? E.g. wouldn’t anyone “sufficiently like Elon Musk” have huge odds of success too?

There is something to this. Picking an appropriate reference class is partly an art.

I think “all EAs trying to become absurdly rich” is among the most relevant reference classes for an EA to look at even before hearing about Sam (here’s one example where Applied Divinity Studies kind of does that, though it’s about “rationalists”, not EAs). Then you learn about Sam and update in favor.

Of course, there’s also value in looking at other cases and other reference classes. I already mentioned these two posts from Brian Tomasik and Applied Divinity Studies.

Isn’t the inside view way more important than the outside view for questions like these? E.g. Ben Kuhn has made arguments along these lines.

I broadly agree with that actually. Still I think there’s value in looking at outside views.

First, if on the inside view you’re on the verge between trying to found a start-up or not, this outside view might just be enough to push you in favor of going for it.

But more importantly, whether you stumble across opportunities or ideas to found a start-up isn’t completely random. It’s good to know whether you should be willing to invest time into coming up with such ideas, doing some initial research or prototyping, meeting potential co-founders etc. Maybe to Ben the answer is just “obviously yes” and beyond that the outside view is worthless. To me at least this isn’t obvious a priori.

Isn’t this still subject to all sorts of selection effects?

I’ve tried to account for the most obvious selection effect by asking how many EAs tried something similar and failed.

There’s always some selection effects remaining though and it’s worth stating again that you shouldn’t overfit to Sam’s particular story.

For example, one selection effect that could be going on is that of all the groups I’m a part of, I chose the one that contained Sam Bankman-Fried (Effective Altruism) and ignored all the other ones where I didn’t know about similar success stories. It does seem to me though that EA is the one group that I’m most “a part of”, so it’s not that arbitrary.

The mere fact that Sam is the world’s richest person under thirty also suggests that maybe this is much harder than the analysis above made it seem. It is true that a subset of EAs are among a rare class of people who value money near-linearly on relevant scales. Hence we should expect them to be disproportionately likely to achieve such outlier-level success. But this fact should still make us suspicious.

There are diminishing returns to more money. Doing this now is actually less valuable than it was in Sam’s case.

That’s true! How big a factor it is depends a lot on what your favorite cause area and/or intervention is and how much money that can absorb. Of course, it also matters whether you’ll be donating to the same things as Sam!

He hasn’t yet decided where the bulk of the money should go, it seems. But he has said that he’s basically convinced that most of the expected value lies in the future. Somewhat unusually for EAs, he considers political donations a promising opportunity.

More discussion on diminishing returns above.

Hasn’t earning to give been un-recommended?

My impression is that it’s complicated and people disagree about this. It is true that 80,000 Hours has de-emphasized this path, though I think they still recommend it for certain people. Notably for this post, Sam Bankman-Fried has expressed an impression that it is now under-emphasized. Make up your own mind about this. Certainly there’s some amount of money at which most people would agree it’s better than whatever direct work you would have done. Again, this depends a lot on your favorite cause area.

So you’re basing your whole analysis on this one case?!

This isn’t supposed to be a full-blown analysis of the EV of founding a start-up. It’s just meant to provide one argument that you may not have considered before.

Also, if it’s true that impact is a fat-tailed top-heavy distribution, a lot of the expected value is in outlier cases. Certainly, this seems to be the case when it comes to donations. It’s less naive than it may seem then to focus your analysis on these outliers.


I’m very uncertain about how valuable it is for me to write such posts relative to other things I could be doing. If this caused you to change your mind, it would be great if you let me know in the comments below. I don’t expect this to happen, but if it contributed to you changing career plans, absolutely let me know!


Thanks a lot to Aaron Gertler for reading a draft of this post and providing valuable feedback! See this generous offer.


  1. Readers may also find this recent forum post on joining an early stage start-up interesting. ↩︎

  2. I don’t know Lincoln’s net worth, but it’s probably not much more than 1% of Sam’s? (Sam owns an unusually large share of FTX). Lincoln has taken the Founders Pledge, though I don’t know what percentage he’s pledged to give (minimum 5%, but possibly up to 100%). That’s still a lot of money donated! But it’s also small enough to not change my calculation much, so I just left it out. I think this just illustrates how incredibly top-heavy these things are. Almost all the expected value is in the very very best outcomes.

    Furthermore, I suspect most of Lincoln’s impact flows through the amazing direct work Wave are doing! (At least that’s probably true from a global health and development worldview. I’m not sure about this though, comments welcome.) Ben Kuhn’s post Why and how to start a for-profit company serving emerging markets may be of interest. By contrast, here I focus on “pure” earning to give, without regards to direct impact, other than it not causing harm. I’m not claiming that one is better than the other. (I think this argument would depend a lot on your favorite cause area among other things).

    Edit 18/10/2021: In his comment on this post, Lincoln mentioned this "my pledge is 10%, although I expect more like 50-75% to go to useful world-improving things but don't want to pledge it because then I'm constrained by what other people think is effective." ↩︎

  3. If there exists data on this that I’m unaware of, I would be grateful for any pointers! And if your personal impression is that my guess is off, also let me know. ↩︎

  4. At some point in high school, “he organized and largely wrote a puzzle hunt in which teams from local schools could compete”. Source. ↩︎

  5. This isn’t necessarily true. E.g. maybe you think AI risk is the only thing that matters, theoretical work on this is the only way to make progress, and marginal researchers wouldn’t be able to add much, since the best people are already getting funded (I’m not saying this is true!). Then returns to money are quickly diminishing and you should be quite risk-averse. If on the other hand you think GiveWell type charities are the way to go, then there’s room for a lot more money and returns diminish much more slowly. Biorisk also seems to be able to absorb lots of money. Further discussion is beyond the scope of this post, but it’s worth thinking about this if you’re earning to give. ↩︎



I basically agree with the core point.

I think recent events have been an update in favour of people in effective altruism being super talented, which means we should aim at the very top.

I also think I agree with the arguments that lower risk-aversion mean we should aim higher.

I wonder if these arguments especially bite at the ~$1bn+ project level, i.e. there are a lot of startup founders aiming to found a unicorn and make $100m for themselves, but there's very little personal benefit in going from, say, a $10bn company to a $100bn one.

My main push back is that I'm not sure people should be aiming to become billionaires, given the funding situation. I'd prefer to see people aim at the top in other paths e.g. winning a nobel prize, becoming president, founding a 'megaproject' non-profit etc.

(Though, it seems like the distribution of wealth might be one of the most heavy-tailed, so the rewards of aiming high there might be better than other paths, and EAs seem perhaps unusually good at earning money.)

 

PS here are two threads from Sam on this topic:

https://twitter.com/sbf_ftx/status/1337250686870831107

https://twitter.com/sbf_ftx/status/1337904412149182464

Yes, I think it's stronger evidence of EAs being good at making a lot of money (or of it being easier than expected to make a lot of money) than of EAs being super talented in general (though it's some evidence of that as well).

It's definitely stronger evidence of that :) Though I've also noticed EAs advancing ahead of my expectations in other areas, like government.

Yeah. Could be good to study EA success in different areas more systematically, as we get more empirical data. 

I think the update is less about attempting to become a multi-billionaire vs direct work, and more about attempting to become a multi-billionaire over other E2G work.

I think $20B in 4y is somewhat of an outlier, even among super successful billionaire founders. Eg a few quick googles (assuming CEO has something like 10% of the company after several rounds of dilution)

  • Facebook founded 2004, 2008 valuation $15B, CEO $2B?
  • Airbnb founded 2008, 2012 valuation $2B, CEO $200M?
  • Uber founded 2009, 2013 valuation $3.5B, CEO $350M?
  • Stripe founded 2009, 2013 valuation $2B, CEO $200M?
  • Aurora founded 2017, 2021 valuation $13B, CEO has something like $4B
  • Tesla founded before Elon, but Elon CEO 2008, 2012 valuation $3.5B, CEO $400M?
  • I considered adding Google/Microsoft/Amazon/Apple to this list, but since valuations have increased a lot over time and they were started much earlier, they look less impressive, and it doesn’t feel quite fair.

Caveats:

  • perhaps there are many more quick crypto billionaires in particular, given how much crypto has increased over the last 4y
  • valuations have increased over time, and maybe people have examples of other tech founders of companies started closer to 2017 where the founders did better
  • several of the CEOs on this list have gotten >$10B; it just took more like 10 years than 4

Definitely - but that could make the point even stronger. If it's such an outlier, maybe that means it's become easier to do something like this, which is an update in favour of trying.

In line with the above analysis, I vaguely recall seeing one news source refer to Sam's fortune as "one of the fastest accumulations of wealth in history", or words to that effect. 

EDIT: Here's the article, from Yahoo Finance: "[SBF's] sudden prosperity appears to constitute one of the fastest accumulations of self-made wealth in history."

Hi! It's neat to be mentioned!

My motivation was not, and has never been about the money. I think it would have been too easy to be distracted by early <$5m acquihire opportunities, if I were looking to get paid a bunch of money. I realize that is not the point you are making (and that shooting for the moon might be worth it if you are motivated by money) but I do think a lot of people might see $1B and think "gosh, that sounds hard, maybe I could do $1m" and the answer is you can, sorta, but it is dumb and not worth it to try.

My motivations were more about the, um, "glory" -- I always wanted to build something big, I thought it was possible when few others did, and I also thought it would be really fun to try. I didn't perceive starting a company as "risky" at all, since I was otherwise a successful software developer, plus my parents were rich enough to support me. It might be harder to dive into building something big if you don't have that kind of support. I did also have altruistic motivations, although they were fairly weak in the first few years - they've grown a lot since!

Things I learned:

  • My first 2 years working on startups were working on failed ideas, but I learned an enormous amount about product, culture, teamwork and even coding which translated directly into early effectiveness once we hit on an idea that worked. Don't be afraid to pivot.
  • When we reached product market fit in 2014 I wrote this post about startup skills, which might be useful for others (note: is quite old now, don't endorse everything etc) https://www.lincolnquirk.com/projects/2014/06/02/projects_startup-skills-checklis.html

And my pledge is 10%, although I expect more like 50-75% to go to useful world-improving things but don't want to pledge it because then I'm constrained by what other people think is effective.

Hello! If you have time for questions:

My motivation was not, and has never been about the money.

...I do think a lot of people might see $1B and think "gosh, that sounds hard, maybe I could do $1m" and the answer is you can, sorta, but it is dumb and not worth it to try.

I always wanted to build something big, I thought it was possible when few others did, and I also thought it would be really fun to try.

To clarify, are you saying you expect that a mindset focused on getting money has drawbacks, for example that it might promote patterns where ultimately people rationalize small, marginal projects for $1M?

Instead, maybe a useful alternative is to get genuinely interested in building something big and awesome, so a product mindset helps?

When we reached product market fit in 2014 I wrote this post about startup skills, which might be useful for others (note: is quite old now, don't endorse everything etc)

Would you briefly describe a few things from that article that you wouldn't endorse now?

More questions that maybe aren't really EA Forum related but relevant to the premise of the post:

  • Was there some feeling or realization that caused you to think you were more likely to be successful at entrepreneurship than others? For example, maybe became aware you were a much better coder, understood products better than others. (I think I want to understand what signals people can use to tell if they are likely to be successful).
  • I think I know you are in product, but when you started realizing Wave was really truly successful and was scaling, was there anything you found personally surprising in the business?  (E.g. did you feel like "being nice" worked well and scaled well, any tough decisions you did not expect, or things that were easier than you expected?)

I expect more like 50-75% to go to useful world-improving things but don't want to pledge it because then I'm constrained by what other people think is effective.

  • (You might just be speaking literally and are being prudent) but do you have some area or projects that might be interesting for people to know about?
  • Finally, I'm not sure how business development goes down on the Forum, but under what circumstances would you considering mentoring or giving feedback on a deck?

a mindset for getting money has drawbacks, for example it might promote patterns where ultimately people rationalize small, marginal projects for $1M. Instead, maybe a useful alternative is to get genuinely interested in building something big and awesome, so a product mindset helps?

Yes. (Sorry for the bad writing, it was late and I was tired.)

I think the best entrepreneurs get a bit of a boost in motivation from the idea of becoming rich, but the more "rich"-oriented you are, the less likely it is that you will make billions of dollars.

Was there some feeling or realization that caused you to think you were more likely to be successful at entrepreneurship than others?

Hmm, I wanted to be an entrepreneur from the moment I understood it was possible. I don't think this is a necessary condition for success, but I think it gives a lot of energy towards trying (and especially trying multiple times if you fail). A small project I've had for the last 5+ years is figuring out who among my friends should be entrepreneurs, and trying to inject a mind-virus to get them to actually do it. My instinct is that you should be motivated by "impact" of some kind (it's ok if it's not purely altruistic); willing to work hard. You should be good at something too, but I think that can come with time if you are sufficiently motivated. In my case I was both good at coding and good at self-improvement, these things definitely compounded.

do you have some area or projects that might be interesting for people to know about?

Not as such, nothing to announce right now.

under what circumstances would you considering mentoring or giving feedback on a deck

I don't really like reviewing decks. I'm generally happy to answer a few questions/give entrepreneurship advice over email; my email is pretty easy to find!

Thanks for your comment! Super interesting to hear all that.

And my pledge is 10%, although I expect more like 50-75% to go to useful world-improving things but don't want to pledge it because then I'm constrained by what other people think is effective.

Amazing! Glory to you :) I've added this to the post.

This seems pretty plausible to me. One thing I would note is that I think for many EAs who would seriously consider becoming a billion-dollar startup founder at all, their ex ante odds won't look as good as SBF's did in 2017 before founding Alameda (now FTX). 

A second point I will note is that on the other hand, their classical earning-to-give odds are also much worse. In particular, I think for a lot of founder profiles (e.g.), their lower-variance earning-to-give alternatives look more like "mid-level software manager making ~mid- to high six figures," not "high seven or even eight figures in trading."

I agree Alameda seemed like an unusually good opportunity at the time.

His parents are Stanford law professors. I think this is a more impressive fact than the other items one could have observed about Sam prior to becoming extremely successful.

Thanks for pointing that out! I agree it's notable and have added it to the list. I don't have a strong opinion on how important this is relative to other things on there.

 

You’d also expect that class of people to be more risk-averse, since altruistic returns to money are near-linear on relevant scales at least according to some worldviews, while selfish returns are sharply diminishing (perhaps logarithmic?).

 

It's been a while since I have delved into the topic, so take this with a grain of salt: 

Because of the heavy influence of VCs who follow a hits-based model, startup founders are often forced to aim for 1B+ companies because they lost control of the board, even if they themselves would prefer the higher chances of success with a <1B company. That is to say, there are more people and startups going for the (close to) linear utility curve than you would expect based on founders' motivations alone. How strong that effect is, I cannot say.

This conflict appears well known; see here for a serious treatment and here for a more humorous one.

Thanks for pointing this out! Hadn't known about this, though it totally makes sense in retrospect that markets would find some way of partially cancelling that inefficiency. I've added an edit to the post.

This is an objectively good post and an objectively good idea, but the texture and content of some of the comments surprise me.

Like, some thoughts on the negative side:

  1. There aren't many signs of models/awareness of the difference it takes to get a $10M valuation versus a $10B (or even $1B) valuation, and it’s the latter two that drive the post. This is a big deal because the nature/cause of success is probably totally different between these levels, and insanely smart, successful people might be capped at lower valuations. I suspect some comments here (that would update onlookers toward the billionaire idea) lack these models and what their implications are. 
     
  2. To a first approximation, it would be good if EA had some program that could bring, say, 1/1000 of incubatees to this level of success. But the costs would be really high: it would consume 1,000 potential leaders in expectation, and many of these people would be hurt. The skill set is different, maybe even net negative for non-profits, because of the zero-sum, sharp-elbows sort of work required.
     
  3. It's worth considering whether the underlying sentiment which drives posts like “Rejection” and “Very hard to find a job” is driven by realities that may not be a defect, but just the other side of the coin to worldviews/models of talent that are common in a “hits-based model” (which itself might be an overly generous characterization). People don’t talk about the Chesterton fence: growing talent is hard not because people are snobby, but because of things like founder effects, and because quality/fit is deceptively hard/important and impractical to communicate to people who don’t have it. Yet this doesn’t even scratch the surface: I’ve seen leaders in plant-based foods, after an exit, describe distrusting/managing out virtuous early employees after raises/growth, since they no longer matched the calibre they needed.

This was written really quickly and I stopped writing here because it’s unclear there’s any demand for this comment. But I think this comment is an update toward normal, mainstream thought about this.
 

On the positive side:

  1. It seems like there is a capability to build or support EAs in some program or informal process, because of the tech bent, shared worldview and connections. This might be a large legit advantage over other incubators.
  2. I really agree EAs tend to be more conscientious and able. (However, the pool of EAs may change rapidly if joining is cheap, and then you're back to gatekeeping again.)
  3. Even or especially if you fully agree with a critique of the tech sector that says they are basically reinventing oligopoly and regulatory capture, this seems like a strong positive reason to “give EAs” these companies/slots.
     

Edit on Saturday, October 16, 2021: removed “Ummm, what?”, as per irving's comment.

The rest of this comment is interesting, but opening with “Ummm, what?” seems bad, especially since it takes careful reading to know what you are specifically objecting to.

Edit: Thanks for fixing!

I wonder what a reasonable cost-effectiveness/ROI estimate is for funding EAs trying to make billion-dollar startups.

Depends immensely on whether you think there are EAs who could start billion-dollar companies, but would not be able to without EA funding. I.e. they're great founders, but can't raise money from VCs. Despite a lot of hand-wringing over the years about the ineffectiveness of VCs, I generally think being able to raise seed money is a decent and reasonable test, and not arbitrary gatekeeping. The upshot being, I don't think EAs should try to start a seed fund.

You could argue that it would be worth it, solely for the sake of getting equity in very valuable companies. But at that point you're just trying to compete with VCs directly, and it's not clear that EAs have a comparative advantage.

Depends immensely on whether you think there are EAs who could start billion-dollar companies, but would not be able to without EA funding. I.e. they're great founders, but can't raise money from VCs.

 

I think the core argument here is that not enough EAs try to start a company, as opposed to try and are rejected by VCs. IMO the point of seeding would be to take more swings.

Also, presumably the bar should be lower for an EA VC, because much of the founders' stake will also go to effective charity.

One possibility is that EAs are better than it might seem at first glance. The fact that there is some track-record of EA start-up success (as per the OP) may be some evidence of that.

If that is the case, then VCs may underestimate EA start-ups even if VCs are generally decent - and EA companies may also be a good investment (cf. your second paragraph).

I guess a relevant factor here is to what extent successful EA start-ups have been funded by EA vs non-EA sources.

This is my personal view; I understand that it might not be rigorously argued enough to be compelling to others, but I'm fairly confident in it anyway:

I literally believe that there are ~0 companies which would have been valued at $10b or more, but which do not exist because they were unable to raise seed funding.

You will often hear stories from founders who had a great idea, but the VCs were just too close minded. I don't believe these. I think a founder who's unable to raise seed money is simply not formidable (as described here), and will not be able to create a successful company.

This is particularly true right now when seed money is extremely available. If you're unable to fundraise, something has gone wrong, and you probably should not be starting the company.

The strongest objection is that ~0 is not 0, and so we should create an EA VC even if the odds are really bad. I'm not that convinced, but it's possible this is correct.

You don't have to believe that VCs are generally irrational in order to believe that an EA VC could be a good idea. I think arguing against the claim that VCs are generally irrational is akin to a weak man argument.

People presumably start successful venture capital firms, e.g. based on niche competencies or niche insights, now and then. It's not the case that new venture capital firms never succeed. And to determine whether an EA venture capital firm could succeed, you'd have to look into the nitty-gritty details, rather than raising general considerations.

Also, in a sense VC is just another industry, alongside, e.g., crypto, remittances, etc. The premise of the OP is that EA companies could become very successful. If so, those could a priori include VC firms. You'd need some special argument for why EA VCs are less likely to succeed than EA companies in some other industries.

The premise of the OP is that EA companies could become very successful. If so, those could a priori include VC firms. You'd need some special argument for why EA VCs are less likely to succeed than EA companies in some other industries.

I can think of at least two:

  1. The premise of the OP is that EAs can make very successful companies in at least some sector. But successful companies in some sense arise out of exploiting a market inefficiency. Assuming we concede the argument in the OP that EAs can make very successful companies in some sector, then when we consider the space of all possible sectors, we should a) not be surprised that there exists a sector with large market inefficiencies that EAs can exploit, but also b) place the burden of proof on the proponent of large market inefficiencies in any given sector to explain why it is so.
  2. I think the more liquid a market is, the more I'd expect the market to be efficient. So I'd expect it to be very hard for EAs to make a better Jane Street, for example. Now VC firms are noticeably less efficient than quant finance, but probably still more efficient than many traditional industries. So the disruptive potential is less clear.

Despite those reservations, I'm still personally bullish on an EA VC fund, and would seriously consider investing six figures of my own money* in seed funding for an EA VC or angel investor fund if the managers are people I trust, are trusted by people I trust, or have other strong evidence of potential success.

*which is a lot of money for me but not for a VC, of course.

Despite those reservations, I'm still personally bullish on an EA VC fund... if the managers are people I trust, are trusted by people I trust, or have other strong evidence of potential success.

I think the most interesting point has been mentioned by Stefan Schubert here 1, 2

Expanding on this, there is a strong niche for an EA fund that helps EAs build EA companies. I feel like I’m writing a discount Hacker News comment, but maybe there are key advantages an EA fund can use:

  • Startups often fail from implosion due to drama like founder splits. The strongest teams have great trust and unity (e.g. come from the same school cohort). It’s plausible EA culture provides an alternate source of unity. Many EAs have great communication skills and can credibly show that they can commit to altruistic purposes.
  • It's plausible EAs could commit to much lower salary/equity (which could still be enormous in non-profit terms) and put the large majority into EA causes. This compensation doesn’t hurt EA motivation while reducing a source of tension and maybe adding enormous credibility (this might be impractical on a cap table, I don’t know.)
  • I think EAs are more able to grind and do heads down operational work, which I think is important (see SBF anecdotes about waiting at banks).
  • I think the tech narratives about changing the world are attractive, and they are why funding and grind exist. I can see EAs getting a consistent edge at all stages of a business because they can plausibly change the world (or at least have a convincing meme).

These seem like structural things that a VC fund can be built on.

Startups often fail from implosion due to drama like founder splits. The strongest teams have great trust and unity (e.g. come from the same school cohort). It’s plausible EA culture provides an alternate source of unity.

For what it's worth, I think this appears empirically false in terms of pretty much every EA for-profit startup of a fairly large size (say >10 employees) that I'm aware of, including the successful(!) ones. 

Founder or near-founder level drama that has ex ante very negative consequences ("red flags" is maybe my current preferred term) appears to be the norm rather than the exception. (Incidentally, this was also true during my own brief stint as an intern in a mission-oriented non-EA startup, which is now valued at 10 or 11 figures.)

Different people can learn different things from these anecdata, but hypotheses I've generated include:

1. Base rates of founder conflict (to a pretty extreme level, like >50% of management quitting) are just really high, and my naive impression/prior that founder conflict should only be pretty common in failed startups is wrong.

2. Wittgenstein's ruler: I shouldn't trust my own instincts of what are red flags for startup success (like founder conflict) in the face of pretty strong empirical data. Maybe I'm quite bad at causal assignment here.

3. The world is just crazy/easy. There's so much money lying on the table that you can screw up in many important ways and still come out ahead, as long as you do a few other (more?) important things right.

4. There's just too much randomness/heterogeneity in the world to say much of anything when it comes to startup success.

I've gone up on my belief in 1, personally updated somewhat downwards on 4, and up on both 2 and 3.

I think the most interesting point (or at least the most fun to write about on an internet forum) hasn’t been made, and it’s not that EAs can perform better running a general fund, but that there is a strong niche for an EA fund that helps EAs build EA companies.


I touched on this niche issue in two comments: 1, 2.

Yes, thank you. I think this was useful but I neglected to mention it. 

That's a good clarification, I do agree that EAs should consider becoming VCs in order to make a lot of money. I just don't think they should become VCs in order to enable earn-to-give EA founders.

Alright, but if there were such EA VCs they might want to keep an extra eye on EA start-ups, because of special insider knowledge, mutual trust, etc. Plus EAs may be underestimated, as per above.

I do agree, however, that unpromising EA start-ups shouldn't be funded just because they're EAs.

When working with EA founders, I expect EA funds to have a medium-sized advantage:

  • the EA and rationality communities have shared language which would make communication with founders easier
  • if founders are aligned, the fund could be much less afraid of companies being outright scams
  • EA might have tests for competence that are better than industry standard
  • the fund could guide founders capable of generating lots of direct impact (e.g. Wave) into maximizing profit, maximizing direct impact, or a mix, whereas most VCs are forced to maximize profit
  • EA has an effective pool of billions of dollars, and so an EA fund would be capable of taking on much more risk than all but the largest VC firms

That said I'm not in the VC space, so maybe the effects are smaller than I think.

I don't know much about Wave, but to me it seems like a data point, even though a smaller one (meaning there isn't just one case).

I agree! I've added an edit to the post, referencing your comment.

Would it not make sense to start some sort of 'EA Venture Capital' firm?

Surely more EAs would take this leap if provided with some runway/salary (in exchange for equity, which by this logic would be a phenomenal investment for patient philanthropy money).

I mostly agree with the AI risk worldview described in footnote 5, but this is certainly an interesting analysis! (Although not super-useful for someone in a non-MIT/non-Jane-Street/not-elite-skilled reference class, but I still wonder about the flexibility of that...)

Thanks a lot for saying this!

Yeah, I wonder about the flexibility as well. At least, "I have good reason to think I could've gone to MIT/ Jane Street..." should go a long way (if you're not delusional).

Seems like a strong reason in favour of some kind of EA entrepreneurship accelerator. I don't think EA should be providing huge amounts of funding, as successful entrepreneurs should be able to find this elsewhere, but we might be able to provide some valuable training, advice and networking.