TylerMaule

Trader
Working (6-15 years of experience)
Bethnal Green, London, UK
Joined Mar 2021

Bio

How I can help others

Happy to chat about my experience in quant trading, living in Chicago/London

Posts
4


Comments
57

Some of the comments here are suggesting that there is in fact tension between promoting donations and direct work. The implication seems to be that while donations are highly effective in absolute terms, we should intentionally downplay this fact for fear that too many people might 'settle' for earning to give.

Personally, I would much rather employ honest messaging and allow people to assess the tradeoffs for their individual situation. I also think it's important to bear in mind that downplaying cuts both ways—as Michael points out, the meme that direct work is overwhelmingly effective has done harm.

There may be some who 'settle' for earning to give when direct work could have been more impactful, and there may be some who take away that donations are trivial and do neither. Obviously I would expect the former to be hugely overrepresented on the EA Forum.

For each person in a leadership role, there’s typically a need for at least several people in the more junior versions of these roles or supporting positions — e.g. research assistants, operations specialists, marketers, ML engineers, ... I’d typically prefer someone in these roles to an additional person donating $400,000–$4 million per year


If this is true, why not spend way more on recruiting and wages? It's surprising to me that the upper bound could be so much larger than equivalent salary in the for-profit sector.

I might be missing something, but it seems to me the basic implication of the funding overhang is that EA should convert more of its money into 'talent' (via Meta spending or just paying more).

I second these suggestions. To get more specific re cause areas:

  • Each source uses a different naming convention (and some sources are just blank)
  • I'd suggest renaming that column 'labels' and instead mapping to just a few broadly defined buckets which add up to 100%—I've already done much of that mapping here

Borrowing money if short timelines seems reasonable but, as others have said, I'm not at all convinced that betting on long-term interest rates is the right move. In part for this reason, I don't think we should read financial markets as asserting much at all about AI timelines. A couple of more specific points:

Remember: if real interest rates are wrong, all financial assets are mispriced. If real interest rates “should” rise three percentage points or more, that is easily hundreds of billions of dollars worth of revaluations. It is unlikely that sharp market participants are leaving billions of dollars on the table.

(a) The trade you're suggesting could take decades to pay off, and in the meantime might incur significant drawdown. It's not at all clear that this would be a prudent use of capital for 'sharp money'.

(b) Even if we suppose that sharps want to bet on this, that bet would be a fraction of their capital, which in turn is a fraction of the total capital in financial markets. If all of the world's financial assets are mispriced, as you say, why should we expect this to make a dent?

There are notable examples of markets seeming to be eerily good at forecasting hard-to-anticipate events:

Setting aside that the examples given are inapposite[1], surely there are plenty in both directions? To pick just one notable counterexample: the S&P 500 set new all-time highs in mid-Feb 2020, only to crash 32% the following month, then rise 70% over the following year. So markets did a very poor job of forecasting COVID, as well as the subsequent response, on a time horizon of just a few months!

  1. ^

    Both of these were in rapid response to recent major events (albeit ahead of common wisdom), as opposed to an abstract prediction years in the future


I'm definitely not suggesting a 98% chance of zero, but I do expect the 98% rejected to fare much worse than the 2% accepted on average, yes. The data as well as your interpretation show steeply declining returns even within that top 2%.

I don't think I implied anything in particular about the qualification level of the average EA. I'm just noting that, given the skewness of this data, there's an important difference between just clearing the YC bar and being representative of that central estimate.

A couple of nitpicky things, which I don't think change the bottom line, and which have opposing signs in any case:

  1. In most cases, quite a bit of work has gone in prior to starting the YC program (perhaps about a year on average?). This might reduce the yearly value by 10-20%
  2. I think the 12% S&P 500 return cited is the arithmetic average of yearly returns. The geometric average, i.e. the realized rate of return, should be more like 10.4%
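To make the arithmetic-vs-geometric distinction in point 2 concrete, here's a minimal sketch. The return series is made up for illustration (not actual S&P 500 data); it's chosen so the arithmetic mean is exactly 12%:

```python
# Illustrative return series (made up, not real market data) whose
# arithmetic mean is exactly 12% per year.
yearly_returns = [0.30, -0.06, 0.12]

# Arithmetic average: the simple mean of yearly returns.
arith = sum(yearly_returns) / len(yearly_returns)

# Geometric average: the constant rate producing the same total growth,
# i.e. the realized compound annual growth rate (CAGR).
growth = 1.0
for r in yearly_returns:
    growth *= 1 + r
geom = growth ** (1 / len(yearly_returns)) - 1

print(f"arithmetic mean: {arith:.1%}")  # 12.0%
print(f"geometric mean:  {geom:.1%}")   # ~11.0%; volatility drags the realized rate below the arithmetic mean
```

The gap between the two grows with volatility (roughly half the variance of returns), which is why a 12% arithmetic average can correspond to a ~10.4% realized rate for the S&P 500.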

I worry that this presents the case for entrepreneurship as much stronger than it is[1]:

  1. The sample here is companies that went through Y Combinator, which has a 2% acceptance rate[2]
  2. As stated in the post, roughly all of the value comes from the top 8% of these companies
  3. To take it one step further, 25% of the total valuation comes from the top 0.1%, i.e. the top 5 companies (incl. Stripe & Instacart)

So at best, if a founder is accepted into YC, and is talented enough to have the same odds of success as a random prior YC founder, $4M/yr might be a reasonable estimate of the EV from that point. But I guess my model is more like this: Stripe and Instacart had great product-market fit and talented founders, and their outsized outcomes can make a marginal YC startup look much more valuable than it is.

  1. ^

    I know you're not explicitly saying that the EV of quitting one's job to start a company is $4M/yr, but I think it's worth spelling out more explicitly how far removed this reference class is from that hypothetical.

  2. ^

    The post does allude to this, but I think it's worth flagging more explicitly.

Yeah, I think we're on the same page; my point is just that it only takes a single-digit multiple to swamp that consideration, and my model is that charities aren't usually that close. For example, GiveWell thinks its top charities are ~8x GiveDirectly, so taken at face value a match that displaces 1:1 from GiveDirectly would be 88% as good as a 'pure counterfactual'.
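A quick sketch of that displacement arithmetic (units are value per donated dollar, normalized so GiveDirectly = 1; the ~8x multiplier is GiveWell's own estimate for its top charities):

```python
# Value per dollar, normalized so GiveDirectly = 1.
top_charity_value = 8.0   # GiveWell's ~8x estimate for its top charities
displaced_value = 1.0     # the matched dollar would have gone to GiveDirectly anyway

pure_counterfactual = top_charity_value                  # match money would otherwise not be donated
displacing_match = top_charity_value - displaced_value   # match displaces a GiveDirectly dollar

fraction = displacing_match / pure_counterfactual
print(f"{fraction:.0%} as good as a pure counterfactual")  # 88%
```

In other words, 7/8 = 87.5% of the value survives even in the fully displacing case, so the counterfactual discount only bites when the displaced and target charities are close in effectiveness.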

  1. Most matches are of the free-for-all variety, meaning the funds will definitely go to some charity; it's just a question of who gets there first (e.g. Facebook & Every.org). While this might sound like a significant qualifier, it's almost as good as a pure counterfactual unless you believe that all nonprofits are ~equally effective.
    1. The 'worst case' is a matching pool restricted to one specific org, where presumably the funds will go there regardless, so the match doesn't really add anything to your donation.
  2. Conversely, as Lizka noted, even the best counterfactual only makes sense in theory if the recipient org is at least half as effective as the best charity you know of.
  3. I'm not sure I fully understand the last question. It sounds like you're referring to a matching pool specific to one charity, in which case no downside, but could be quite different if the pool covers a wider array of nonprofits.

There are still funds remaining, but it looks like each person can only set up three matched donations
