University of Oslo/ McKinsey Global Institute @ Economics PhD/ Research fellow
Pursuing a doctoral degree (e.g. PhD)
Working (0-5 years experience)
481 · Joined Jul 2018



Econ PhD focusing on global priorities research, ex-McKinsey Global Institute fellow. Founder of McKinsey Effective Altruism community and board member of EA Norway. Follow me on Twitter at @jgraabak



I think I'll try to type up my objections in a post rather than a comment - this post seems so close to being right that it takes some effort to pinpoint exactly where I disagree, so I want to take the time to formalize it a bit more.


But in short, I think it's possible to have 1) rational traders, 2) markets that largely function well, and 3) still no 5+ year advance signal of AGI in the markets, without making very weird assumptions. (Note: I chose the 5+ year timeline because once you get really close to AGI - say, less than a year out, with lots of weird stuff going on - you'd at least see some turbulence in the markets as traders get confused about how to trade in such a strange situation. So I do think the markets provide some evidence against extremely short timelines.)


I see that I wasn't being super clear above. Others in the comments have pointed to what I was trying to say here:

 - The window between when "enough" traders realize that AI is near and when it arrives may be very short, meaning that even in the best case you'll only increase your wealth for a very short time by making this bet

 - It is not clear how markets would respond if most traders started thinking that AI was near. They may focus on other opportunities they believe are stronger than shorting interest rates (e.g., they may decide to invest in tech companies), or they may simply decide to take some vacation

 - In order to get the benefits of the best case above, you need to take on massive interest rate risk, so the downside is potentially much larger than the upside (plus, in the downside case, you're poor for a much longer time)


Therefore, traders may choose not to short interest rates, even if they believe AI is imminent.


(A short additional note: yes, some of this is addressed at more length in the post, e.g., in section X re my point 3, but IMO the authors state their case somewhat too strongly in those sections. You do not need a Yudkowskian "foom" scenario that happens overnight for the following to be plausible: "timelines may be short-ish, say ~10 years, but the world will not realize until quite soon before, say 1-3 years, and in the meantime it won't make sense for most people to bet on interest rate movements".)


While this is a very valuable post, I don't think the core argument quite holds, for the following reasons:

  1. Markets work well as information aggregation algorithms when it is possible to profit a lot from being the first to realize something (e.g., as portrayed in "The Big Short" about the Financial Crisis).
  2. In this case, there is no way for the first movers to profit big. Sure, you can take your capital out of the market and spend it before the world ends (or everyone becomes super-rich post-singularity), but that's not the same as making a billion bucks. 
  3. You can argue that one could take a short position on interest rates (e.g., in the form of a loan) if you believe they will rise at some point, but that is a different bet from short timelines - what you're betting on then is when the world will realize that timelines are short, since that's what it takes before many people pull out of the market and thus drive interest rates up. It is entirely possible to believe both that timelines are short and that the world won't realize AI is near for a while yet, in which case you wouldn't make this bet. Furthermore, counterparty risk tends to get in the way of taking out very big loans, so it would dominate your cost of capital.
  4. All that said, it is possible that the strategy of "people with a high x-risk estimate should use long-term loans to fund their work" is indeed a feasible funding mechanism for such work, since this would not be a bet intended to make the borrower rich - it would just be a bet to survive, although you could end up poor in the process.

Agree with many of the considerations above - the bar should probably rise somewhat after such a funding shortfall. One way to solve this in practice could be to sit down with the old FTX FF team and ask "which XX% of your grants are you most enthusiastic about, and why?", and then (at least as an initial hypothesis; possibly requiring some further vetting) plan to fund those. The generalized point I'm trying to make is twofold: 1) quite a bit of judgement already went into assessing these projects, and it should be possible to use that to decide how many of them are above the bar; and 2) because all the other input factors (talent, project idea, vetting) are unchanged, and assuming a standard shape of the EA production function, the marginal returns to funding should now be unusually high.


And David is right that (at least under some reasonable models) if you can predict that your bar will fall in the future, you should probably lower it already. I'm not exactly sure what the requirements would be for the funding bar to have a Martingale property (e.g., does it require some version of risk neutrality, or specific assumptions about the shape of the impact distribution across projects or time?), but it seems reasonable to start with something close to that assumption, at least. However, that still implies that when you experience a large, unexpected funding shortfall, the bar does need to rise somewhat.
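The "bar rises after an unexpected shortfall" claim can be illustrated with a toy model (my own sketch with made-up numbers, not anything from the post): draw projects with heavy-tailed impact per dollar, fund them greedily from the top until the budget runs out, and define the bar as the impact per dollar of the marginal funded project. Cutting the budget then mechanically raises the bar.

```python
import random

random.seed(0)

def funding_bar(budget, projects):
    """projects: list of (impact_per_dollar, cost) tuples.
    Fund projects in descending impact order until the budget runs out;
    return the impact/$ of the last (marginal) funded project."""
    spent, bar = 0.0, None
    for impact, cost in sorted(projects, reverse=True):
        if spent + cost > budget:
            break
        spent += cost
        bar = impact
    return bar

# 200 hypothetical projects: lognormal impact/$, cost between $1 and $10.
projects = [(random.lognormvariate(0, 1), random.uniform(1, 10))
            for _ in range(200)]

bar_before = funding_bar(1000, projects)  # pre-shortfall budget
bar_after = funding_bar(400, projects)    # post-shortfall budget
print(bar_after >= bar_before)  # True: the shortfall raises (or keeps) the bar
```

The greedy rule is a simplification (a real grantmaker would treat this as a knapsack problem), but it is enough to show the direction of the effect.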


Thank you for a good and swift response, and in particular, for stating so clearly that fraud cannot be justified on altruistic grounds.

I have only one quibble with the post: IMO you should probably increase your longtermist spending quite significantly over the next ~year or so, for the following reasons (which I'm sure you've already considered, but I'm stating them so others can also weigh in):

  • IIRC Open Philanthropy has historically argued that a lack of high-quality, shovel-ready projects has limited the growth of your longtermist portfolio. That is not the case at the moment. There will be projects that 1) have significant funding gaps, 2) have been vetted by people you trust for both their value alignment and competence, and 3) are not only shovel-ready, but already started. Stepping in to help these projects bridge the gap until they can find new funding sources looks like an unusually cost-effective opportunity. It may also require somewhat less vetting on your end, which matters more if you're unusually constrained by grantmaker capacity for a while.
  • Temporarily ramping up funding can also be justified by the likely flow-through effects of acting as an "insurer of last resort" for affected projects. Abrupt funding cutoffs are very costly for project founders in terms of added stress, reduced capacity to focus on doing good, and possibly long-term career prospects. If the EA community doesn't step in to try and help the affected projects, we may expect some core team members to disengage from EA, or to shift towards less ambitious projects in the future. Furthermore, the next generation of potential founders will be watching. The more they see a community that's willing to shoulder the cost in a downturn, the more we can expect new founders to engage with EA and take on ambitious projects.

Thank you for your good work over the last months, and thank you for your commitment to integrity in these hard times. I'm sure this must also be hard for you on a personal level, so I hope you're able to find consolation in all the good that will be created by the projects you helped get off the ground, and that you still find a home in the EA community.


Hi Adam! Thanks for the detailed reply. From a brief look at your model, it seems you've understood my reasoning in this post correctly. I had indeed overlooked that their numbers were already discounted.

However, since they use a 3% discount rate and you use a 4% discount rate, you would still need to adjust for the difference. If we still assume that the economic impacts accrue throughout your entire career, from 15 to 60 years into the future (note: 15 years is not the average, but the first year of impacts!), then you get around $0.7 of NPV for each $1 today - much better than the $0.28 in my analysis, but still less than the $1 without discounting. Using this number, the result would be very close to GiveWell's 20% estimate. How curious!
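The adjustment above is easy to check directly. A minimal sketch, assuming a constant stream of $1/year arriving in years 15 through 60 and comparing its present value at a 4% vs. a 3% discount rate:

```python
def npv(rate, start=15, end=60):
    """Present value of $1/year arriving from `start` to `end` years out."""
    return sum(1 / (1 + rate) ** t for t in range(start, end + 1))

# Ratio of the stream's value at 4% vs. 3% discounting:
ratio = npv(0.04) / npv(0.03)
print(round(ratio, 2))  # → 0.74, consistent with the ~$0.7 figure above
```

The constant-stream assumption is mine; any profile of impacts concentrated in the same window gives a broadly similar ratio, since the extra 1% of discounting mostly bites through the 15+ year delay.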




Thank you for this post, this is excellent work! Are you aware of ongoing efforts for any of your proposed topics? I'm asking because I'd consider starting a project on some of the above.
