Econ PhD focusing on global priorities research, ex-McKinsey Global Institute fellow. Founder of the McKinsey Effective Altruism community and board member of EA Norway. Follow me on Twitter at @jgraabak
I see that I wasn't being super clear above. Others in the comments have pointed to what I was trying to say here:
- The window between when "enough" traders realize that AI is near and when it actually arrives may be very short, meaning that even in the best case this bet would only increase your wealth for a brief period
- It is not clear how markets would respond even if most traders did come to believe that AI was near. They may focus on other opportunities they consider stronger than shorting interest rates (e.g., investing in tech companies), or they may simply decide to take some vacation
- To capture the best-case upside above, you need to take on massive interest rate risk, so the downside is potentially much larger than the upside (plus, in the downside case, you're poor for a much longer time)
Therefore, traders may choose not to short interest rates, even if they believe AI is imminent
(A short additional note: yes, some of this is addressed at more length in the post, e.g., in section X re my point 3, but IMO the authors state their case somewhat too strongly in those sections. You do not need a Yudkowskian "foom" scenario happening overnight for the following to be plausible: "timelines may be short-ish, say ~10 years, but the world will not realize until quite soon before, say 1-3 years, and in the meantime it won't make sense for most people to bet on interest rate movements")
While this is a very valuable post, I don't think the core argument quite holds, for the following reasons:
Agree with many of the considerations above - the bar should probably rise somewhat after such a funding shortfall. One practical way to set it could be to sit down with the old FTX FF team and ask "which XX% of your grants are you most enthusiastic about, and why?", and then (at least as an initial hypothesis, possibly requiring some further vetting) plan to fund that. The generalized point I'm trying to make is twofold: 1) quite a bit of judgement already went into assessing these projects, and it should be possible to use that judgement to decide how many of them are above the bar; and 2) because all the other input factors (talent, project ideas, vetting) are unchanged, and assuming a standard shape of the EA production function, the marginal returns to funding should now be unusually high.
And David is right that (at least under some reasonable models) if you can predict that your bar will fall in the future, you should probably lower it already. I'm not exactly sure what the requirements would be for the funding bar to have a martingale property (e.g., does it require some version of risk neutrality, or specific assumptions about the shape of the impact distribution across projects or over time?), but it seems reasonable to start from something close to that assumption, at least. However, that still implies that when you experience a large, unexpected funding shortfall, the bar does need to rise somewhat.
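To make that last point concrete, here is a minimal sketch of the bar mechanics, with hypothetical numbers rather than actual grant data: rank projects by cost-effectiveness, fund down the list until the budget runs out, and read the bar off the marginal funded project. An unexpected budget cut then mechanically raises the bar.

```python
# Minimal sketch of funding-bar mechanics (hypothetical projects, not real grants).

def funding_bar(projects, budget):
    """Greedily fund the most cost-effective projects; return the
    cost-effectiveness of the last project that still gets funded."""
    ranked = sorted(projects, key=lambda p: p["impact"] / p["cost"], reverse=True)
    bar = None
    for p in ranked:
        if p["cost"] <= budget:
            budget -= p["cost"]
            bar = p["impact"] / p["cost"]  # the marginal funded project sets the bar
    return bar

# Eight equal-cost projects with declining impact.
projects = [{"cost": 1.0, "impact": i} for i in (10, 8, 6, 5, 4, 3, 2, 1)]

print(funding_bar(projects, budget=6.0))  # 3.0: the bar with the planned budget
print(funding_bar(projects, budget=3.0))  # 6.0: an unexpected shortfall raises the bar
```

Real grantmaking is of course not a clean greedy knapsack, but the direction of the effect should hold for any downward-sloping schedule of opportunities.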
Thank you for a good and swift response, and in particular, for stating so clearly that fraud cannot be justified on altruistic grounds.
I have only one quibble with the post: IMO you should probably increase your longtermist spending quite significantly over the next year or so, for the following reasons (which I'm sure you've already considered, but I'm stating them so others can also weigh in):
Thank you for your good work over the last months, and thank you for your commitment to integrity in these hard times. I'm sure this must also be hard for you on a personal level, so I hope you're able to find consolation in all the good that will come from the projects you helped get off the ground, and that you will still find a home in the EA community.
Hi Adam! Thanks for the detailed reply. From a brief look at your model, it seems you've understood my reasoning in this post correctly. I had indeed overlooked that their numbers were already discounted.
However, since they use a 3% discount rate and you use a 4% discount rate, you would still need to adjust for the difference. If we still assume that the economic impacts hit throughout your entire career, from 15 to 60 years into the future (note: 15 years into the future is not the average, but the initial year of impacts!), then you get to around $0.7 of NPV for each $1 today - much better than the $0.28 in my analysis, but still less than the $1 without discounting. Using this number, the result would be very close to GiveWell's 20% estimate. How curious!
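For anyone who wants to verify the arithmetic, here's a quick sketch under the assumptions stated above (a flat $1/year impact stream over years 15 through 60; the exact stream shape in the underlying models may differ):

```python
# Back-of-the-envelope check: present value of a flat $1/year impact stream
# arriving in years 15 through 60, at a 4% vs. a 3% discount rate.

def npv(rate, start=15, end=60):
    return sum(1 / (1 + rate) ** t for t in range(start, end + 1))

print(round(npv(0.04) / npv(0.03), 2))  # ~0.74: roughly $0.7 of 4%-discounted
                                        # value per $1 of 3%-discounted value
```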
Best,
Jakob
Thank you for this post, this is excellent work! Are you aware of ongoing efforts for any of your proposed topics? I'm asking because I'd consider starting a project on some of the above.
I think I'll try to type up my objections in a post rather than a comment - it seems to me that this post is so close to being right that it takes effort to pinpoint exactly where I disagree, so I want to take the time to formalize my objection a bit more.
But in short, I think it's possible to have 1) rational traders, 2) markets that largely function well, and 3) still no 5+ year advance signal of AGI in the markets, without making very weird assumptions. (Note: I chose the 5+ year threshold because once you get really close to AGI, say less than a year out, with lots of weird stuff going on, you'd at least see some turbulence in the markets as traders get confused about how to trade in such a strange situation. So I do think the markets provide some evidence against extremely short timelines.)