Note that I think the mechanisms I describe aren't specific to economics but apply to academic research generally, and they also cover most of the routes by which most AI safety researchers (even those outside academia) will have impact.
There are potentially major crux moments around AI, so there's also the potential to do an excellent job engineering real transformative systems to be safe at some point (though most AI safety researchers won't be doing that directly). Perhaps the indirect routes to impact for AI safety feel more exciting because they're more closely connected to those crucial moments: e.g. you might hope to set some small piece of the paradigm that the eventual engineers of the crucial systems will use, or to support a culture of responsibility among AI researchers, making it less likely that people at the key time ignore something they shouldn't.
Finally, I imagine quant trading is a non-starter for a longtermist who is succeeding in academic research. As a community, suppose we already have significant ongoing funding from 3 or so of the world's ~3,000 billionaires. What good is one extra millionaire? Almost anyone's comparative advantage is more likely to lie in spending the money, and even more so if they can do so within academic research.
It seems quite wrong to me to present this as so clear-cut. I think if we don't get major extra funding the professional longtermist community might plateau at a stable size in perhaps the low thousands. A successful quantitative trader could support several more people at the margin (a very successful trader could support dozens). If you're a good fit for the crowd, it might also be a good group to network with.
If you're particularly optimistic about future funding growth, or pessimistic about community growth, you might think it's unlikely we end up in that world in a realistic timeframe, but there's likely to still be some hedging value.
To be clear, I mostly wouldn't want people in the OP's situation to drop the PhD to join a hedge fund. But it's worth understanding that e.g. the main routes to impact in academic research are probably:
I think for some people those just aren't going to be a great personal fit (even if they can achieve conventional "success" in academia!), so it's worth considering other options.
In this particular case, I'm kind of excited about getting more longtermist economists. But it might depend e.g. how disillusioned the OP is with the field as to whether it might make sense for them to be such a person.
I guess I wouldn't recommend the donor lottery to people who wouldn't be happy entering a regular lottery for their charitable giving (though I would usually argue that they should be happy with that regular lottery!).
Btw, I'm now understanding your suggestions as not really alternatives to the donor lottery, since I don't think you buy into its premises, but alternatives to e.g. EA Funds.
(In support of the premise of respecting individual autonomy about where to allocate money: I think that making requests to pool money in a way that rich donors expect to lose control of would risk making EA pattern-match, at a surface level, to a scam, and might drive people away. For a more extreme version of this, imagine someone claiming that as soon as you've decided to donate some money you should send it all to the One True EA Collective fund so that it can be fairly distributed, and that it would be a weird propagation of wealth to allow rich people any time to think about where to give their money. Whether or not you think an optimal taxation system would equalise wealth much more, I think it's fairly clear that the extreme bid that everyone pool donations would be destructive, because it would put off donors.)
By dominant action I mean "is ~at least as good as other actions on ~every dimension, and better on at least one dimension".
My confusion is something like: there's no new money out there! It's a group of donors deciding to give individually or give collectively. So the perspective of "what will lead to optimal allocation of resources at the group level?" is the right one.
I don't think donor lotteries are primarily about collective giving. As a donor lottery entrant, I'd be just as happy giving $5k for a 5% chance of controlling a $100k pot of pooled winnings as entering a regular lottery where I could give $5k for a 5% chance of winning $100k (which I would then donate)*. In either case I think I'll do more than 20x as much good with $100k as with $5k (mostly since I can spend longer thinking and investigating), so it's worthwhile in expectation.
* Except that I usually don't have good access to that kind of lottery (maybe there would also be tax implications, although perhaps it's fine if the money is all being donated). So the other donors are a logistical convenience, but not an integral part of the idea.
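The expected-value argument above can be made concrete with a toy model. Everything in it is my own illustrative assumption, not from the comment: a hypothetical value function where extra research time raises per-dollar effectiveness with diminishing returns, against an assumed $50/hour opportunity cost, such that a well-researched $100k does more than 20x the good of $5k.

```python
import math

# Toy model of the donor-lottery argument. All numbers here are
# illustrative assumptions, not taken from the original comment.

def value(amount, hours_research):
    """Hypothetical good done by donating `amount` after spending
    `hours_research` investigating: per-dollar effectiveness rises
    with diminishing returns in research time, minus an opportunity
    cost of $50/hour for the donor's time."""
    effectiveness = 1.0 + 0.5 * math.log1p(hours_research)
    return amount * effectiveness - 50.0 * hours_research

def best_value(amount):
    """Value achieved when the donor picks the optimal research time."""
    return max(value(amount, h) for h in range(0, 201))

direct = best_value(5_000)            # give $5k straight away
lottery = 0.05 * best_value(100_000)  # 5% chance to allocate $100k

# In this model $100k does >20x the good of $5k, so entering the
# lottery beats donating directly, in expectation.
print(lottery > direct)  # True
```

If returns to scale were instead flat or diminishing over this range, the same calculation would make the lottery merely neutral, which is why the parent comment conditions the argument on the absence of noticeably diminishing returns.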
My understanding is that past people selected to allocate the pool haven't tended to delegate that allocation power. And indeed, if you're strongly expecting to do so, why not just give the allocation power to that person beforehand, either over your individual donation (e.g. through an EA Fund) or over a pool? Why go through the lottery stage?
I don't know that they should strongly expect to do so. But in any case the reason for going through the lottery stage is simple: maybe you'd want to take 50 hours thinking about whether to delegate and to whom, and vetting possible people to delegate to. That time might not be worth spending for a $5k donation, but become worth spending for a $100k donation. (Additionally the person you want to delegate to might be more likely to take the duty seriously for a larger amount of money.)
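The threshold point here is just arithmetic. As a sketch with made-up numbers (the 30% improvement from vetting and the $50/hour opportunity cost are purely illustrative assumptions):

```python
# Back-of-envelope version of the threshold argument. The 30%
# improvement and $50/hour figures are made-up illustrations.
def net_gain(amount, hours=50, improvement=0.30, hourly_cost=50.0):
    """Net value of spending `hours` vetting possible delegates
    before allocating `amount`."""
    return improvement * amount - hourly_cost * hours

print(net_gain(5_000) < 0)    # True: not worth 50 hours for a $5k donation
print(net_gain(100_000) > 0)  # True: clearly worth it for $100k
```

The fixed time cost is the same either way, so whether the vetting is worthwhile flips purely on the size of the pot.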
I think your analysis of the alternatives is mostly from the perspective of "what will lead to optimal allocation of resources at the group level?"
But the strongest case for donor lotteries, in my view, isn't in these terms at all. Rather, it's that entering a lottery is often a dominant action from the perspective of the individual donor (if most other things they would consider giving to don't exhibit noticeably diminishing returns over the amount they are attempting to get in the lottery). The winner of a lottery need not be the allocator of the money; they can instead e.g. take longer thinking about whom to delegate allocation power to (I actually think this might often be the "technically correct" move; I don't know how often lottery winners act this way). This dominance argument would go through for a much smaller proportion of possible donors under your alternatives. I'd be interested to hear whether you see another reason that people would donate to these.
I spent a little while thinking about this. My guess is that of the activities I list:
All of those numbers are super crude and I might well disagree with myself if I came back later and estimated again. They also depend on lots of details (like how good the individuals are at executing on those strategies).
Perhaps most importantly, they're excluding the internal benefits -- if these activities are (as I suggest) partly good for practicing some longtermist judgement, then I'd really want to see them as a complement to donation rather than just a competitor.
One argument goes via something like the reference class of global autopoietic information-processing systems: life has persisted since it started several billion years ago; multicellular life similarly; sexual selection similarly. Sure, species go extinct when they're outcompeted, but the larger systems they're part of have only continued to thrive.
The right reference class (on this story) is not "humanity as a mammalian species" but "information-based civilization as the next step in faster evolution". Then we might be quite optimistic about civilization in some meaningful sense continuing indefinitely (though perhaps not about particular institutions or things that are recognisably human doing so).
Some simple formal models also support macroscopic probabilities of indefinite survival: e.g. if in each generation each individual has a number of descendants drawn from a Poisson distribution with parameter 1.1, then there's a positive chance of extinction in each generation, but these chances diminish fast enough (as the population gets enormous) that if you make it through an initial rocky period you're pretty much safe.
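That calculation can be sketched directly: this is a standard Galton-Watson branching process, and the extinction probability for a single lineage is the smallest fixed point of the offspring distribution's probability generating function (the code and the founding-population size of 1,000 are mine, for illustration).

```python
import math

# Galton-Watson branching process matching the example in the text:
# each individual has Poisson(lam) descendants with lam = 1.1.
# The extinction probability q for a lineage started by one individual
# solves q = exp(lam * (q - 1)); iterating from q = 0 converges to the
# smallest such fixed point.
lam = 1.1
q = 0.0
for _ in range(2000):
    q = math.exp(lam * (q - 1.0))

print(round(q, 3))  # 0.824: a single lineage usually still dies out

# With a large founding population N, survival probability is
# 1 - q**N, which is essentially 1: once past the "initial rocky
# period" you're pretty much safe.
print(1 - q ** 1000 > 0.999)  # True
```

The supercritical mean (1.1 > 1) is doing all the work here: with mean offspring at or below 1, standard branching-process theory makes extinction certain no matter how large the population grows.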
That model is clearly too optimistic because it doesn't admit crises with correlated problems across all the individuals in a generation. But then there's a question of how high the unavoidable background rate of such crises is (i.e. the rate that remains even given a very sophisticated and well-resourced attempt to prevent them).
On current understanding, I think the lower bounds for the rate of exogenous such events rely on things like false vacuum decay (and maybe gamma-ray bursts while we're local enough), and those lower bounds are really quite low, so it's fairly plausible that the true rate is really low (though also plausible that it's higher, because there are risks that aren't yet observed or understood).
Bounding endogenous risk seems a bit harder to reason about. I think that you can give kind of fairytale/handwaving existence proofs of stable political systems (which might however be utterly horrific to us). Then it's at least sort of plausible that there would be systems which are simultaneously extremely stable and also desirable.
My primary blueprint is as follows:
I want the world in 30 years time to be in as good a state as it can be in order to face whatever challenges that will come next.
I like this! I sometimes use a perspective which is pretty close (though I often think about 50 years rather than 30, and hold it in conjunction with "what are the challenges we might need to face in the next 50 years?"). I think 30 vs 50 years is a kind of interesting question. I've gone with 50 because if I imagine e.g. that we're going to face critical junctures with the development of AI in 40 years, that's still within the scope where I can envision causal pathways for impact — e.g. critical technology being developed by people who studied under professors who are currently students making career decisions. By 60 years it feels a bit too tenuous for me to hold on to.
I kind of agree that if looking at policy specifically a shorter time horizon feels good.
I appreciate the pushback!
I have two different responses (somewhat in tension with each other):