Owen_Cotton-Barratt

Comments

Should I transition from economics to AI research?

Note that I think the mechanisms I describe aren't specific to economics but apply to academic research generally -- and will also cover most of how most AI safety researchers (even those not in academia) will have impact.

There are potentially major crux moments around AI, so there's also the potential to do an excellent job engineering real transformative systems to be safe at some point (though most AI safety researchers won't be doing that directly). I guess that the indirect routes to impact for AI safety might feel more exciting because they're more closely connected to the crucial moments -- e.g. you might hope to set some small piece of the paradigm that the eventual engineers of the crucial systems are using, or to support a culture of responsibility among AI researchers, making it less likely that people at the key time ignore something they shouldn't.

Should I transition from economics to AI research?

Finally, I imagine quant trading is a non-starter for a longtermist who is succeeding in academic research. As a community, suppose we already have significant ongoing funding from 3 or so of the world's 3k billionaires. What good is an extra one-millionaire? Almost anyone's comparative advantage is more likely to lie in spending the money, but even more so if one can do so within academic research.

It seems quite wrong to me to present this as so clear-cut. I think if we don't get major extra funding the professional longtermist community might plateau at a stable size in perhaps the low thousands. A successful quantitative trader could support several more people at the margin (a very successful trader could support dozens). If you're a good fit for the crowd, it might also be a good group to network with.

If you're particularly optimistic about future funding growth, or pessimistic about community growth, you might think it's unlikely we end up in that world in a realistic timeframe, but there's likely to still be some hedging value.

To be clear, I mostly wouldn't want people in the OP's situation to drop the PhD to join a hedge fund. But it's worth understanding, e.g., that the main routes to impact in academic research are probably: 

  1. Providing leadership for the academic field from within the field, including:
    1. Paradigm-setting
    2. Culture-setting
  2. Helping students orient to what's important, and providing space for them to work on more important projects
  3. Using academia as a springboard to affect non-academic projects (e.g. being an advisor on particular policy topics, or providing solid support for claims that are broadly useful)

I think for some people those just aren't going to be a great personal fit (even if they can achieve conventional "success" in academia!), so it's worth considering other options.

In this particular case, I'm kind of excited about getting more longtermist economists. But whether it makes sense for the OP to be such a person might depend on e.g. how disillusioned they are with the field.

Alternatives to donor lotteries

I guess I wouldn't recommend the donor lottery to people who wouldn't be happy entering a regular lottery for their charitable giving (though I would usually recommend that they be happy with that regular lottery!).

Btw, I'm now understanding your suggestions as not really alternatives to the donor lottery, since I don't think you buy into its premises, but alternatives to e.g. EA Funds.

(In support of the premise of respecting individual autonomy about where to allocate money: I think that making requests to pool money in a way where rich donors expect to lose control would risk making EA pattern-match at a surface level to a scam, and might drive people away. For a more extreme version of this, imagine someone claiming that as soon as you've decided to donate some money you should send it all to the One True EA Collective fund, so that it can be fairly distributed, and that it would be a weird propagation of wealth to allow rich people to take any time to think about where to give their money. Whether or not you think an optimal taxation system would equalise wealth much more, I think it's fairly clear that the extreme bid that everyone pool donations would be destructive, because it would put off donors.)

Alternatives to donor lotteries

By dominant action I mean "is ~at least as good as other actions on ~every dimension, and better on at least one dimension".
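
A minimal formalization of this notion (my sketch, not from the original comment; the dimensions and scores are hypothetical) is Pareto dominance over per-dimension scores:

```python
# A sketch of the dominance notion above: Pareto dominance over
# per-dimension scores. The dimensions and numbers are hypothetical.

def dominates(a, b):
    """True if scores `a` are at least as good as scores `b` on every
    dimension and strictly better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# e.g. scoring two actions on (expected good done, cost to the donor):
lottery = (7_500, -5_000)  # hypothetical scores
direct = (5_000, -5_000)
print(dominates(lottery, direct))  # True: no worse anywhere, better somewhere
```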

My confusion is something like: there's no new money out there! It's a group of donors deciding to give individually or give collectively. So the perspective of "what will lead to optimal allocation of resources at the group level?" is the right one.

I don't think donor lotteries are primarily about collective giving. As a donor lottery entrant, I'd be just as happy giving $5k for a 5% chance of controlling a $100k pot of pooled winnings as entering a regular lottery where I could give $5k for a 5% chance of winning $100k (which I would then donate)*. In either case I think I'll do more than 20x as much good with $100k as with $5k (mostly because I can spend longer thinking and investigating), so it's worthwhile in expectation.

* Except that I usually don't have good access to that kind of lottery (maybe there would also be tax implications, although perhaps it's fine if the money is all being donated). So the other donors are a logistical convenience, but not an integral part of the idea.
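
To make the expected-value reasoning above concrete, here's a minimal sketch (the research multiplier is an illustrative assumption standing in for the "more than 20x" claim, not a number from the comment):

```python
# A sketch of the expected-value comparison above. The value function is a
# stand-in assumption: the claim is that $100k allocated after more thinking
# and investigating does more than 20x the good of $5k allocated quickly.

DONATION = 5_000
POT = 100_000
P_WIN = DONATION / POT  # 5% chance of controlling the pot

def good_done(amount, research_multiplier):
    # Hypothetical: larger sums justify more research, improving allocation.
    return amount * research_multiplier

ev_direct = good_done(DONATION, research_multiplier=1.0)
ev_lottery = P_WIN * good_done(POT, research_multiplier=1.5)  # any value > 1 suffices

print(f"donate directly:   {ev_direct:,.0f}")   # 5,000
print(f"enter the lottery: {ev_lottery:,.0f}")  # 7,500
```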

My understanding is that past people selected to allocate the pool haven't tended to delegate that allocation power. And indeed, if you're strongly expecting to do so, why not just give the allocation power to that person beforehand, either over your individual donation (e.g. through an EA fund) or over a pool? Why go through the lottery stage?

I don't know that they should strongly expect to do so. But in any case the reason for going through the lottery stage is simple: maybe you'd want to take 50 hours thinking about whether to delegate and to whom, and vetting possible people to delegate to. That time might not be worth spending for a $5k donation, but might well be worth it for a $100k donation. (Additionally, the person you want to delegate to might be more likely to take the duty seriously for a larger amount of money.)

Alternatives to donor lotteries

I think your analysis of the alternatives is mostly from the perspective of "what will lead to optimal allocation of resources at the group level?"

But the strongest case for donor lotteries, in my view, isn't in these terms at all. Rather, it's that entering a lottery is often a dominant action from the perspective of the individual donor (if most other things they would consider giving to don't exhibit noticeably diminishing returns over the amount they are attempting to get in the lottery). The winner of a lottery need not be the allocator for the money; they can instead e.g. decide to take longer thinking about whom they want to delegate allocation power to (I actually think this might often be the "technically correct" move; I don't know how often lottery winners act this way). This dominance argument would go through for a much smaller proportion of possible donors for your alternatives. I'm interested in whether you see another reason that people would donate to these.

Everyday Longtermism

I spent a little while thinking about this. My guess is that of the activities I list:

  • Alice and Bob's efforts look comparable to donating (in external benefit/effort) when the longtermist portfolio is around $100B-$1T/year
  • Clara's efforts look comparable to donating when the longtermist portfolio is around $1B-$10B/year
  • Diya's efforts look comparable to donating when the longtermist portfolio is around $10B-$100B/year
  • Elmo's efforts are harder to assess, because they're closer to directly trying to grow longtermist support, so the value diminishes as the existing portfolio gets larger (just as for donations), and it depends more on underlying quality

All of those numbers are super crude and I might well disagree with myself if I came back later and estimated again. They also depend on lots of details (like how good the individuals are at executing on those strategies).

Perhaps most importantly, they're excluding the internal benefits -- if these activities are (as I suggest) partly good for practicing some longtermist judgement, then I'd really want to see them as a complement to donation rather than just a competitor.

AGB's Shortform

One argument goes via something like the reference class of global autopoietic information-processing systems: life has persisted since it started several billion years ago; multicellular life similarly; sexual selection similarly. Sure, species go extinct when they're outcompeted, but the larger systems they're part of have only continued to thrive.

The right reference class (on this story) is not "humanity as a mammalian species" but "information-based civilization as the next step in faster evolution". Then we might be quite optimistic about civilization in some meaningful sense continuing indefinitely (though perhaps not about particular institutions or things that are recognisably human doing so).

AGB's Shortform

Some fixed models also support macroscopic probabilities of indefinite survival: e.g. if in each generation each individual has a number of descendants drawn from a Poisson distribution with parameter 1.1, then there's a finite chance of extinction in each generation, but these chances diminish fast enough (as the population gets enormous) that if you make it through an initial rocky period you're pretty much safe.
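
For illustration, here's a minimal sketch of that model (a Galton-Watson branching process; the code is mine, not from the original comment). The extinction probability starting from one individual is the smallest fixed point of the offspring distribution's probability generating function, which for Poisson offspring is f(s) = exp(lam * (s - 1)):

```python
import math

# A sketch of the model described above: a Galton-Watson branching process
# with Poisson(1.1) offspring. The extinction probability q (starting from
# one individual) is the smallest fixed point of the offspring pgf
# f(s) = exp(lam * (s - 1)); iterating f from 0 converges to it.

LAM = 1.1  # mean number of descendants per individual

def extinction_prob(lam, iters=1_000):
    q = 0.0
    for _ in range(iters):
        q = math.exp(lam * (q - 1.0))
    return q

q = extinction_prob(LAM)
# From n independent individuals, extinction requires all n lines to die out.
print(f"extinction prob from 1 individual:    {q:.4f}")       # ~0.824
print(f"extinction prob from 100 individuals: {q**100:.1e}")  # ~4e-09
```

So once the population is large, the residual extinction probability is tiny -- the "make it through an initial rocky period and you're pretty much safe" behaviour.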

That model is clearly too optimistic because it doesn't admit crises with correlated problems across all the individuals in a generation. But then there's a question about how high the unavoidable background rate of such crises is (i.e. the rate that remains even given a very sophisticated and well-resourced attempt to prevent them).

On current understanding, I think the lower bounds for the rate of such exogenous events rely on things like false vacuum decay (and maybe gamma-ray bursts, while we're local enough), and those lower bounds are really quite low, so it's fairly plausible that the true rate is really low (though it's also plausible it's higher, because there are risks that aren't observed/understood).

Bounding endogenous risk seems a bit harder to reason about. I think that you can give kind of fairytale/handwaving existence proofs of stable political systems (which might however be utterly horrific to us). Then it's at least sort of plausible that there would be systems which are simultaneously extremely stable and also desirable.

Blueprints (& lenses) for longtermist decision-making

My primary blueprint is as follows:

I want the world in 30 years' time to be in as good a state as it can be in order to face whatever challenges will come next.

I like this! I sometimes use a perspective which is pretty close (though I often think about 50 years rather than 30, and hold it in conjunction with "what are the challenges we might need to face in the next 50 years?"). I think 30 vs 50 years is a kind-of interesting question. I've thought about 50 because if I imagine e.g. that we're going to face critical junctures with the development of AI in 40 years, that's within the scope where I can imagine it being affected by causal pathways that I can envision -- e.g. critical technology being developed by people who studied under professors who are currently students making career decisions. By 60 years it feels a bit too tenuous for me to hold on to.

I kind of agree that, when looking at policy specifically, a shorter time horizon feels good.

Everyday Longtermism

I appreciate the pushback!

I have two different responses (somewhat in tension with each other):

  1. Finding "everyday" things to do will necessitate identifying what's good to do in various situations, even when those aren't the highest-value activities an individual could be undertaking
    • This is an important part of deepening the cultural understanding of longtermism, rather than having all of the discussion be about what's good to do in a particular set of activities that's had strong selection pressure on it
      • This is also important for giving people inroads to be able to practice different aspects of longtermism
      • I think it's a bit like how informal EA discourse often touches on how to do everyday things efficiently (e.g. "here are tips for batching your grocery shopping") -- it's not that these are the most important things to be efficient about, but that all-else-equal it's good, and it's also very good to give people micro-scale opportunities to put efficiency-thinking into practice
    • Note however that my examples would be better if they had more texture:
      • Discussion of the nuance of better or worse versions of the activities discussed could be quite helpful for conveying the nuance of what is good longtermist action
      • To the extent that these are far from the highest value activities those people could be undertaking, it seems important to be up-front about that: keeping tabs on what's relatively important is surely an important part of the (longtermist) EA culture
  2. I'm not sure how much I agree with "probably much less positive than some other things that could be done even by 'regular people', even once there are millions or tens of millions of longtermists"
    • I'd love to hear your ideas for things that you think would be much more positive for those people in that world
      • My gut feeling is that they are at the level of "competitive uses of time/attention (for people who aren't bought into reorienting their whole lives) by the time there are tens of millions of longtermists"
        • It seems compatible with that feeling that there could be some higher-priority things for them to be doing as well -- e.g. maybe some way of keeping immersed in longtermist culture, by being a member of some group -- but that those reach saturation or diminishing returns
        • I think I might be miscalibrated about this; I think it would be easier to discuss with some concrete competition on the table
    • Of course to the extent that these actually are arguably competitive actions, if I believe my first point, maybe I should have been looking for even more everyday situations
      • e.g. could ask "what is the good longtermist way to approach going to the shops? meeting a romantic partner's parents for the first time? deciding how much to push yourself to work when you're feeling a bit unwell?"