zdgroff

PhD Candidate, Department of Economics @ Stanford University
1451 karma · Joined May 2015 · Pursuing a doctoral degree (e.g. PhD)
zachfreitasgroff.com

Bio


I am a PhD candidate in Economics at Stanford University. Within effective altruism, I am interested in broad longtermism, long-term institutions and values, and animal welfare. In economics, my areas of interest include political economy, behavioral economics, and public economics.

Comments (235)

Very strong +1 to all this. I honestly think it's the most neglected area relative to its importance right now. It seems plausible that the vast majority of future beings will be digital, so it would be surprising if longtermism does not imply much more attention to the issue.

zdgroff · 4d

I take 5%-60% as an estimate of how much of human civilization's future value will depend on what AI systems do, but that does not necessarily exclude human autonomy. If humans determine what AI systems do with the resources they acquire and the actions they take, then AI could be extremely important while humans still retain autonomy.

I don't think this exercise really left me more or less concerned about losing autonomy over resources, but it did make it starker that there's a large chance of AI reshaping the world in ways that go beyond human extinction. It's not clear how much of that entails the loss of human autonomy. In rough, nebulous terms, I'm inclined to think AI will erode human autonomy over 10% of our future, taking 10% as a sort of midpoint between the extinction likelihood and the degree of AI influence over our future. I think my previous views would have been in that ballpark.
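(As a purely illustrative sketch of what such a midpoint could look like, with hypothetical numbers rather than figures from the report: an extinction likelihood of $p \approx 3\%$ and AI influence over $q \approx 30\%$ of the future have a geometric mean of $\sqrt{pq} = \sqrt{0.03 \times 0.30} \approx 9.5\%$, i.e. roughly 10%.)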

The exercise did lead me to rate the importance of AI higher than I previously did and the likelihood of extinction per se lower (though my final beliefs place all these probabilities higher than the priors in the report).

zdgroff · 4mo

I guess I would think that if one wants to argue for democracy as an intrinsic good, that argument gets you global democracy (and global control of EA funds); it's practical and instrumental considerations (which, in my view, are all the considerations anyway) that weigh against it.

zdgroff · 4mo

It seems like the critics would claim that EA is, if not coercing or subjugating, at least substantially influencing something like the world population in a way that meets the criteria for democratisation. This seems to be the claim in arguments about billionaire philanthropy, for example. I'm not defending or vouching for that claim, but I think whether we are in a sufficiently different situation may be contentious.

zdgroff · 5mo

Well, I think the Most Important Century (MIC) thesis relies on some sort of discontinuity this century, and once we get into the range of precedented growth rates, a discontinuity looks less likely.

But we might not be disagreeing much here. It seems like a plausibly important update, but I'm not sure how large.

zdgroff · 5mo

This is a valuable point, but I do think that giving real weight to a world where we have neither extinction nor 30% growth would still be an update to important views about superhuman AI. It seems like evidence against the Most Important Century thesis, for example.

zdgroff · 5mo

It might be challenging to borrow (though I'm not sure), but there seem to be plenty of sophisticated entities that should be selling off their bonds and aren't. The top-level comment does cut into the gains from shorting (as the OP concedes), but I think it's right that there are borrowing-esque things to do.

zdgroff · 5mo

I'm trying to make sure I understand: is this a more colorful version of the same point the OP makes at the end of "Bet on real rates rising"?

The other risk that could motivate not making this bet is the risk that the market – for some unspecified reason – never has a chance to correct, because (1) transformative AI ends up unaligned and (2) humanity’s conversion into paperclips occurs overnight. This would prevent the market from ever “waking up”.

However, to be clear, expecting this specific scenario requires both: 

  1. Buying into specific stories about how takeoff will occur: specifically, Yudkowskian foom-type scenarios with fast takeoff.
  2. Having a lot of skepticism about the optimization forces pushing financial markets towards informational efficiency.

You should be sure that your beliefs are actually congruent with these requirements, if you want to refuse to bet that real rates will rise. Additionally, we will see that the second suggestion in this section (“impatient philanthropy”) is not affected by the possibility of foom scenarios.
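(For reference, and not something the OP spells out in this excerpt: the textbook mechanism behind the "real rates rising" bet is the Ramsey/consumption-Euler relation

$$r = \rho + \eta g,$$

where $r$ is the real interest rate, $\rho$ pure time preference, $\eta$ the inverse elasticity of intertemporal substitution, and $g$ expected consumption growth. With illustrative values $\rho = 2\%$ and $\eta = 1$, a jump in expected growth from $g = 2\%$ to $g = 30\%$ would move the implied real rate from $4\%$ to $32\%$, which is why markets pricing in transformative AI should show sharply higher real rates. The parameter values here are hypothetical.)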

zdgroff · 5mo

It doesn't seem all that relevant to me whether traders have a probability like that in their heads. Whether they hold a low probability or are simply not thinking about it, they're approximately leaving money on the table in a short-timelines world, which should be surprising: people have a large incentive to hunt for important probabilities they're ignoring.

Of course, there are examples (cf. behavioral economics) of systematic biases in markets. But even within behavioral economics, it is well known that persistent, large-scale biases are hard to find in financial markets.
