Meta:
- I'm re-posting this from my Shortform (with minor edits) because someone indicated it might be useful to apply tags to this post.
- This was originally written as a quick summary of my current (potentially flawed) understanding, in an email conversation.
- I'm not that familiar with the human progress/progress studies communities and would be grateful if people pointed out where my impression of them seems off, as well as for takes on whether I seem correct about what the key points of agreement and disagreement are.
- I think some important omissions from my summary might include:
- Potential differences in underlying ethical views
- More detail on why at least some 'progress studies' proponents have significantly lower estimates for existential risk this century, and potential empirical differences regarding how to best mitigate existential risk.
- Another caveat is that both the progress studies and the longtermist EA communities are sufficiently large that there will be significant diversity of views within these communities - which my summary sweeps under the rug.
[See also this reply from Tony, from the 'progress studies' community.]
Here's a quick summary of my understanding of the 'longtermist EA' and 'progress studies' perspectives, in a somewhat cartoonish way to gesture at points of agreement and disagreement.
EA and progress studies mostly agree about the past. In particular, they agree that the Industrial Revolution was a really big deal for human well-being, and that this is often overlooked/undervalued. E.g., here's a blog post by someone somewhat influential in EA:
https://lukemuehlhauser.com/industrial-revolution/
Looking to the future, the progress studies community is most worried about the Great Stagnation. They are nervous that science seems to be slowing down, that ideas are getting harder to find, and that economic growth may soon be over. Industrial-Revolution-level progress was by far the best thing that ever happened to humanity, but we're at risk of losing it. That seems really bad. We need a new science of progress to understand how to keep it going. Probably this will eventually require a number of technological and institutional innovations, since our current academic and economic systems are what has led us into the current slowdown.
If we were making a list of the most globally consequential developments of the past, EAs would, in addition to the Industrial Revolution, point to the Manhattan Project and the hydrogen bomb: the point in time when humanity first developed the means to destroy itself. (They might also think of factory farming as an example of how progress can be great for some but horrible for others, at least on some moral views.) So while they agree that the world has been getting a lot better thanks to progress, they're also concerned that progress exposes us to new nuclear-bomb-style risks.

Regarding the future, they're most worried about existential risk - the prospect of permanently forfeiting our potential for a future that's much better than the status quo. Permanent stagnation would be an existential risk, but EAs tend to be even more worried about catastrophes from emerging technologies such as misaligned artificial intelligence or engineered pandemics. They might also be worried about a potential war between the US and China, or about extreme climate change. So in a sense they aren't as worried about progress stopping as they are about progress being mismanaged and having catastrophic unintended consequences. They therefore aim for 'differential progress' - accelerating those kinds of technological or societal change that would safeguard us against these catastrophic risks, and slowing down whatever would expose us to greater risk. Concretely, they are into things like "AI safety" or "biosecurity" - e.g., making machine learning systems more transparent so we could tell if they were trying to deceive their users, or implementing better norms around the publication of dual-use bio research.
The single best book on this EA perspective is probably The Precipice by my FHI colleague Toby Ord.
Overall, EA and the progress studies perspective agree on a lot - they're probably closer to each other than either would be to any other popular 'worldview'. Still, EAs probably tend to think that human progress proponents are too indiscriminately optimistic about further progress, and too generically focused on keeping progress going. (Both because doing so might be risky and because EAs probably tend to be more "optimistic" that progress will accelerate anyway, most notably due to advances in AI.) Conversely, human progress proponents tend to think that EA is insufficiently focused on ensuring a future of significant economic growth, and that the risks imagined by EAs either aren't real or can't be done much about except by encouraging innovation in general.
I think this actually does point to a legitimate and somewhat open question about how to deal with uncertainty between different 'worldviews'. Like Open Phil, I'm using 'worldview' to refer to a set of fundamental beliefs that are an entangled mix of philosophical and empirical claims and values.
E.g., suppose I'm uncertain between:
- Worldview A: a broadly longtermist view on which reducing existential risk is the top priority, and the stakes of my decisions are astronomical.
- Worldview B: a view on which what happens over the next few decades matters most, so that things like speeding up economic growth look like top priorities.
One way to deal with this uncertainty is to put the value at stake on each worldview on a "common scale", and then apply expected value reasoning: perhaps on worldview A I can avert quintillions of expected deaths, while on worldview B "only" trillions of lives are at stake in my decision. Even if I have only a low credence in A, after applying expected value I will then end up making decisions based just on A.
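To make this concrete, here's a toy calculation (all numbers are made-up placeholders, not estimates I endorse) showing how the common-scale expected value approach lets a low-credence worldview dominate:

```python
# Toy numbers only: credences in each worldview, and the stakes each worldview
# sees in my decision, on a single "common scale" (expected deaths averted).
credences = {"A": 0.05, "B": 0.95}   # low credence in A, high credence in B
stakes = {"A": 1e18, "B": 1e12}      # quintillions vs. "only" trillions

# Expected value of acting on each worldview's priorities, assuming acting on
# a worldview only realizes its stakes if that worldview is true.
expected_value = {
    "act_on_A": credences["A"] * stakes["A"],   # 5e16
    "act_on_B": credences["B"] * stakes["B"],   # 9.5e11
}

best = max(expected_value, key=expected_value.get)
print(expected_value, "->", best)   # "act_on_A" wins despite the 5% credence
```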
But this is not the only game in town. We might instead think of A and B as two groups of people with different interests trying to negotiate an agreement. In that case, we may have the intuition that A should make some concessions to B even if A were a much larger group, or more powerful, or similar. This can motivate ideas such as variance normalization or the 'parliamentary approach'.
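For instance, here's a toy sketch of one way variance normalization could work (the scores and credences are again made-up placeholders): each worldview's valuations of the options are rescaled to a common variance before being credence-weighted, so no worldview dominates merely by valuing things on a much larger scale.

```python
import statistics

# Toy numbers only: each worldview scores the same options on its own scale.
options = ["reduce_x_risk", "speed_up_growth", "global_health"]
scores = {
    "A": [1e18, 1e15, 1e14],   # worldview A's scale (astronomical numbers)
    "B": [2.0, 9.0, 7.0],      # worldview B's scale (modest numbers)
}
credences = {"A": 0.05, "B": 0.95}

def normalize(xs):
    """Rescale a worldview's scores to mean 0 and standard deviation 1."""
    mu, sigma = statistics.mean(xs), statistics.pstdev(xs)
    return [(x - mu) / sigma for x in xs]

normalized = {w: normalize(s) for w, s in scores.items()}

# Credence-weighted sum of the normalized scores for each option.
totals = {
    opt: sum(credences[w] * normalized[w][i] for w in scores)
    for i, opt in enumerate(options)
}
print(max(totals, key=totals.get))  # here B's top option wins, unlike above
```

The point of the rescaling step is just that B's preferences now carry weight roughly in proportion to my credence in B, rather than being swamped by A's enormous raw numbers.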
(See more generally: normative uncertainty.)
Now, I do have views on this matter that don't make me very sympathetic to allocating a significant chunk of my resources to, say, speeding up economic growth or other things that someone concerned about the next few decades might prioritize. (Both because of my views on normative uncertainty and because I'm not aware of anything sufficiently close to 'worldview B' that I find sufficiently plausible - from my perspective, these kinds of worldviews sit in too awkward a spot between impartial consequentialism and a much more 'egoistic', agent-relative, or otherwise nonconsequentialist perspective.)
But I do think that the most likely way someone could convince me to, say, donate a significant fraction of my income to 'progress studies' or AMF or The Good Food Institute (etc.) would be by convincing me that I actually want to aggregate the different 'worldviews' I find plausible in a different way. This certainly seems more likely to change my mind than an argument aiming to show that, taking longtermism for granted, we should prioritize one of these other things.
[ETA: I forgot to add that another major consideration is that, at least on some plausible estimates and my own best guess, existential risk this century is so high - and our ability to reduce it sufficiently good - that even if I thought I should prioritize primarily based on short time scales, I might well end up prioritizing reducing x-risk anyway. See also, e.g., here.]
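As a rough illustration of that last point, here's a back-of-envelope calculation that counts only people alive today; all the figures are placeholder assumptions rather than actual estimates:

```python
# Purely illustrative assumptions, not estimates from this post:
current_population = 8e9      # people alive today
x_risk_this_century = 0.1     # assumed total existential risk this century
relative_reduction = 0.01     # assume an intervention removes 1% of that risk
cost = 1_000_000_000          # assumed cost of the intervention, in dollars

absolute_reduction = x_risk_this_century * relative_reduction        # 0.001
expected_lives_saved = current_population * absolute_reduction       # 8 million
cost_per_expected_life = cost / expected_lives_saved                 # ~$125

print(f"{expected_lives_saved:.0f} expected lives saved, "
      f"~${cost_per_expected_life:.0f} per expected life")
```

On assumptions in this ballpark, x-risk reduction can look competitive with standard near-term interventions even before counting any future generations; of course, everything turns on whether such assumptions are defensible.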