Meta:
- I'm re-posting this from my Shortform (with minor edits) because someone indicated it might be useful to apply tags to this post.
- This was originally written as a quick summary of my current (potentially flawed) understanding in an email conversation.
- I'm not that familiar with the human progress/progress studies communities and would be grateful if people pointed out where my impression of them seems off, as well as for takes on whether I've correctly identified the key points of agreement and disagreement.
- I think some important omissions from my summary might include:
- Potential differences in underlying ethical views
- More detail on why at least some 'progress studies' proponents have significantly lower estimates for existential risk this century, and potential empirical differences regarding how to best mitigate existential risk.
- Another caveat is that both the progress studies and the longtermist EA communities are sufficiently large that there will be significant diversity of views within these communities - which my summary sweeps under the rug.
[See also this reply from Tony from the 'progress studies' community.]
Here's a quick summary of my understanding of the 'longtermist EA' and 'progress studies' perspectives, in a somewhat cartoonish way to gesture at points of agreement and disagreement.
EA and progress studies mostly agree about the past. In particular, they agree that the Industrial Revolution was a really big deal for human well-being, and that this is often overlooked/undervalued. E.g., here's a blog post by someone somewhat influential in EA:
https://lukemuehlhauser.com/industrial-revolution/
Looking to the future, the progress studies community is most worried about the Great Stagnation. They are nervous that science seems to be slowing down, that ideas are getting harder to find, and that economic growth may soon be over. Industrial-Revolution-level progress was by far the best thing that ever happened to humanity, but we're at risk of losing it. That seems really bad. We need a new science of progress to understand how to keep it going. Probably this will eventually require a number of technological and institutional innovations, since our current academic and economic systems are what led us into the slowdown.
If we were making a list of the most globally consequential developments from the past, EAs would, in addition to the Industrial Revolution, point to the Manhattan Project and the hydrogen bomb: the point in time when humanity first developed the means to destroy itself. (They might also think of factory farming as an example of how progress might be great for some but horrible for others, at least on some moral views.) So while they agree that the world has been getting a lot better thanks to progress, they're also concerned that progress exposes us to new nuclear-bomb-style risks. Regarding the future, they're most worried about existential risk - the prospect of permanently forfeiting our potential of a future that's much better than the status quo. Permanent stagnation would be an existential risk, but EAs tend to be even more worried about catastrophes from emerging technologies such as misaligned artificial intelligence or engineered pandemics. They might also be worried about a potential war between the US and China, or about extreme climate change. So in a sense they aren't as worried about progress stopping as they are about progress being mismanaged and having catastrophic unintended consequences. They therefore aim for 'differential progress' - accelerating those kinds of technological or societal change that would safeguard us against these catastrophic risks, and slowing down whatever would expose us to greater risk. So concretely they are into things like "AI safety" or "biosecurity" - e.g., making machine learning systems more transparent so we could tell if they were trying to deceive their users, or implementing better norms around the publication of dual-use bio research.
The single best book on this EA perspective is probably The Precipice by my FHI colleague Toby Ord.
Overall, EA and the progress studies perspective agree on a lot - they're probably closer to each other than either would be to any other popular 'worldview'. But EAs probably tend to think that human progress proponents are too indiscriminately optimistic about further progress, and too generically focused on keeping progress going. (Both because it might be risky and because EAs probably tend to be more "optimistic" that progress will accelerate anyway, most notably due to advances in AI.) Conversely, human progress proponents tend to think that EA is insufficiently focused on ensuring a future of significant economic growth, and that the risks imagined by EAs either aren't real or can't be done much about except by encouraging innovation in general.
I hope to have time to read your comment and reply in more detail later, but for now just one quick point because I realize my previous comment was unclear:
I am actually sympathetic to an "'egoistic', agent-relative, or otherwise nonconsequentialist perspective". I think overall my actions are basically controlled by some kind of bargain/compromise between such a perspective (or perhaps perspectives) and impartial consequentialism.
The point is just that, from within these other perspectives, I happen to not be that interested in "impartially maximizing value over the next few hundred years". I endorse helping my friends, and maybe I endorse volunteering in a soup kitchen or something like that; I also endorse being vegetarian or donating to AMF, or otherwise reducing global poverty and inequality (and yes, within these 'causes' I tend to prefer larger over smaller effects); I also endorse reducing far-future s-risks and current wild animal suffering, but not quite as much. But this is all guided more by responding to reactive attitudes like resentment and indignation than by any moral theory. It looks a lot like moral particularism, and so it's somewhat hard to move me with arguments in that domain (it's not impossible, but it would require something that's more similar to psychotherapy or raising a child or "things the humanities do" than to doing analytic philosophy).
So this roughly means that if you wanted to convince me to do X, then you either need to be "lucky" that X is among the things I happen to like for idiosyncratic reasons - or X needs to look like a priority from an impartially consequentialist outlook.