In the recent article Some promising career ideas beyond 80,000 Hours' priority paths, Arden Koehler (on behalf of the 80,000 Hours team) highlights the pathway “Become a historian focusing on large societal trends, inflection points, progress, or collapse”. I share the view that historical research is plausibly highly impactful, and I’d be excited to see more people explore that area.
I commented on that article to list some history topics I’d be excited to see people investigate, as well as to provide some general thoughts on the intersection of history research and effective altruism. Arden suggested I adapt that comment into a top-level post, which led me to write this.
Note that:
- As zdgroff points out, you don’t actually have to be a historian to do this sort of historical research. (I’d add that you don’t even necessarily have to be in academia at all.)
- I’m sure there’s at least some relevant existing work on each of these topics. What I’m suggesting is that it seems likely there’s room for more work, work better targeted towards informing decisions in areas EAs care about, and/or summaries and syntheses of existing work (for EAs unfamiliar with that work).
- I have basically no background in academic history myself, am only ~6 months into my EA-aligned research career, and wrote this post fairly quickly.
- I lean towards longtermism, which influenced which history topics came to mind for me.
- Thus, this post should be seen merely as a starting point. I expect I’ve failed to include some topics it could be valuable to investigate.
- I’d therefore be really keen to see people comment on this post to mention additional topics, their thoughts or criticisms regarding anything I say here, or additional general thoughts on the intersection of history research and EA.
10 history topics it might be very valuable to investigate
(Note: The article Some promising career ideas beyond 80,000 Hours' priority paths also mentions something similar to the 1st and 3rd of these topics.)
1. The history of various types of growth and progress (economic, intellectual, technological, moral, political, etc.)
Investigations into this topic could give us evidence about:
- What developments are likely in the future
- How tractable influencing the speed or direction of various forms of growth and progress might be, and what the best interventions for doing that might be
- See also We Need a New Science of Progress.
- How severe and lasting the consequences of civilizational collapse and global (but non-existential) catastrophes might be, and thus how much we should prioritise work on those issues
- For example, let’s say humanity is currently experiencing various positive trends, but we discover these trends aren’t very common across different times and societies and appear to depend on many conditions being just right. We might then have additional reason to see those trends as “fragile” and worth protecting from various types of disruptions.
- See The long-term significance of reducing global catastrophic risks and Civilization Re-Emerging After a Catastrophic Collapse.
I'd include as part of this topic research into trends in various forms of violence over time. See e.g. The Better Angels of Our Nature and What are the implications of the offence-defence balance for trajectories of violence?
2. The history of societal collapse and recovery
Investigations into this topic could provide evidence about things like how high existential and global catastrophic risks are, how likely humanity is to recover from a collapse, how civilization might be changed by the process of collapse and recovery, and what we can do to reduce the chances of collapse and/or increase the chances of a positive recovery.
Some relevant sources can be found here.
3. The history of the growth, influence, collapse, etc. of various social and intellectual movements
Investigations into this topic could provide evidence relevant to what might happen to the EA movement or related movements (e.g., the rationality, animal advocacy, and AI safety communities). That could in turn help us assess how valuable an intervention that relies on the continued presence of a particular movement is, how much we should prioritise activities that would be robust to some degree of movement collapse, how valuable movement-building activities are, and what our philanthropic discount rate should be.
In addition to informing our predictions of how certain movements might grow, have influence, collapse, etc., investigations of this topic could inform our efforts to positively influence those processes. For example, if we learn more about what factors seem to have often made the collapse of movements somewhat similar to EA more likely, we can try to avoid or counteract such factors.
Some relevant sources can be found here.
4. The history of efforts to regulate technology (or otherwise influence the direction or applications of technological development)
See Grace and Grace. (I haven’t properly read those works, but they seem relevant to this topic, as well as the next one.)
See here for sources related to differential progress, differential intellectual progress, and/or differential technological development.
5. The history of proliferation and nonproliferation efforts in the case of nuclear weapons or other weapons/technologies
This is of course related to the previous topic.
6. The history of predictions (especially long-range predictions and predictions of things like extinction), millenarianism, and how often people have been right vs wrong about these and other things
Investigations into this topic could give us evidence relevant to how much to trust predictions of various kinds, which is relevant to things like whether we're at the Hinge of History and how high existential risk is. We currently seem to know very little about this. See e.g. Muehlhauser, Aird, and Aird (draft; relevant section begins "How often have").
7. The history of moral circle expansion
Investigations into this topic could inform future efforts to expand moral circles along various dimensions (e.g., to nonhuman animals, to future humans, to future digital minds). Such investigations could also perhaps inform us on questions like how good the future is likely to be “by default”, and how much we should prioritise preventing extinction vs improving humanity’s likely trajectory conditional on survival (see Crucial questions for longtermists: Overview).
See here for some relevant sources.
8. The history of legal and political efforts to represent or benefit various neglected populations (future generations, animals, slaves, etc.)
Investigations into this topic could help us assess how much we should prioritise more efforts of this kind, and how best to implement such efforts. Additionally, as with investigations of the history of moral circle expansion, investigations of this topic could also perhaps inform us on questions like how good the future is likely to be “by default”, and how much we should prioritise preventing extinction vs improving humanity’s likely trajectory conditional on survival.
I expect some related work has been done by Tyler John (who has written Longtermist Institutional Design and Policy: A Literature Review) and the UK’s All-Party Parliamentary Group for Future Generations. But I don’t actually know the details of their work.
9. Counterfactual history related to what factors might’ve led various totalitarian regimes to last a long time, and how long they might’ve lasted if those factors had been present
Relevant regimes include Nazi Germany and the Soviet Union.
Investigations of this topic could inform how high the risk from dystopias/totalitarianism is, and how we can reduce that risk.
I'd guess that mainstream historians won’t have neglected the question of what factors might have led those regimes to last, but will have neglected the question of just how long those regimes could’ve lasted. But that’s purely a guess.
See here for some relevant sources.
10. The history of risks and harms from individuals with above-average levels of various psychological traits (e.g., sadism, psychopathy, narcissism, Machiavellianism)
For an idea of why this topic might be important, what some key questions might be, and what decisions could be informed by research into this topic, see Reducing long-term risks from malevolent actors.
This might involve looking into:
- The risks and harms caused by individuals like Hitler, Stalin, Mao, and Genghis Khan
- Whether these individuals indeed seem to have had high levels of relevant psychological traits, and whether those high levels seem to have preceded or followed them gaining power
- What conditions or institutions seem to have been successful in limiting risks and harms from such individuals in certain times and places
- Other such things.
(I may soon start doing research somewhat related to this topic. So if this topic seems interesting to you, feel free to get in touch.)
General thoughts on the intersection of history research and EA
From what I’ve seen, a recurring theme is that:
- EAs without a background in history have done relatively brief analyses of many of the above topics.
- Some such analyses can be found via some of the above links.
- To be clear, I’m not saying these analyses were bad, and in fact I’ve quite appreciated many of them.
- Other people have found those analyses very interesting, and have possibly made big decisions based on them.
- But there’s been no deeper or more rigorous follow-up analysis.
And I think some of those topics, or some subtopics, haven’t even had a brief analysis from EAs.
I know less about how neglected these topics are within mainstream academia. But it seems likely that there’s at least room for summaries and syntheses for EAs, and/or investigations that are better targeted towards informing decisions in areas EAs care about.
I'd therefore be quite excited to see more people in EA (or at least interacting with EA; see Community vs Network) who are skilled at and interested in history research. As noted above, such people could be historians, but could also be other academics or even people outside of academia.
A potential counterexample to the above “recurring theme” is AI Impacts' research into “historic cases of discontinuously fast technological progress”. My understanding is that that research has indeed been done by EAs without a background in history, but also that it seems quite thorough and rigorous, and possibly more useful for informing key decisions on that topic than most academic historians’ work would’ve been. (But I hold that view very tentatively, and haven’t looked into that work in great detail.) I'm not sure if that's evidence for or against the value of EAs becoming historians.
EDIT: Jamie Harris suggests some of the Sentience Institute's research as potentially another counterexample to that "recurring theme", which sounds right to me.
There are also other considerations that push for or against pursuing projects or career pathways that haven’t yet been taken up by many EAs (including but not limited to history research). For example, doing that could provide more information value, but conversely could be harder because there’s less impact-focused advice or mentorship available for that pathway. For more on that matter, see Some promising career ideas beyond 80,000 Hours' priority paths and Thoughts on doing good through non-standard EA career pathways.
People who are considering doing EA-aligned research might find it useful to watch the EAG 2018 talk From the Neolithic Revolution to the Far Future: How to do EA History.
Finally, as mentioned earlier, this post should be seen merely as a starting point, and I’d encourage people to comment to mention additional topics, their thoughts or criticisms regarding anything I say here, or additional general thoughts on the intersection of history research and EA.
For a wider range of potentially valuable research projects one could do, see A central directory for open research questions.
People interested in this post may also be interested in 80k's recent interview with Tom Moynihan "on why prior generations missed some of the biggest priorities of all". Here's a relevant excerpt:
---
Rob Wiblin: Let’s push on and talk about intellectual history as a high-impact career. So yeah, as far as I know, we’ve never had a historian on the show before, which obviously means we haven’t had an intellectual historian either. And I think you’re reasonably familiar with 80,000 Hours’ goal of trying to help people have a bigger social impact with their career. Do you think some listeners should consider intellectual history as a potentially valuable career path to go on? And if so, why?
Tom Moynihan: Yeah, so I think that insofar as longtermism, EA-aligned with longtermism is about affecting the far future, trying to shape it positively, I think that there is actually a good case for history — not necessarily intellectual history, but history broadly — to play a bigger role in this new way of approaching priorities based on long arcs of history. So I do think that it can be impactful in the sense that it can actually derive lots of information value, so you can gain these nice insights and these things that we’ve been talking about a lot of, you almost get a sense of the heuristics of background assumptions or ‘crucial considerations.’ So that’s the term that Bostrom uses to basically describe a piece of information or knowledge that changes your whole priorities. So I think the example he uses is, if you’re lost in the woods and you’re using a compass, then you realize your compass is broken, that’s a crucial consideration.
Rob Wiblin: I guess the idea being that it doesn’t just mean that you should go one degree to the right, it means that everything is thrown into doubt.
Tom Moynihan: Exactly. So I think that it can be high-impact in the sense that there is a lot of information value to be derived here. I’ve noticed, there’s a post on the EA Forum of potentially valuable research areas in history, I think those are all brilliant. I think there was also an 80,000 Hours post talking about non-standard careers outside of the major priority areas, and one of them was a historian with a long-term arc of history specialization. Yeah, and so I think that there’s definitely a scope for this. I think that historians tend to not be EA-aligned, so there’s value for EAs to go and become historians and figure out the useful stuff. However, it is a non-standard career, and it’s also highly competitive and risky. If you want to reliably have an impact, it’s definitely not one to be advised. But if you want to take a big risk, maybe yeah.
Rob Wiblin: I think that’s probably the case with almost all academic or research careers — that, again, it’s hits based. Most researchers don’t have a massive social impact, but some of them really make important discoveries. I guess I would encourage people to make peace with that, rather than just try to play it super safe, because that limits your options so severely.
Tom Moynihan: Yeah, so I think that that’s the basic message that I want to give, is that, I think there’s scope for a lot of valuable information to be mined here, but the problem with that is that you don’t know what information… Particularly when you’re trying to find these long-term arcs or these stories about intellectual, moral, material, economic progress, you don’t know what you’re looking for ahead of time. I know that applies to almost all search functions in a sense, but you don’t know if it’s going to pay out in the end. So yeah, I do think because of that non-standardness and riskiness, I think that’s definitely something to consider. But I do think that, to put it simply, in EA and longtermism there’s so much history, there’s so much talk of history. The hinge of history, moral circle expansion, these are all historical ideas, progress itself is a historical idea. So there’s definite scope to do valuable work here.
[...]
Rob Wiblin: Are there any particularly valuable questions within intellectual history that you’d like to see people investigate that haven’t already come up? We’ve talked about quite a few.
Tom Moynihan: So, I think the question of the history of moral circle expansion and what are the causes that shift people’s intuitions outside of the kind of baseline of prejudice. Because you can go back and find people arguing for various very forward-thinking moral positions earlier than that actually spilled out and became a wider movement or a wider cause. So, something must’ve happened to create that critical mass. I think that that could be interesting. The Needham question that I spoke about earlier, this question of why one civilization you have over here, it’s actually far more technologically advanced…how they have something like a steam engine to open doors in the palaces, but just decide not to use it elsewhere. And then you have another civilization where something happens there that means that science locks in as an institution that perpetuates itself, perpetuates knowledge in a unique way.
Tom Moynihan: What are the institutions behind that? I think that there’s interesting research to be done there. Of course, and this is one that I’ve seen in various places, is researching the dynamics of lock-in. So, instances in the past where we can see clear path dependence in culture, values, et cetera. Also, again, another obvious one is studying the rise and fall of civilizations. What creates civilizational resilience? I think that making inductions from the past is dangerous because when civilizations have risen and fallen in the past, often it wasn’t a globalized technological civilization in the same way we have today, but people who have worked in this field have already kind of made that point. Tech trees, I think, are a really important and interesting place to try and look at. In a sense recreating the evolutionary tree of life, trying to do that for technology. And then that leads me to the final one that I find really interesting.
Tom Moynihan: And this comes from an idea that I got from a researcher Karim Jebari. He has, I think it’s a preprint paper currently, but it’s called Replaying history’s tape. And he’s taking those ideas of contingency and convergence from the biological sciences and seeing if they can be meaningfully applied to civilization and cultural progress and technological progress. So, as I was saying earlier, I think we do tend to overestimate the convergence or the recurrence or repeatability of a lot of insights, ideas, technologies. And the line is of course blurred. I would really love a map of cultural progress across different cultures, civilizations, and trying to map how convergent some might be.
Tom Moynihan: And obviously the way of measuring this, and Jebari kind of puts this forward in the paper, is that if you can see a cultural practice appearing independently in lots of places, you can kind of presume that it’s convergent in the same way as evolution. It becomes interesting because then when you get a more globalized society, it can appear in one place and then spread. So, there’s lots of interesting questions there I think to be had. And then, again, that can affect our judgments of how severe certain collapse events or very destructive global catastrophic risks are. So, I think there’s lots to be done there.