
In the recent article Some promising career ideas beyond 80,000 Hours' priority paths, Arden Koehler (on behalf of the 80,000 Hours team) highlights the pathway “Become a historian focusing on large societal trends, inflection points, progress, or collapse”. I share the view that historical research is plausibly highly impactful, and I’d be excited to see more people explore that area.

I commented on that article to list some history topics I’d be excited to see people investigate, as well as to provide some general thoughts on the intersection of history research and effective altruism. Arden suggested I adapt that comment into a top-level post, which led me to write this.

Note that:

  • As zdgroff points out, you don’t actually have to be a historian to do this sort of historical research. (I’d add that you don’t even necessarily have to be in academia at all.)
  • I’m sure there’s at least some relevant existing work on each of these topics. What I’m suggesting is that it seems likely there’s room for more work, work better targeted towards informing decisions in areas EAs care about, and/or summaries and syntheses of existing work (for EAs unfamiliar with that work).
  • I have basically no background in academic history myself, am only ~6 months into my EA-aligned research career, and wrote this post fairly quickly.
  • I lean towards longtermism, which influenced which history topics came to mind for me.
  • Thus, this post should be seen merely as a starting point. I expect I’ve failed to include some topics it could be valuable to investigate.
  • I’d therefore be really keen to see people comment on this post to mention additional topics, their thoughts or criticisms regarding anything I say here, or additional general thoughts on the intersection of history research and EA.

10 history topics it might be very valuable to investigate

(Note: The article Some promising career ideas beyond 80,000 Hours' priority paths also mentions something similar to the 1st and 3rd of these topics.)

1. The history of various types of growth and progress (economic, intellectual, technological, moral, political, etc.)

Investigations into this topic could give us evidence about a range of questions relevant to EA priorities.

On economic growth, see here and here.

I'd include as part of this topic research into trends in various forms of violence over time. See e.g. The Better Angels of Our Nature and What are the implications of the offence-defence balance for trajectories of violence?

2. The history of societal collapse and recovery

Investigations into this topic could provide evidence about things like how high existential and global catastrophic risks are, how likely humanity is to recover from a collapse, how civilization might be changed by the process of collapse and recovery, and what we can do to reduce the chances of collapse and/or increase the chances of a positive recovery.

Some relevant sources can be found here.

3. The history of the growth, influence, collapse, etc. of various social and intellectual movements

Investigations into this topic could provide evidence relevant to what might happen to the EA movement or related movements (e.g., the rationality, animal advocacy, and AI safety communities). That could in turn help us assess how valuable an intervention that relies on the continued presence of a particular movement is, how much we should prioritise activities that would be robust to some degree of movement collapse, how valuable movement-building activities are, and what our philanthropic discount rate should be.

In addition to informing our predictions of how certain movements might grow, have influence, collapse, etc., investigations of this topic could inform our efforts to positively influence those processes. For example, if we learn more about what factors seem to have often made the collapse of movements somewhat similar to EA more likely, we can try to avoid or counteract such factors.

Some relevant sources can be found here.

4. The history of efforts to regulate technology (or otherwise influence the direction or applications of technological development)

See Grace and Grace. (I haven’t properly read those works, but they seem relevant to this topic, as well as the next topic.)

See here for sources related to differential progress, differential intellectual progress, and/or differential technological development.

5. The history of proliferation and nonproliferation efforts in the case of nuclear weapons or other weapons/technologies

This is of course related to the previous topic.

6. The history of predictions (especially long-range predictions and predictions of things like extinction), millenarianism, and how often people have been right vs wrong about these and other things

Investigations into this topic could give us evidence relevant to how much to trust predictions of various kinds, which is relevant to things like whether we're at the Hinge of History and how high existential risk is. We currently seem to know very little about this. See e.g. Muehlhauser, Aird, and Aird.

7. The history of moral circle expansion

Investigations into this topic could inform future efforts to expand moral circles along various dimensions (e.g., to nonhuman animals, to future humans, to future digital minds). Such investigations could also perhaps inform us on questions like how good the future is likely to be “by default”, and how much we should prioritise preventing extinction vs improving humanity’s likely trajectory conditional on survival (see Crucial questions for longtermists: Overview).

See also On the longtermist case for working on farmed animals [Uncertainties & research ideas]. And see here for some relevant sources.

8. The history of legal and political efforts to represent or benefit various neglected populations (future generations, animals, slaves, etc.)

Investigations into this topic could help us assess how much we should prioritise more efforts of this kind, and how best to implement such efforts. Additionally, as with investigations of the history of moral circle expansion, investigations of this topic could also perhaps inform us on questions like how good the future is likely to be “by default”, and how much we should prioritise preventing extinction vs improving humanity’s likely trajectory conditional on survival.

I expect some related work has been done by Tyler John (who has written Longtermist Institutional Design and Policy: A Literature Review) and the UK’s All-Party Parliamentary Group for Future Generations. But I don’t actually know the details of their work.

9. Counterfactual history related to what factors might’ve led various totalitarian regimes to last a long time, and how long they might’ve lasted if those factors had been present

Relevant regimes include Nazi Germany and the Soviet Union.

Investigations of this topic could inform how high the risk from dystopias/totalitarianism is, and how we can reduce that risk.

I'd guess that mainstream historians won’t have neglected the question of what factors might have led those regimes to last, but will have neglected the question of just how long those regimes could’ve lasted. But that’s purely a guess.

See here for some relevant sources.

10. The history of risks and harms from individuals with above-average levels of various psychological traits (e.g., sadism, psychopathy, narcissism, Machiavellianism)

For an idea of why this topic might be important, what some key questions might be, and what decisions could be informed by research into this topic, see Reducing long-term risks from malevolent actors.

This might involve looking into:

  • The risks and harms caused by individuals like Hitler, Stalin, Mao, and Genghis Khan
  • Whether these individuals indeed seem to have had high levels of relevant psychological traits, and whether those high levels seem to have preceded or followed them gaining power
  • What conditions or institutions seem to have been successful in limiting risks and harms from such individuals in certain times and places
  • Other such things.

(I may soon start doing research somewhat related to this topic. So if this topic seems interesting to you, feel free to get in touch.)

General thoughts on the intersection of history research and EA

From what I’ve seen, a recurring theme is that:

  1. EAs without a background in history have done relatively brief analyses of many of the above topics.
    • Some such analyses can be found via some of the above links.
    • To be clear, I’m not saying these analyses were bad, and in fact I’ve quite appreciated many of them.
  2. Other people have found those analyses very interesting, and have possibly made big decisions based on them.
  3. But there’s been no deeper or more rigorous follow-up analysis.

And I think some of those topics, or some subtopics, haven’t even had a brief analysis from EAs.

I know less about how neglected these topics are within mainstream academia. But it seems likely that there’s at least room for summaries and syntheses for EAs, and/or investigations that are better targeted towards informing decisions in areas EAs care about.

I'd therefore be quite excited to see more people in EA (or at least interacting with EA; see Community vs Network) who are skilled at and interested in history research. As noted above, such people could be historians, but could also be other academics or even people outside of academia.

A potential counterexample to the above “recurring theme” is AI Impacts' research into “historic cases of discontinuously fast technological progress”. My understanding is that that research has indeed been done by EAs without a background in history, but also that it seems quite thorough and rigorous, and possibly more useful for informing key decisions on that topic than work on that topic by most academic historians would’ve been. (But I hold that view very tentatively, and haven’t looked into that work in great detail.) I'm not sure if that's evidence for or against the value of EAs becoming historians.

(EDIT: Jamie Harris suggests some of the Sentience Institute's research as potentially another counterexample to that "recurring theme", which sounds right to me.)

There are also other considerations that push for or against pursuing projects or career pathways that haven’t yet been taken up by many EAs (including but not limited to history research). For example, doing so could provide more information value, but conversely could be harder because there’s less impact-focused advice or mentorship available for that pathway. For more on that matter, see Some promising career ideas beyond 80,000 Hours' priority paths and Thoughts on doing good through non-standard EA career pathways.

People who are considering doing EA-aligned research might find it useful to watch the EAG 2018 talk From the Neolithic Revolution to the Far Future: How to do EA History.

Finally, as mentioned earlier, this post should be seen merely as a starting point, and I’d encourage people to comment to mention additional topics, their thoughts or criticisms regarding anything I say here, or additional general thoughts on the intersection of history research and EA.

For a wider range of potentially valuable research projects one could do, see A central directory for open research questions.

Comments

I'm excited to see this post! Thanks for the suggestions, a few of which I hadn't considered. In general, though, this is an area I've thought about in various ways at various points, so here's my list of an additional "9 history topics it might be very valuable to investigate" (with some overlap with your list)!


I'll start with some examples of categories of historical projects we've worked on at Sentience Institute.

1. The history of past social movements

Some overlap with your categories 3 and 8. This is to inform social movement strategy. At Sentience Institute, we've been focusing on movements that are 1) relatively recent, and 2) driven by allies rather than the intended beneficiaries of the movement. This is because we're seeking strategic lessons for the farmed animal movement, although I've recently been thinking about how this work applies to other forms of moral circle expansion work, e.g. for artificial sentience (I have a literature review of writings on this coming out soonish).

Conducted by SI:

Not conducted by SI, but highly relevant:

I've written a fuller post, "What Can the Farmed Animal Movement Learn from History", which discusses some methodological considerations; some of the discussion could be relevant to almost any "What can we learn about X from history" question of interest to the EA movement. (Also available as a talk here.)


2. The history of new technologies, the industries around them, and efforts to regulate them.

This overlaps with your category 4. Sentience Institute's interest has been in learning strategic lessons for the field of cellular agriculture, cultured meat, and highly meat-like plant-based foods, to increase the likelihood that these technologies are successfully brought to market and to maximise the effects that these technologies have on displacing animal products.

Conducted by SI:


3. Assessing the tractability of changing the course of human history by looking at historical trajectory shifts (or attempts at them).

Covered briefly in this post I wrote on "How tractable is changing the course of history?" (March 12, 2019). I didn't do it very systematically. I was trying to establish the extent to which the major historical trajectory shifts that I examined were influenced by 1) thoughtful actors, 2) hard-to-influence indirect or long-term factors, 3) contingency, i.e. luck plus hard-to-influence snap decisions by other actors.

One approach could be to create (crowdsource?) a large list of possible historical trajectory shifts to investigate, then pick them based on: 1) a balance of types of shift, covering each of military, technological, and social trajectory shifts, aiming for representativeness, 2) a balance of magnitudes of the shifts, 3) time since the shift, and 4) availability of evidence.

Some useful feedback and suggestions I received when I presented this work at a workshop run by the Global Priorities Institute:

  • Gustav Arrhenius of the Institute for Futures Studies suggested to me that there was more rigorous discussion of grand historical theories than I was implying in that post. He recommended reading works by Pontus Strimling of the Institute for Futures Studies, as well as work by Jerry Cohen on Marxism and by Marvin Harris on cultural materialism.
  • Christian Tarsney (GPI) suggested that a greater case for tractability is in shaping the aftermath of big historical events (e.g. world wars) rather than in causing those major events to occur.
  • William MacAskill (GPI) suggested that rather than seeking out any/all types of trajectory shifts, it might be more useful to look specifically for times where individuals knew what they wanted to change, and then investigate whether they were able to do that or not. E.g., what's the "EA" ask for people at the time of the French Revolution? It's hard to know what would have been useful. There might be cases to study where people had clearer ideas about how to shape the world for the better, e.g. in contributing to the writing of the Bible.

Some other topics I've thought about much more briefly:


4. The history of the growth, influence, collapse, etc. of various intellectual and academic movements.

Overlaps with your category 3. I think of this as quite different to the history of social movements. Separately from direct advocacy efforts, EA is full of ideas for research fields that could be built or developed. The ones I'm most familiar with are "global priorities research," "welfare biology," and "AI welfare science," but I'm sure there are either more now or there will be soon, as EAs explore new areas. For example, there were new suggestions in David Althaus and Tobias Baumann, "Reducing long-term risks from malevolent actors" (April 29, 2020). So working out how to most effectively encourage the growth and success of research fields seems likely to be helpful.


Various historical research topics that could help to clarify particular risk factors for s-risks materialising in the future

These could each be categories on their own. Examples include:

  • 5. To what extent have past societies prioritised the reduction of risks of high amounts of suffering and how successful have these efforts been?
  • 6. Historical studies of "polarisation and divergence of values."
  • 7. "Case studies of cooperation failures" and other factors affecting the "likelihood and nature of conflict" (some overlap with your category 5; this was suggested by CLR, and I had a conversation with Ashwin Acharya, who also seemed interested in this avenue of research)
  • 8. Study how other instances of formal research have influenced (or failed to influence) critical real-world decisions (suggested by CLR).

9. Perhaps lower priority, but broader studies of the history of various institutions

The focus here would be on building an understanding of the factors that influence their durability. E.g., at a talk at a GPI workshop I attended, someone (Philip Trammell? Anders Sandberg?) noted a bunch of types of institutions that have had some examples endure for centuries: churches, religions, royalty, militaries, banks, and corporations. Why have these institution types been able to last where others have not? And within those categories, why have some lasted where others have not?


Other comments and caveats:

  • Hopefully SI's work offers a second example of an exception to the "recurring theme" you note in that 1) SI's case studies are effectively a "deeper or more rigorous follow-up analysis" after ACE's social movement case study project -- if anything, I worry that they're too deep and rigorous and that this has drastically cut down the number of people who put the time into reading them, and 2) I at least had an undergraduate degree in history :D
  • On the "background in history" thing, my guess is that social scientists will usually actually be better placed to do this sort of work, rather than historians. (Some relevant considerations here)
  • Any of these topics could probably be covered briefly, with low rigour, in ~one month's worth of work (roughly the timeframe of my tractability post, for example), or could literally use up several lifetimes' worth of work. It's a tough call to decide how much time is worth spending on each case study. Some sort of time-capping approach could be useful.
  • Relatedly, at some point, you face the decision of how to aggregate findings and analyse across different movements. I think we're close to this with the first two research avenues I mention that we've been pursuing at SI. So if anyone reading this has ideas about how to pursue this further, I'd be interested in having a chat!
  • Many of the topics discussed here are relevant to Sentience Institute's research interests. If you share those interests, you could apply for our current researcher opening.
  • To write this post I've essentially just looked back through various notes I have, rather than trying to start from scratch and think up any and all topics that could be useful. So there's probably lots we're both missing, and I echo the call for people to think about areas where historical research could be useful.
  • It's long been on my to-do list to go through GPI and CLR's research agendas more thoroughly to work out if there are other suggestions for historical research on there. I haven't done that to make this post so I may have missed things.
  • I was told that the Centre for the Governance of AI's research agenda has lots of suggestions of historical case studies that could be useful, though I haven't looked through this yet.
  • These topics probably vary widely in terms of the cost-effectiveness of time spent researching them. Of course, this will depend on your views on cause prioritisation.
  • Once I've looked into the above lists and thought about this more, I might improve this comment and make my own top-level post at some point. I was planning to do that at some point anyway but you forced my hand (in a good way) by making your own post.
  • I'm definitely interested in the research you may do related to topic 10 on your list, so please keep me in the loop!

Thanks for sharing those topic ideas, links to resources, and general thoughts on the intersection of history research and EA! I think this post is made substantially more useful by now having your comment attached. And your comment has also further increased how excited I'd be to see more EA-aligned history research (with the caveats that this doesn't necessarily require having a history background, and that I'm not carefully thinking through how to prioritise this against other useful things EAs could be doing).

If you do end up making a top-level post related to your comment, please do comment about it here and on the central directory of open research questions.

It's long been on my to-do list to go through GPI and CLR's research agendas more thoroughly to work out if there are other suggestions for historical research on there. I haven't done that to make this post so I may have missed things.

Yeah, that sounds valuable. I generated my list of 10 topics basically just "off the top of my head", without looking at various research agendas for questions/topics for which history is highly relevant. So doing that would likely be a relatively simple way to make a better, fuller version of a list like this.

Hopefully SI's work offers a second example of an exception to the "recurring theme" you note in that 1) SI's case studies are effectively a "deeper or more rigorous follow-up analysis" after ACE's social movement case study project -- if anything, I worry that they're too deep and rigorous and that this has drastically cut down the number of people who put the time into reading them, and 2) I at least had an undergraduate degree in history :D

Yeah, that makes sense to me. I've now edited in a mention of SI after AI Impacts. I hadn't actively decided against mentioning SI, just didn't think to do so. And the reason for that is probably just that I haven't read much of that work. (Which in turn is probably because (a) I lean longtermist but don't prioritise s-risks over x-risks, so the work by SI that seems most directly intended to improve farm animal advocacy seems to me valuable but not a top priority for my own learning, and (b) I think not much of that work has been posted to the Forum?) But I read and enjoyed "How tractable is changing the course of history?", and the rest of what you describe sounds cool and relevant.

Focusing in on "I worry that they're too deep and rigorous and that this has drastically cut down the number of people who put the time into reading them" - do you think that that can't be resolved by e.g. cross-posting "executive summaries" to the EA Forum, so that people at least read those? (Genuine question; I'm working on developing my thoughts on how best to do and disseminate research.)

Also, that last point reminds me of another half-baked thought I've had but forgot to mention in this post: Perhaps the value of people who've done such history research won't entirely or primarily be in the write-ups which people can then read, but rather in EA then having "resident experts" on various historical topics and methodologies, who can be the "go-to person" for tailored recommendations and insights regarding specific decisions, other research projects, etc. Do you have thoughts on that (rather vague) hypothesis? For example, maybe even if few people read SI's work on those topics, if they at least know that SI did that research, they can come to SI when they have specific, relevant questions and thereby get a bunch of useful input in a quick, personalised way.

(This general idea could also perhaps apply to research more broadly, not just to history research for EA, but that's the context in which I've thought about it recently.)

Thanks! And, of course, I understand that our lists look different in part because of the different cause areas that we've each spent more time thinking about. Glad we could complement each other's lists.

Focusing in on "I worry that they're too deep and rigorous and that this has drastically cut down the number of people who put the time into reading them" - do you think that that can't be resolved by e.g. cross-posting "executive summaries" to the EA Forum, so that people at least read those? (Genuine question; I'm working on developing my thoughts on how best to do and disseminate research.)

Huh, weird; I'm not sure why I didn't do that for either of the case studies I've done so far -- I've certainly done it for other projects. At some point, I was thinking that I might write some sort of summary post (a little like this one, for our tech adoption case studies) or do some sort of analysis of common themes etc., which I think would be much more easily readable and usable. I'd definitely post that to the Forum. For us, though, I don't think posting to the Forum would make a lot of difference. This is mainly because my impression/intuition is that people who identify with EA and are focused on animal advocacy use the EA Forum less than people who identify with EA and are focused on extinction risk reduction, so it wouldn't increase the reach to the main intended audience much over just posting the research to the Effective Animal Advocacy - Discussion Facebook group and our newsletter. But that concern probably doesn't apply to many of the suggestions in your initial list.

Perhaps the value of people who've done such history research won't entirely or primarily be in the write-ups which people can then read, but rather in EA then having "resident experts" on various historical topics and methodologies, who can be the "go-to person" for tailored recommendations and insights regarding specific decisions, other research projects, etc.

I think there's some value in that. A few concerns jump to mind:

  • Historical case studies tend to provide weak evidence for a bunch of different strategic questions. So while they might not single-handedly "resolve" some important debate or tradeoff, they should alter views on a number of different questions. This means a lot of that value will just be missed if people don't actually read the case studies themselves (or at least read a summary).
  • While I think I'm pretty good at doing these case studies to a relatively high standard in a relatively short amount of time (i.e. uncovering/summarising the empirical evidence), I don't think I'm much better placed than anyone else to interpret what the evidence should suggest for individual decisions that an advocate or organisation might face.
  • In practice, I've hardly ever had people actually ask me for this sort of summary or recommendation. Off the top of my head, I can only think of two occasions where this has happened.

If you do end up making a top-level post related to your comment, please do comment about it here and on the central directory of open research questions.

Slight tangent from the discussion here, but you might like to add "and their summary of 'Foundational Questions for Effective Animal Advocacy'" after where you've listed SI's research agenda on that post. This is essentially a list of the key strategic issues in animal advocacy that we think could/should be explored through further research. Once I've published my literature review on artificial sentience, I'd be keen to add that too, since that contains a large list of potential further research topics.

Thanks for those answers and thoughts!

And good idea to add the Foundational Questions link to the directory - I've now done so.

Mini meta tangent: Part of me wanted to call this “10 history topics it might be very valuable to investigate”. But I felt like maybe it’s good for EA to have a norm against that sort of listicle-style title, which promises a specific number of things, as such titles seem to be oddly enticing. It seems like maybe posts with that sort of title would grab more attention, relative to other EA Forum posts, than they really warrant. (I don't mean that any article with such a title would warrant little attention, just that they might get an "unfair boost" relative to other posts.)

I think my feeling on that was informed in part by Scott Alexander's writing on asymmetric weapons, in which he says, among other things:

Logical debate has one advantage over narrative, rhetoric, and violence: it’s an asymmetric weapon. That is, it’s a weapon which is stronger in the hands of the good guys than in the hands of the bad guys.

In this case, it's not about good guys vs bad guys, but about more useful vs less useful posts. Perhaps we should try to minimise the number of things that boost the attention an article gets other than things that closely track how useful the article is.

Meanwhile, I recently published a post I called 3 suggestions about jargon in EA. Maybe, with this in mind, I should’ve called that “Some suggestions about jargon in EA”, to avoid grabbing more attention than it warranted. (I didn't really think about this issue when I posted that, for some reason.)

Does anyone else have thoughts on whether EA should have a norm against listicle-style numbered titles, or on whether we already implicitly do have such a norm?

(By the way, I didn’t specifically aim to have 10 history topics in this post; it just happened to be that 9 initially came to mind, and then later I was thinking about the malevolence post so I added a 10th topic related to that.)

We also had this choice with our other problems and other paths posts, and decided against the listicle style, basically for the reasons you say. I think there is a nascent/weak norm, and think it makes sense to uphold it. The main argument against is that it is actually kind of helpful to know if something is a long list or a short list -- especially if I have a small bit of time and won't want to start something long.

Yeah, so:

  1. I think announcing the size of a list ahead of time is a net good.
  2. I prefer relevant numbers to vague words. On balance I think a listicle-style numbering system is better than ambiguous counting words like "some", "several", "many", etc.
  3. I don't find it very plausible that a straightforward declaration of the size of a list tricks people into reading things they otherwise ought not to have (while I agree for phrases like "Number 5 will SHOCK you," or outrage bait).

One reason against listicle-style posts for 80k is that they're likely seen as lower status by your target audience, and 80k has significant image/PR considerations for your public output, an issue that I think is relatively much less important for the EA Forum.

[My comment kind-of seems insanely long for this topic, but "what to call a post" is a decision I'll probably face hundreds of times in future, so it seems worth thinking about it more.]

I expect we disagree, though I'm not sure how much, and I'd be interested in trying to find the crux and seeing whether one of us will end up changing our minds.

First, I'll gesture at my broader views on related topics by adapting a comment I wrote in a doc. Though this comment isn't specifically on numbered list titles, and I'd say different things about numbered list titles specifically. (This comment was prompted by someone suggesting the doc be given a more "fun" title.)

I agree that "fun" titles get more readership, but I think that this is bad and should not be exploited, and I think that it's good that the Forum already has more of a mild norm against exploiting this.

To expand: I think our goal should be for people to (1) read whatever is most useful to them, or perhaps whatever they can provide the most value to (via comments, contacting the author to share thoughts, etc.), and also for people to (2) spend the appropriate amount of time on reading vs other activities.

For that goal, it could help to be more attention-grabbing/fun, if either (a) we have good reason to think our thing is what's best for them to read but that they'll fail to appreciate that by default, or (b) people are reading things like Forum posts less than they should (e.g., they get bored and go play video games).

But I think we should be skeptical about (a), just as we'd be skeptical of other people saying it. And I don't think (b) is as big an issue as the choice of what to read (plus some people may read the Forum more than they should, though I'd guess that's less common than reading it less than they should, even among EAs).

So I think we should aim for titles to basically just make the scope and purpose clear and not be especially boring or long.

If we can get more attention-grabbing-ness without sacrificing clarity of purpose and scope, that could sometimes be worthwhile, but I'm wary even of that. E.g., for this reason, I called a post "Some history topics it might be very valuable to investigate" rather than "10 history topics it might be very valuable to investigate", even though it literally did happen to be 10.

And if we'd have to sacrifice clarity, then I think that's very rarely worthwhile.

Relatedly, I think the Forum's mild norm against exploiting attention-grabbing-ness is a good thing and should be maintained. Even if we believed (a) and it was true, probably a bunch of other people would believe similar things about themselves. And we're all better off if we just make it as easy as possible for readers to work out what it'd make sense for them to read, rather than trying to entice them or grab their attention.

I think it's sort-of like we're in a community-level prisoner's dilemma, and currently we're cooperating, and we should stick with that.

(Here I partly have in mind Scott Alexander's writing on asymmetric weapons:"Logical debate has one advantage over narrative, rhetoric, and violence: it’s an asymmetric weapon. That is, it’s a weapon which is stronger in the hands of the good guys than in the hands of the bad guys.")

And in this case, I think [person's] suggestion would indeed sacrifice clarity - [reason why the "fun" title would make the scope and purpose of the post less immediately clear than other titles].

---

So I guess there were really three key downsides I was pointing to in that comment:

  1. Optimising partly for attention-grabbing-ness in titles might in expectation worsen readers' choices about what to read and how long to spend reading things in a given venue, because it sometimes trades off against being clear about the purpose and scope of a piece
  2. Optimising partly for attention-grabbing-ness in titles might in expectation worsen readers' choices about what to read and how long to spend reading things in a given venue, because it introduces an "attractor" that isn't necessarily at all correlated with how worth-reading something is for a given person
  3. Optimising partly for attention-grabbing-ness is kind-of like defecting on a prisoner's dilemma, and may lead other people to do the same, which could be bad even if it really is true that what "you" are writing is especially worthy of readers' attention and "your" title choice would still be clear

When I wrote the above comment, it was in a situation where I think all 3 of those downsides applied. Just swapping "Some [blah]" for "10 [blah]" doesn't face downside 1, which makes it much better. So I'm probably actually more ok with numbered list titles than some other types of "fun" titles (e.g., off the top of my head, calling this post "History, the long-term, and you" or "History! What is it good for?"). 

But I think there's also a fourth possible downside:

  • 4. Using a numbered list title for a post may also make you, or other people, more likely to write posts suited to numbered list formats, or squeeze other posts into that format (and maybe into neat or not-too-high numbers). This would happen if numbered list titles increased attention in expectation, so that people were incentivised to use them, and perhaps would even struggle to "compete" for attention if they didn't use them in an environment where everyone else did.

So I think, when it comes to numbered list titles for posts that do fit that mould, the possible downsides are 2, 3, and 4, and how strong those downsides are depends to a significant extent (though not entirely) on the extent to which numbered list titles grab attention in a way that isn't correlated with what's useful for the reader or where the reader can provide value. 

So, if you basically agree with the above framings in theory, maybe the crux between you and me is just the empirical question of the extent to which numbered list titles do that. (Related to your statement "(3) I don't find it very plausible that a straightforward declaration of the size of a list tricks people into reading things they otherwise ought not to have (while I agree for phrases like "Number 5 will SHOCK you," or outrage bait)". Though I'm not sure I like the word "tricks".)

Do you think that's the crux? If so, maybe google could quickly reveal answers to that empirical question? 

---

Also, do you think you agree with my views when it comes to "fun" titles that are unclear as to purpose and scope (e.g., "History, the long-term future, and you"), separate from the question of numbered list titles?

---

(Also, I should note that all of this is specifically in the context of the EA Forum - I'm more open to "attention-grabbing" strategies in other contexts. I could elaborate if readers are interested, but Linch and I already discussed that separately.)

Hmm, I have two somewhat different claims first, before engaging with the structure of your argument.

  1. The numbered list format is actually just a great communicative technology, both in general and especially well-adapted to the internet age.
  2. In a nonadversarial context, honest signaling of traits is all-else-equal very good.

It's possible you already agree with these points, so I will use them as assumptions and won't defend them further here unless requested.

extent to which numbered list titles grab attention in a way that isn't correlated with what's useful for the reader or where the reader can provide value

I think this is a large difference between us, but not by itself the ultimate crux. I think there's another consideration:

extent to which numbered list titles are superior to other titles in their ability to provide value to the user.

I agree that if the costs of numbered list titles are high enough, we probably shouldn't use them. However, I think our evaluations of the benefits of numbered list titles diverge as well.

Broadly, I think numbered list titles are useful honest signaling^ to quickly tell readers whether to engage with a post (and also helpfully whether to stop reading a post, after they read the first point in a list). 

For another example: 

Using a numbered list title for a post may also make you, or other people, more likely to write posts suited to numbered list formats, or squeeze other posts into that format

To me (to the extent that your observation is correct), this is evidence that the marketplace of ideas desires more articles well-suited to numbered list formats, so it's all-else-equal evidence that people ought to write more numbered lists. (Analogously, if it turns out a lot of EA people like listening to podcasts, I think this is evidence that more EAs should make podcasts, even if I personally hate podcasts.) So something that to you is a cost I mostly think of as a benefit. (I don't think the EA marketplace of ideas is always accurate; for example, it rewards culture war posts more than I think is fair.)

Ultimately, if it turns out that numbered list titles attract people to reading posts that they otherwise wouldn't have read, and this is a bad use of their time, I agree that this is usually a bad idea.

I think my main heuristic for preferring numbered titles, when the article is essentially a numbered list to begin with, is that honest signaling is usually good, both in general and specifically for the question of EA articles. There are exceptions of course,^^ but I'm not convinced that one of the core exceptions applies here.

^The honest part here is relevant to me, which is why I originally contrasted with the phrase "tricked." 

^^ For example, if the signaling is very costly (broadly, getting a PhD or becoming a chess grandmaster just to signal intelligence and conscientiousness; narrowly, spending a bajillion hours to make a post look closer to an academic preprint even when academia is not your target audience), or when the signaling causes fixation on a trait we don't care about (broadly, it's bad to include glamor pictures unnecessarily online to take advantage of physical attractiveness; narrowly, it's prima facie bad to include unhelpful Latin phrases to signal erudition).

Hmm. It seems like we've indeed identified the two cruxes. 

Regarding the benefits: I don't see why numbered list titles would help a reader make good decisions about whether to engage with a post? In particular, given that they could in any case use the title (whatever its form), the summary-type thing (which ideally there would be) and/or the first paragraph (if that's different), the word count / scrolling to see how long it seems, and the karma and comments?

Regarding the extent to which numbered list titles grab attention in a way that isn't correlated with what's useful for the reader or where the reader can provide value: Maybe at some point I should look for empirical evidence, or at least better theorising, regarding this. Currently I think we just have different intuitions/anecdata.

In particular, given that they could in any case use the title (whatever its form), the summary-type thing (which ideally there would be) and/or the first paragraph (if that's different), the word count / scrolling to see how long it seems, and the karma and comments?

Maybe I'm missing something, but I feel like this is a fully general argument against all informative title names?


Maybe at some point I should look for empirical evidence, or at least better theorising, regarding this. Currently I think we just have different intuitions/anecdata.


I agree that we have different intuitions and empirical data may help resolve this.

Maybe I'm missing something, but I feel like this is a fully general argument against all informative title names?

I don't think so - I think it's quite clear how it's easier for a reader to make good choices about whether to read this post if it's called "Some history topics it might be very valuable to investigate" than if it was called (for example) "Topics" or "History stuff" or "What you can do with history". But I just don't immediately see why changing it from the current title to "10 history topics it might be very valuable to investigate" would help the reader make good choices?

It seems like whether it's about history and whether it's research topics is useful info, but whether it's 3 or 10 or 20 isn't very useful, especially given that I probably could've included roughly the same content under 3 topics or split it up into 20.

And then the word count / scrolling is relevant because, if the consideration is "how long will this take me?", then word count / scrolling seems to address that better than reading one topic and multiplying by the stated number of topics. (The latter requires reading a topic before deciding, and the topics may actually differ in length.)

(Of course, there may be a reason I'm missing; I wouldn't be that surprised if you said one sentence and then I went "Oh yeah, fair point, I should've thought of that.")

Oh, another very broad category of topics that I perhaps should've mentioned explicitly is the history of basically any specific topic EAs care about. E.g., history of concerns about animal welfare, arguments about AI risk and AI safety, the randomista movement, philanthropy ...

People interested in this post may also be interested in 80k's recent interview with Tom Moynihan "on why prior generations missed some of the biggest priorities of all". Here's a relevant excerpt:

---

Rob Wiblin: Let’s push on and talk about intellectual history as a high-impact career. So yeah, as far as I know, we’ve never had a historian on the show before, which obviously means we haven’t had an intellectual historian either. And I think you’re reasonably familiar with 80,000 Hours’ goal of trying to help people have a bigger social impact with their career. Do you think some listeners should consider intellectual history as a potentially valuable career path to go on? And if so, why?

Tom Moynihan: Yeah, so I think that insofar as longtermism, EA-aligned with longtermism is about affecting the far future, trying to shape it positively, I think that there is actually a good case for history — not necessarily intellectual history, but history broadly — to play a bigger role in this new way of approaching priorities based on long arcs of history. So I do think that it can be impactful in the sense that it can actually derive lots of information value, so you can gain these nice insights and these things that we’ve been talking about a lot of, you almost get a sense of the heuristics of background assumptions or ‘crucial considerations.’ So that’s the term that Bostrom uses to basically describe a piece of information or knowledge that changes your whole priorities. So I think the example he uses is, if you’re lost in the woods and you’re using a compass, then you realize your compass is broken, that’s a crucial consideration.

Rob Wiblin: I guess the idea being that it doesn’t just mean that you should go one degree to the right, it means that everything is thrown into doubt.

Tom Moynihan: Exactly. So I think that it can be high-impact in the sense that there is a lot of information value to be derived here. I’ve noticed, there’s a post on the EA Forum of potentially valuable research areas in history, I think those are all brilliant. I think there was also an 80,000 Hours post talking about non-standard careers outside of the major priority areas, and one of them was a historian with a long-term arc of history specialization. Yeah, and so I think that there’s definitely a scope for this. I think that historians tend to not be EA-aligned, so there’s value for EAs to go and become historians and figure out the useful stuff. However, it is a non-standard career, and it’s also highly competitive and risky. If you want to reliably have an impact, it’s definitely not one to be advised. But if you want to take a big risk, maybe yeah.

Rob Wiblin: I think that’s probably the case with almost all academic or research careers — that, again, it’s hits based. Most researchers don’t have a massive social impact, but some of them really make important discoveries. I guess I would encourage people to make peace with that, rather than just try to play it super safe, because that limits your options so severely.

Tom Moynihan: Yeah, so I think that that’s the basic message that I want to give, is that, I think there’s scope for a lot of valuable information to be mined here, but the problem with that is that you don’t know what information… Particularly when you’re trying to find these long-term arcs or these stories about intellectual, moral, material, economic progress, you don’t know what you’re looking for ahead of time. I know that applies to almost all search functions in a sense, but you don’t know if it’s going to pay out in the end. So yeah, I do think because of that non-standardness and riskiness, I think that’s definitely something to consider. But I do think that, to put it simply, in EA and longtermism there’s so much history, there’s so much talk of history. The hinge of history, moral circle expansion, these are all historical ideas, progress itself is a historical idea. So there’s definite scope to do valuable work here.

[...]

Rob Wiblin: Are there any particularly valuable questions within intellectual history that you’d like to see people investigate that haven’t already come up? We’ve talked about quite a few.

Tom Moynihan: So, I think the question of the history of moral circle expansion and what are the causes that shift people’s intuitions outside of the kind of baseline of prejudice. Because you can go back and find people arguing for various very forward-thinking moral positions earlier than that actually spilled out and became a wider movement or a wider cause. So, something must’ve happened to create that critical mass. I think that that could be interesting. The Needham question that I spoke about earlier, this question of why one civilization you have over here, it’s actually far more technologically advanced…how they have something like a steam engine to open doors in the palaces, but just decide not to use it elsewhere. And then you have another civilization where something happens there that means that science locks in as an institution that perpetuates itself, perpetuates knowledge in a unique way.

Tom Moynihan: What are the institutions behind that? I think that there’s interesting research to be done there. Of course, and this is one that I’ve seen in various places, is researching the dynamics of lock-in. So, instances in the past where we can see clear path dependence in culture, values, et cetera. Also, again, another obvious one is studying the rise and fall of civilizations. What creates civilizational resilience? I think that making inductions from the past is dangerous because when civilizations have risen and fallen in the past, often it wasn’t a globalized technological civilization in the same way we have today, but people who have worked in this field have already kind of made that point. Tech trees, I think, are a really important and interesting place to try and look at. In a sense recreating the evolutionary tree of life, trying to do that for technology. And then that leads me to the final one that I find really interesting.

Tom Moynihan: And this comes from an idea that I got from a researcher Karim Jebari. He has, I think it’s a preprint paper currently, but it’s called Replaying history’s tape. And he’s taking those ideas of contingency and convergence from the biological sciences and seeing if they can be meaningfully applied to civilization and cultural progress and technological progress. So, as I was saying earlier, I think we do tend to overestimate the convergence or the recurrence or repeatability of a lot of insights, ideas, technologies. And the line is of course blurred. I would really love a map of cultural progress across different cultures, civilizations, and trying to map how convergent some might be.

Tom Moynihan: And obviously the way of measuring this, and Jebari kind of puts this forward in the paper, is that if you can see a cultural practice appearing independently in lots of places, you can kind of presume that it’s convergent in the same way as evolution. It becomes interesting because then when you get a more globalized society, it can appear in one place and then spread. So, there’s lots of interesting questions there I think to be had. And then, again, that can affect our judgments of how severe certain collapse events or very destructive global catastrophic risks are. So, I think there’s lots to be done there.

Comment that came in from the EA Newsletter:

"I’m writing a PhD on alumni engagement with effective altruism as the philosophical background. I’m comparing six top 100 ranked universities in the world and their alumni engagement. The universities are Harvard, Penn State, Cambridge, Vienna, Uppsala, Helsinki. I am interested in any seminar or discussions about implementing effective altruism and historical research as I have been doing that myself for the past four years. I’m writing the PhD for the University of Helsinki for professor Laura Kolbe, I myself live on the Åland Islands." 

Pia Widén (pia.widen at aland.net)

Thanks for sharing that, Aaron :)

Pia also joined (and introduced herself in) the History and Effective Altruism Facebook group. Hopefully someone there or here can connect with her regarding the intersection of EA and historical research. And I can imagine a seminar/event on that being cool - I'd be keen to attend that as well if someone sets it up!

Great write-up, though I feel slight regret reading it, as there are now a further 10 things in my life to be annoyed I don't know more about!


Maybe it would be valuable to try crowdsourcing research such as this?

Start a shared G Suite document where we can coordinate and collaborate. I would find it fairly fun to research one of these topics in my free time, but doubt I'd commit the full energy it requires to produce a thorough analysis.

I could state publicly, somewhere others can see, that I'm willing to work 7 hours a week on, e.g., studying societal collapse. Then someone else looking to do the same can coordinate and collaborate with me, and we could potentially produce a much better output.

Even if collaboration turns out to be unfruitful, coordination might at least prevent double work.

That definitely sounds good to me. My personal impression is that there are many EAs who could be doing some good research on the side (in a volunteer-type capacity), and many research questions worth digging into, and that we should therefore be able to match these people with these questions and get great stuff done. And it seems good to have some sort of way of coordinating that.

Though I also get the impression that this is harder than it sounds, for reasons I don't fully understand, and that mentorship (rather than just collaboration) is also quite valuable.

So I'd suggest someone interested in setting up that sort of crowdsourcing or coordination system might want to reach out to EdoArad, Peter Slattery, and/or David Janku. The first two of those people commented on my central directory for open research questions, and David is involved with (runs?) Effective Thesis. All seem to know more than me about this sort of thing. And it might even make sense to somehow combine any new attempts at voluntary research crowdsourcing or collaborations with initiatives they've already set up.

Good list! For next steps, I'd like to see one-pager research proposals, detailing gaps in the literature and the value-added of new work.

Yeah, that seems a good next step to me too. 

Another potentially good next step: Just collect relevant sources (as I've done for various topics here), or go a little further by making annotated bibliographies (as Vaidehi has done for EA analyses of social movements). One could limit this to just sources by historians, or could include any sources that seem relevant.

And another potentially good next step would be to write some sort of literature review, or a collection of semi-polished notes. It might make sense to start small, just reading a few papers, not writing that many pages, and not worrying too much about polish. This could help highlight relevant sources and draw some tentative conclusions about how valuable further work would be. But this also might be an ok end-goal for many topics, as often we might be fine with just summaries of the work that exists and its implications for EA, rather than "original research".

I'd encourage anyone interested in these topics to consider taking any of those next steps (including writing research proposals)!

Also, regarding the research proposals idea: I happen to have already essentially written research proposals regarding collapse & recovery, moral circle expansion, and sort-of some other topics. These range in length from a few paragraphs to a few pages. They've been written with job and grant applications in mind, so I haven't posted them to the forum yet. But I plan to post them in the coming weeks/months, and I'm happy to share them as they stand with anyone who's interested. (I have lots of ideas, so if anyone is really excited to pursue something like one of my ideas, I'm happy for them to take that one. Also happy to collaborate, get feedback, etc.)

(Typically I'm not thinking of history alone, but rather some blend of history, psychology, political science, and often other disciplines. But history tends to be in the mix.)

Great post!

I would love to see a study on the history of wellbeing and suffering. This is perhaps more challenging, as how people suffered in the past is (arguably) poorly understood (as is exactly how/why people suffer today!). But a first-order approximation could look at generic factors that we expect to correlate with wellbeing, such as the proportion of people living in slavery or servitude; the personal freedoms people had; the levels of violence; and so on. Then a more detailed study -- which would probably require expertise beyond history -- could look at more direct (but historically harder to find) indicators of wellbeing such as mental health, self-reported happiness, suicide rates, etc.

Thanks!

Yeah, I share the view that either more research on that topic or a summary of existing work for EAs would be valuable. (I imagine a lot of relevant work on that already exists, but I've been wrong about such things before, and in any case it could be good for someone to read it and extract the most EA-relevant insights.)

I think I'd see this as an (important) subset of "1. The history of various types of growth and progress (economic, intellectual, technological, moral, political, etc.)". Would you agree?

(That wouldn't negate the value of your comment - many of the topics I listed are very broad, and this post becomes more useful to people if commenters break them down into more specific topics, suggest ways they could be investigated, etc.)

I agree that the "first approximation" I mentioned -- looking at generic factors that we expect to correlate with wellbeing, such as slavery or servitude, personal freedoms, violence -- would be a subset of "1. The history of various types of growth and progress...".

But I feel like a more detailed investigation of wellbeing/suffering through history lies outside of "1. The history of various types of growth and progress...". I say this because what we call "progress" does not necessarily correlate with wellbeing/suffering. And I think this *might* lead charities and movements such as EA to overestimate the effects of intuitively useful interventions. I should add that this is speculative and potentially controversial! But I feel that there are important questions that haven't been fully tackled: Does growth really improve wellbeing? Does increasing life expectancy really reduce suffering, or does it make people overly sensitive to death? Were people in previous centuries -- when violence and disease were high -- as unhappy as we'd expect if we just look at these factors? Or are there more subtle factors that affect happiness?

Sometimes I feel like "progress" is about "satisfying people's stated preferences" rather than "making people happy". And what we think we want isn't always what makes us happy!

So rather than looking directly at violence, growth, death rates, etc. (which I expect has been done many times), I'd like to see a detailed study that looks at more direct indicators of wellbeing such as mental health, self-reported happiness, suicide rates -- and many more. And then a comparison between this and the usual "progress" studies. Perhaps this has also been done, though, and I've missed it.

Anyway, I'd be very interested to hear what you think, as I've not properly discussed these ideas before!

I have a draft, which I'll hopefully publish in the coming weeks/months, on "Will humanity achieve its full potential, as long as existential catastrophe is prevented?"

I think an argument in favour of "Yes" is that it might be highly likely that, if we don’t suffer an existential catastrophe, there will be positive trends across the long-term future in all key domains. And I think that that argument could in turn be supported by the argument that such trends have been the norm historically, or that human agency will ensure such positive trends.

So I thought a bit about how true that seems to be. I'll quote the relevant part of the draft, as it seems somewhat relevant here. (Note that I'm not an expert and barely did any googling; this was based on intuitions and what I happened to already know/believe.)

---

  • I believe there’s strong evidence that there have been positive trends in many domains in many periods and places before the Industrial Revolution. Relevant domains may include violence levels, the size of people’s moral circles, and use of reason and scientific thinking.
    • See e.g. The Better Angels of Our Nature.
  • I believe there’s some evidence that this represents a fairly widespread pattern. But I’m less certain of that. And there’ve definitely been “negative” trends in certain domains, times, and places (e.g., [insert example here; I have some ideas but should Google them]).
  • I believe there’s strong evidence that, since sometime around the Industrial Revolution, there have been positive trends across most of the world and in most domains that matter.
  • But even since the Industrial Revolution, there have been at least some negative trends or stagnation in some domains, times, and places. And these might include some of the “most important” domains, times, and places in relation to evaluating the FINE hypothesis.
    • Here are some plausibly important domains where I think there’s at least some evidence of negative trends recently in the developed world:
      • Human-caused animal suffering (especially on factory farms)
      • Political discourse
      • Political polarisation
      • Respect for science, scientists, and/or truth
      • Mental health [maybe also suicide rates? should google this]
      • Drug abuse
      • Incarceration rates (perhaps especially or only in the US)
      • Economic inequality
    • There were also some negative trends in particular domains, times, and places that were later reversed, but seem like they plausibly could’ve become quite lastingly bad. E.g., various trends in Germany and Russia leading up to and during WWII.
    • And there are plausibly important domains for which I’m not aware of evidence of substantial progress recently (e.g., democratisation in China).

Overall, I think historical trends are more consistent than inconsistent with the [argument that, if we don’t suffer an existential catastrophe, there will be positive trends across the long-term future in all key domains]. But the matter isn’t totally clear-cut, and would likely benefit from much more detailed analysis.

"Will humanity achieve its full potential, as long as existential catastrophe is prevented?"
I think an argument in favour of "Yes" is that it might be highly likely that, if we don’t suffer an existential catastrophe, there will be positive trends across the long-term future in all key domains.

That there will be positive trends doesn't necessarily entail that humanity (or some other entities) will achieve their full potential, however. It's possible that the future will be better than the present without humanity achieving its full potential. And the value difference between such a future and a future where humanity achieves its full potential may be vast.

I agree that there is an historical argument for positive future trends, but it seems that one needs additional steps to conclude that humanity will achieve its full potential.

Yeah, I definitely agree. This was part of my motivation for writing that draft. (Also, even if just "positive trends" were enough - which I agree it isn't - finding that there were positive trends in the past doesn't guarantee there will be positive trends in the future.)

More broadly, my impression is that some EAs are very confident the answer to the titular question is "Yes", and I feel like I haven't seen very strong arguments for such high confidence.

The draft is not necessarily arguing in favour of "Yes" (or "No") overall; it's primarily intended to highlight the question and stimulate and scaffold discussion.

(Happy to share the draft, if you or others are interested.)

Thanks, yes I'd be interested.

Ok, I've sent you a message :)

I think these are good points.

Stepping back first: I'm quite morally uncertain, but the moral theory I have the highest degree of belief in is "something like classical hedonistic utilitarianism, with a moral circle that includes basically all sentient beings, across any point in time". (My moral circle therefore may or may not include mammals, insects, digital minds, etc., depending on whether they "empirically turn out to be sentient" - though it's quite unclear what that means. For expected value reasons, concerns about digital minds, insects, etc. play a substantial role in my priorities.)

The classical hedonistic utilitarianism bit (setting aside the moral circles bit) makes me very strongly inclined to agree that: 

  1. what really matters is how (human) wellbeing has changed over time, and 
  2. it's unfortunate that discussion/studies of "growth" and "progress" often focus on things that may not be strongly correlated with (human) wellbeing. 

I'd say the focus is, as you suggest, often on "satisfying people's stated preferences". But I'd even go further and say that it's often on one of the following things:

  • what the person in this discussion or doing this study thinks is a typical or ideal preference
  • what that person themselves thinks is terminally valuable (regardless of preferences)
  • whatever is easiest to measure/discuss and seems plausibly related somehow to wellbeing, preferences, or valuable things

Two EAs who've done what seems to me good work in relation to subjective wellbeing, its measurement, and its correlation with other things are Michael Plant and Derek Foster. (Though I don't think they focused much on history.) 

...but then there's the moral circles bit. This makes me think that (a) human wellbeing is unlikely to be a dominating concern, and (b) wellbeing at the moment or so far is unlikely to be a dominating concern. 

So I care about present-day human wellbeing primarily to the extent that it correlates with across-all-time, across-all-sentient-life wellbeing. And this means that, for instrumental reasons, I probably actually should pay more attention to other proxies, like GDP or technological developments, than to wellbeing. (This doesn't mean it's clear to me that GDP growth or technological developments tend to be good, but that they're likely important, for good or ill. See differential progress.)

So, in contrast to what I might have said a few years ago when my moral circle hadn't expanded to consider nonhumans and future beings more, I wouldn't personally be extremely excited about historical analysis of changes in human wellbeing over time, and what affected those changes. But:

  • I think that'd be quite exciting from a human-centric, non-longtermist perspective
  • I think it's still net-positive, and maybe quite positive, from my perspective, because understanding this may help us make various predictions about important aspects of the future, and work out how we should intervene
    • I'll sort-of elaborate on this in a separate comment

You've made some really good points here and I agree with most of them! And we're on the same page in terms of "hedonistic utilitarianism, with a moral circle that includes basically all sentient beings, across any point in time".

I guess my main motivation for wanting to see a historical study of wellbeing is that I feel that, to fully understand what makes humans happy, it is valuable to consider a wide range of possible human life experiences. Studying history does this: we can consider a wide range of societies, lifestyles, circumstances, etc., and ask which humans were happy and which were suffering. Comparing this to standard "progress" measures such as violence and life expectancy can help us understand whether interventions to improve such measures are the best we can do. This can then help us design and implement future strategies to improve wellbeing going forward.

Thank you for writing this up!

Ben Garfinkel writes:

I would be interested in an investigation into the history of existential risk concerns around nanotechnology and the lessons it might hold for the modern AI risk community.

The relevant section of that doc is quite interesting, and I recommend reading it. I raise this here since it's somewhat relevant to "6. The history of predictions (especially long-range predictions and predictions of things like extinction), millenarianism, and how often people have been right vs wrong about these and other things".

A friend of mine said, essentially, that:

  • A lot of the topics in this post seem like just "the history of EA-related idea X"
  • For some (but not all) of these topics, my friend doesn't really see a clear path to impact, and they think one would need to flesh out the case for why the history of X is particularly important to understand

I think those are basically fair points, but I'm fairly excited about research into these topics despite them. Here's the response I wrote to that friend of mine, which might be useful to other people who are trying to think about the value of EA-aligned history research (whether or not on these topics).

Yeah, I'd agree that my post doesn't explicitly outline paths to impact - or at least not very concrete ones - and that fleshing out and critiquing potential paths to impact would be a logical and useful early step. (The post was meant mainly as a starting point.)

But I'd be surprised if quality research into each of those topics didn't turn out to be at least fairly useful. (But that sentence could be said about way more things than EAs have time to research, so fleshing out the paths to impact would still be useful for prioritisation, as well as for crafting more specific research directions, making dissemination plans, etc. See also.)

The reason I'd be surprised is partly because of roughly the following generic argument:

"Understanding the history of a topic often seems to help in: 

  • Predicting what will happen in future in relation to that topic
  • Thinking about what one could/should do to intervene in that area (including noticing common mistakes/pitfalls/downside risks)
  • Thinking about what to do in relation to other topics that might be affected by this topic (e.g., maybe understanding things to do with AI, bio, and nuclear risks should influence which countries we prioritise engagement with or movement-building in or how we do that)

And it seems reasonable to assume that that'll be true for a given topic unless one has reason to believe otherwise.

So if a topic seems potentially quite relevant to efforts to improve the expected value of the long-term future, then understanding the history of it better will probably be useful."

(But there are definitely more than 10 topics that fit that description, so it could definitely be useful to create a longlist of a broader set of topics that seem to fit that description, sketch potential paths to impact for research on them, and get a rough sense of which ones should be highest priority. I'd guess that the "ideal top 10" would differ at least somewhat from what's in this post.)

Update: I've now created the Facebook group History and Effective Altruism, to hopefully serve as one home for people interested in these and other topics at the intersection of EA and history. 

This was prompted by: 

  • me having become even more convinced over the last month of the value of historical research
  • this post being featured in the EA newsletter, alongside a call for historically inclined people to engage on the forum

I'd encourage anyone interested to join that group!
