I created a list of researchers in the existential risk field based on the number of papers they have published, with links to their work. This is not meant as a strict evaluation of the value of their contributions to the field, but as a quick overview of who is working on what. I hope it is helpful for people interested in existential risk studies, whether professionally or personally, and especially for people who are new to existential risk research and want to understand who the established organizations, researchers and topics are.

This is based on The Existential Risk Research Assessment (TERRA). TERRA is a website that uses machine learning to find publications related to existential risk. It is run by the Centre for the Study of Existential Risk (CSER) and was originally launched by Gorm Shackelford. I used their curated list of papers and wrote some code that counted the number of papers per author. You can find the code here, and the results are below.

I am sorry for butchering some names; this is due to the way I had to strip the strings to make them easily countable. As TERRA is based on the manual assessment of automatically collected papers, this list is likely incomplete, but I still think it gives a good overview of what is going on in existential risk studies. If you want to improve the data, feel free to make an account at TERRA and start assessing papers.
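
For readers curious about the counting step, here is a minimal sketch of the approach (not the exact script from the repository; the file name, the "authors" column, and the semicolon separator are assumptions about the export format):

```python
import csv
from collections import Counter

def normalize(name: str) -> str:
    # Crude normalization: lowercase and strip whitespace and trailing
    # periods so that variants like "Baum, Seth D." and "Baum, Seth D"
    # are counted as the same author. This stripping is also what
    # mangles some of the names in the output.
    return name.strip().rstrip(".").lower()

author_counts = Counter()

# "terra_papers.csv" and the "authors" column are illustrative; the
# real TERRA export may be structured differently.
with open("terra_papers.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        for author in row["authors"].split(";"):
            if author.strip():
                author_counts[normalize(author)] += 1

# Print the 25 most frequent authors with their paper counts.
for author, count in author_counts.most_common(25):
    print(f"{count:3d}  {author}")
```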

In the following I curated a list of the top 25 researchers, with links to their Google Scholar profiles (where I could find them), the main existential risk organization they are affiliated with, and a publication that I think showcases the kind of existential risk research they do. If you have the impression that someone in the list would be better represented by another publication, please let me know and I will change it. 25 is an arbitrary cutoff; it does not mean that the person at rank 26 is any worse than the person at rank 25, but I had to stop somewhere. You can find the complete list in the repository.

Here is the list with the links: 

  1. Seth Baum
    1. Global Catastrophic Risk Institute (GCRI)
    2. How long until human-level AI? Results from an expert assessment
  2. David Denkenberger
    1. Alliance to Feed the Earth in Disasters (ALLFED)
    2. Feeding everyone: Solving the food crisis in event of global catastrophes that kill crops or obscure the sun
  3. Joshua M. Pearce
    1. Alliance to Feed the Earth in Disasters (ALLFED)
    2. Leveraging Intellectual Property to Prevent Nuclear War
  4. Nick Bostrom
    1. Future of Humanity Institute (FHI)
    2. Superintelligence: Paths, Dangers, Strategies
  5. Roman V. Yampolskiy
    1. University of Louisville
    2. Predicting future AI failures from historic examples
  6. Émile P. Torres
    1. Currently no affiliation; formerly at the Centre for the Study of Existential Risk (CSER)
    2. Who would destroy the world? Omnicidal agents and related phenomena
  7. Milan M. Ćirković
    1. Astronomical Observatory of Belgrade
    2. The Temporal Aspect of the Drake Equation and SETI
  8. Bruce Edward Tonn
    1. University of Tennessee
    2. Obligations to future generations and acceptable risks of human extinction
  9. Alan Robock
    1. Rutgers University
    2. Volcanic eruptions and climate
  10. Owen Toon
    1. University of Colorado, Boulder
    2. Environmental perturbations caused by the impacts of asteroids and comets
  11. Jacob Haqq-Misra
    1. Blue Marble Space Institute of Science
    2. The Sustainability Solution to the Fermi Paradox
  12. Luke Kemp
    1. Centre for the Study of Existential Risk (CSER)
    2. Climate Endgame: Exploring catastrophic climate change scenarios
  13. Anders Sandberg
    1. Future of Humanity Institute (FHI)
    2. Converging Cognitive Enhancements
  14. Alexey Turchin
    1. Alliance to Feed the Earth in Disasters (ALLFED)
    2. Classification of global catastrophic risks connected with artificial intelligence
  15. Charles Bardeen
    1. National Center for Atmospheric Research
    2. Extreme Ozone Loss Following Nuclear War Results in Enhanced Surface Ultraviolet Radiation
  16. John Leslie
    1. University of Guelph
    2. Testing the Doomsday Argument
  17. Graciela Chichilnisky
    1. Columbia University
    2. The foundations of probability with black swans
  18. Stuart Armstrong
    1. Future of Humanity Institute (FHI)
    2. Eternity in six hours: Intergalactic spreading of intelligent life and sharpening the Fermi paradox
  19. Paul R. Ehrlich
    1. Stanford University
    2. Extinction: The Causes and Consequences of the Disappearance of Species
  20. Hin-Yan Liu
    1. University of Copenhagen
    2. Categorization and legality of autonomous and remote weapons systems
  21. Juan B. García Martínez
    1. Alliance to Feed the Earth in Disasters (ALLFED)
    2. Potential of microbial protein from hydrogen for preventing mass starvation in catastrophic scenarios
  22. David Morrison
    1. Ames Research Center
    2. Asteroid and comet impacts: the ultimate environmental catastrophe
  23. R. Grieve
    1. University of Western Ontario
    2. Extraterrestrial impacts on earth: the evidence and the consequences
  24. Richard S.J. Tol
    1. University of Sussex
    2. Why Worry About Climate Change? A Research Agenda
  25. Olle Häggström
    1. Chalmers University of Technology
    2. Artificial General Intelligence and the Common Sense Argument

To also get an overview of the overall output of existential risk organizations, I added up the publication counts of all researchers at the same organization. This double-counts papers with several co-authors from the same organization, but I still think it gives a rough approximation of each organization's overall productivity.
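
Conceptually, the aggregation works like this (a minimal sketch; the per-author counts and the author-to-organization mapping are made up for illustration):

```python
from collections import Counter

# Hypothetical per-author paper counts (as computed above) and a
# manual author -> organization mapping; all numbers are made up.
author_counts = Counter({
    "seth baum": 10,
    "david denkenberger": 9,
    "anders sandberg": 6,
})
author_to_org = {
    "seth baum": "GCRI",
    "david denkenberger": "ALLFED",
    "anders sandberg": "FHI",
}

# Sum each author's full paper count into their organization's total.
# A paper with several co-authors from the same organization is
# counted once per co-author; this is the double counting noted above.
org_counts = Counter()
for author, n_papers in author_counts.items():
    org = author_to_org.get(author)
    if org:
        org_counts[org] += n_papers

print(org_counts.most_common())
```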

Here are some personal reflections from categorizing research on TERRA. The surge in COVID-19-related papers seems to be dwindling, with fewer such papers in 2022 than in 2021. Interest in AI, on the other hand, seems to be growing: in 2022 there was a noticeable uptick in AI-related papers compared to previous years. My rough impression is that AI-related papers might even constitute around 50% of the papers I categorized as having existential risk potential in 2022, although this is only an estimate, as I lack access to the categorized data.

A particular concern is the lack of representation of women (1 out of 25) in the list. It's unclear what precisely underlies this and how best to address it, but it's undoubtedly a problem. The community should take steps to become more welcoming and supportive, so that women can participate, persist, and excel. Initiatives like mentoring programs specifically aimed at women in the context of existential risk could be a good option here.

I plan to revisit this project in a year's time and provide an update if the new data shows a significant shift from the one shown here.

Comments

I appreciate the effort, but as someone who has attempted a similar analysis in the past, I think it is very hard to extract useful information from this sort of methodology. I think you are mainly loading on essentially random facts about how research is formatted and tagged, rather than on the underlying reality.

As such I think you basically can't really draw much in the way of conclusions from this data. In particular, you definitely cannot infer that the University of Louisville is the fifth most productive existential risk organization. Nor do I think you can infer much about sex; the exclusion of women like Ajeya, whose contributions are definitely more significant than many included in the list, is due to flaws in the data, not social dynamics.

"can't really draw much in the way of conclusions from this data" seems like a really strong claim to me. I would surely agree that this does not tell you everything there is to know about existential risk research and it especially does not tell you anything about x-risk research outside classic academia (like much of the work by Ajeya). 

But it is based on many people's classifications of what they think is part of the field of existential risk studies, and therefore I think it gives a good proxy for what people in the field consider part of their field. Also, this is not meant to be the ultimate list but, as stated at the beginning of the post, a way to give people an overview of what is going on.

Finally, I think that this surely tells you something about the participation of women in the field. 1 out of 25 is really, really unlikely to happen by chance. 

"Finally, I think that this surely tells you something about the participation of women in the field."

It presumably tells you something about the participation of women in the field, but it's not clear exactly what. For instance, my honest reaction to this list is that several of the people on it have a habit of churning out lots of papers of mediocre quality – it could be that this trait is more common among men in the field than among women in the field.

This is just another data point showing that the existential risk field (like most EA-adjacent communities) has a problem when it comes to gender representation. It fits really well with other evidence we have. See, for example, Gideon's comment under this post: https://forum.effectivealtruism.org/posts/QA9qefK7CbzBfRczY/the-25-researchers-who-have-published-the-largest-number-of?commentId=vt36xGasCctMecwgi

On the other hand, there seems to be no evidence for your "men just publish more, but worse papers" hypothesis.

Well, let's have a look at some data that would include Ajeya. If I go to the OpenPhil website and look at the people on the 'our team' page associated with either AI or biosecurity, then out of the 11 people I counted, 1 is a woman (this is based on a quick count, so it may be wrong). (If I count the EA Community Growth (Longtermism) people, the ratio is slightly better, but my impression is that this team's work is slightly further from x-risk research, although I may be wrong.)

If I look at Rethink Priorities, their AI Governance team has 3 women out of 13 people, whilst their existential security team is 1/5. 

For FHI, 7 of the 31 people listed as part of their team on the website are women. If I only include research staff (i.e., excluding DPhil students and affiliates), then 2/12 are women.

For CSER, of the 35 current full-time staff (note this includes administrative staff), 12 are women. Of research staff, 5/28 are women. If I also include the listed alumni (and only count research staff), then 15/44 are women.

So according to these calculations, 9% of OpenPhil, 22% of RP, 16.6% of FHI, and 17% of CSER are women. 

This obviously doesn't look at seniority (Florian's analysis may actually be better for this), although I think it is pretty indicative that there is a serious problem.

FWIW I think your analysis is more representative than FJehn's. 10-20% (or maybe very slightly higher) seems more accurate to me than 4%, if (e.g.) I think about the people I'm likely to have technical discussions with or cite results from. Obviously this is far from parity (and also worse than other technical employers like NASA or Google), but 17% (say) is meaningfully different from 4%.

I'm honestly rather confused about how people can disagree-vote with this. Did I get these stats wrong?

I assume "indicative of a serious problem" is what they're disagreeing with.

In my personal experience you always get downvotes/disagree votes for even mentioning any problems with gender balance/representation in EA, no matter what your actual point is. 

I agree with this.

"Number of publications" and "Impact per publication" are separate axes, and leaving the latter out produces a poorer landscape of X-risk research. 

Yes, especially given that the impact of x-risk research is (very) heavy-tailed.

I would have liked this article much more if the title had been “The 25 researchers who have published the largest number of academic articles on existential risk”, or something like that.

The current title (“The top 25 existential risk researchers based on publication count”) seems to insinuate that this criterion is reasonable in the context of figuring out who are the “Top 25 existential risk researchers” full stop, which it’s not, for reasons pointed out in other comments.

Good point. Changed the title accordingly. 

This is just counting the number of published papers, and doesn't consider the influence (such as citation count or h-index), right?

(Even if that is true, I still found it interesting to see this and I'm glad that you shared it.)

Exactly, this only counts the number. 

Thanks for the kind words (it sometimes feels like those are hard to come by on the forum).

I'm really confused by your code and the results, given that I did a hand count of both Bostrom's and Torres' papers on Google Scholar and their websites, and your count seems off. Bostrom definitely has more than 15 papers and Torres definitely has fewer than 12.

Also, it seems like you excluded object-level papers? I'm confused why I'm not seeing Whittlestone here. I don't think people should take this as good data.

What exactly confused you about the code? It only strips down the names and counts them. 

That someone's publications are undercounted makes sense given how TERRA works, as likely not all publications are captured in the first place, and probably not all captured publications were considered existential risk relevant. When I look at Bostrom's papers, I see several that I would not count as directly x-risk relevant.

Where exactly did you find the number for Torres? Their own website (https://www.xriskology.com/scholarlystuff) lists 15 papers, and that list only goes up to 2020. Torres has published several more papers since then, so this checks out.

I personally did not exclude any papers; I simply used the existing TERRA database. Interestingly, the database contains only one paper by Whittlestone. It seems the keywords currently used by TERRA did not catch Whittlestone's work, so yes, this is an undercount.

Are these only peer-reviewed papers published in journals, or also e.g. ones posted on arXiv?

From what I have seen on TERRA, I think these are almost all peer-reviewed, but from time to time a preprint, a non-peer-reviewed book, or something similar slips in.

TERRA is based on Scopus.

Interesting post, Florian!

I think a better metric for the "academic productivity of org. X" would be "academic papers mainly affiliated with org. X" (this can be operationalised, e.g., as the main author being mainly affiliated with org. X) / "number of authors of the academic paper i mainly affiliated with org. X":

  • I think productivity usually refers to "output"/"input". So, if you like, you can divide your metric by "number of authors mainly affiliated with org. X".
  • It seems fairer if only papers mainly affiliated with org. X contribute to the productivity of org. X. For example, Alexey is affiliated with ALLFED, but his 6 papers do not have much to do with ALLFED's mission.
  • This metric avoids double-counting papers.

(Ideally, the denominator would be "number of hours invested in the academic paper i mainly affiliated with org. X", but this is not available!)
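
A minimal sketch of one way to read this metric (each paper is attributed once, to the main author's organization, and the total is divided by headcount; all names and numbers are made up for illustration):

```python
from collections import Counter

# Hypothetical records: each paper lists the main affiliation of each
# of its authors, with the main author first. Illustrative data only.
papers = [
    {"title": "Paper A", "author_orgs": ["ALLFED", "FHI", "ALLFED"]},
    {"title": "Paper B", "author_orgs": ["FHI"]},
]

# Researchers mainly affiliated with each organization (the "input").
org_headcount = {"ALLFED": 4, "FHI": 10}

# Output: attribute each paper once, to the main author's organization,
# which avoids double counting.
output = Counter(p["author_orgs"][0] for p in papers)

# Productivity = "output"/"input", i.e. papers per researcher.
productivity = {org: output[org] / n for org, n in org_headcount.items()}
print(productivity)  # {'ALLFED': 0.25, 'FHI': 0.1}
```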

Yeah good point. I'll probably do it differently if I revisit this next year. 
