The map of x-risk-preventing organizations, people and internet resources

 

There have been three known attempts to map the field of x-risk prevention.

The first is an organization directory compiled by the Global Catastrophic Risk Institute in 2012-2013; many of its links no longer work:

http://gcrinstitute.org/organization-directory/

 

The second was made by Stuart Armstrong in 2014:

http://lesswrong.com/lw/k81/organisations_working_on_multiple_global/


The most beautiful and useful map was created by Andrew Critch:

http://acritch.com/x-risk-2015/

However, its ecosystem view ignores organizations that hold a different view of the nature of global risks (that is, organizations that share the value of x-risk prevention but have a different worldview).

 

In my map I have tried to include all currently active organizations that share the value of global risk prevention.

 

The map also treats some active independent people as organizations, if they maintain an important blog or field of research; however, not every relevant person is mentioned. If you think that you (or someone else) should be on it, please write to me at alexei.turchin@gmail.com

I used only open sources and public statements to learn about the people and organizations, so I cannot provide information about the underlying network of relations.

 

I have tried to give each organization a short description based on its public statements, as well as my own opinion of its activity.

 

In general, it seems that all the small organizations focus on collaboration with the larger ones, namely MIRI and FHI, while tending to ignore each other; this is easily explained by social signaling theory. Another explanation is that the larger organizations have a greater ability to make contacts.

 

It also appears that there are several organizations with similar goal statements. 

 

The most cooperation appears to exist in the field of AI safety, but most of the structure of this cooperation is invisible to an outside observer, in contrast to Wikipedia, where the contributions of all individuals are visible.

 

It seems that the community in general lacks three things: a unified internet forum for public discussion, an x-risk wiki, and an x-risk-related scientific journal.

 

Ideally, the forum would be used to brainstorm ideas; the scientific journal to publish the best ideas, peer-review them, and present them to the wider scientific community; and the wiki to collect results.

 

Currently, it seems more as if each organization is interested in producing its own research and hoping that someone will read it. Each small organization seems to want to be the sole presenter of solutions to global problems and to gain the full attention of the UN and governments. This raises the problems of noise and rivalry, and also the problem of possibly incompatible solutions, especially in AI safety.

The PDF is here: http://immortality-roadmap.com/riskorg5.pdf

 

Comments

Some minor comments:

Global Priorities Project has merged into the Centre for Effective Altruism (see here). We are continuing to do research on questions related to existential risk, though we are not currently planning to write new reports on the topic like the report mentioned above.

Leverage Research, not Leveraged Research

Toby Ord, not Tobi Ord

80,000 Hours, not 80 000

"though we are not currently planning to write new reports on the topic like the report mentioned above."

Although note that we have a significant one still to be released, which we are just polishing!

Thanks, I will immediately update.


In general, I think this is a great map! Some more minor typos to fix:

Milan Cirkovic (or Ćirković), not Milan Circovic

Norwegian, not Norvegian

Holocene Impact Working Group, not Holocen

Dennis Meadows, not Dennis Medows

Carl Shulman, not Karl Shulman

In the Reddit box: existentialrisk, not existentiarisk

I'm probably missing a few, but those were the ones that stood out to me. I'd also suggest fixing the capitalization of organization names, e.g. "Leverage Research" instead of "Leverage research", "Future of Humanity Institute" instead of "Future of humanity institute", etc. It's not a big deal, but I think it would be an improvement.

Thanks for this great map!

A minor detail: it's a bit inaccurate to say that the Foundational Research Institute works on general x-risks. This text explains that FRI focuses on reducing risks of astronomical suffering, which is related to, but not the same as, x-risk reduction.
