An update here: This COVID-19 forward triage tool now also allows anyone to have a doctor look at their particular case for a very low fee ($12 USD; a free option is currently available for those who need it).
Thanks for this piece, I thought it was interesting!
A small error I noticed while reading through one of the references: the line "For example, France’s GDP per capita is around 60% of US GDP per capita." incorrectly summarizes the cited material. The correct figure is 67%; the 60% figure refers to consumption per person, not GDP. The relevant section of the underlying material is: "As an example, suppose we wish to compare living standards in France and the United States. GDP per person is markedly lower in France: France had a per capita GDP in 2005 of just 67 percent of the U.S. value. Consumption per person in France was even lower — only 60 percent of the U.S., even adding government consumption to private consumption."
I believe that regional talent pools could be another factor in favor of the multiple-organization scenario. For example, something I think a lot about is how the USA could really use an institution like the Future of Humanity Institute (FHI) in the long run. In addition to all of the points made in the original post, I think such an institution would improve the overall health of the "FHI-like research" ecosystem by drawing on a talent pool that is at least somewhat non-overlapping with FHI's.
I think that the talent pools are at least somewhat distinct because a) crossing borders is often logistically challenging or impossible, depending on the scenario; and b) not all job candidates can relocate to the United Kingdom for a variety of personal reasons.
If anyone is interested in discussing a "FHI-like institution in the USA" further, please get in touch with me either via direct message or via ben.harack at visionofearth.org.
This line of inquiry (that rebuilding after wars is quite different from other periods of time) is explored in G. John Ikenberry's After Victory: Institutions, Strategic Restraint, and the Rebuilding of Order After Major Wars. A quick and entertaining summary of the book - and how it has held up since its publication - was written by Ikenberry in 2018: Reflections on After Victory.
While I'm sympathetic to this view (since I held it for much of my life), I have also learned that there are very significant risks to developing this capacity naively.
To my knowledge, one of the first people to talk publicly about this was Carl Sagan, who discussed this in his television show Cosmos (1980), and in these publications:
Harris, A., Canavan, G., Sagan, C. and Ostro, S., 1994. The Deflection Dilemma: Use vs. Misuse of Technologies for Avoiding Interplanetary Collision Hazards.
Sagan, C. and Ostro, S.J., 1994. Dangers of asteroid deflection. Nature, 368(6471), p.501.
Sagan, C., 1992. Between enemies. Bulletin of the Atomic Scientists, 48(4), p.24.
Sagan, C. and Ostro, S.J., 1994. Long-range consequences of interplanetary collisions. Issues in Science and Technology, 10(4), pp.67-72.
Two interesting quotes from the last one:
More recently, my collaborator Kyle Laskowski and I have reviewed the relevant technologies (and likely incentives) and have come to a somewhat similar position, which I would summarize as: the advent of asteroid manipulation technologies exposes humanity to catastrophic risk; if left ungoverned, these technologies would open the door to existential risk. If governed, this risk can be reduced to essentially zero. (However, other approaches, such as differential technological development and differential engineering projects, do not seem capable of entirely closing off this risk. Governance seems to be crucial.)
So, we presented a poster at EAG 2019 SF, Governing the Emerging Risk Posed By Asteroid Manipulation Technologies, where we summarized these ideas. We're currently expanding this into a paper. If anyone is keenly interested in this topic, reach out to us (contact info is on the poster).
Epistemic status: I don't have a citation handy for the following arguments, so any reader should consider them merely the embedded beliefs of someone who has spent a significant amount of time studying the solar system and the risks of asteroids.
No, I believe that dark Damocloids will be largely invisible (when they are far away from the sun) even to the new round of telescopes that are being deployed for surveying asteroids. They're very dark and (typically) very far away.
Luckily, I think the consensus is that they're only a small portion of the risk. Most of the risk comes from the near-Earth asteroids (NEAs), since due to orbital mechanics they have many opportunities (~1 per year or so) to strike the Earth, while comets fly through the inner solar system extremely rarely. Thus, as we've moved towards finding all of the really big NEAs, we've moved very significantly towards knowing about the vast majority of the possible "civilization ending" or "mass extinction" events in our near future. There will still be a (very) long tail of real risk here due to objects like the Damocloids, but most of the natural risk of asteroids will be addressed if we completely understand the NEAs.
Thanks for taking a look at the arguments and taking the time to post a reply here! Since this topic is still pretty new, it benefits a lot from each new person taking a look at the arguments and data.
I agree completely regarding information hazards. We've been thinking about these extensively over the last several months (and consulting with various people who are able to hold us to task about our position on them). In short, we chose every point on that poster with care. In some cases we're talking about things that have been explored extensively by major public figures or sources, such as Carl Sagan or the RAND Corporation. In other cases, we're in new territory. We've definitely considered keeping our silence on both counts (also see https://forum.effectivealtruism.org/posts/CoXauRRzWxtsjhsj6/terrorism-tylenol-and-dangerous-information if you haven't seen it yet). As it stands, we believe that the arguments in the poster (and the information undergirding those points) are of pretty high value to the world today and would actually be more dangerous if publicized at a later date (e.g., when space technologies are already much more mature and there are many status quo space forces and space industries that will fight regulation of their capabilities).
If you're interested in the project itself, or in further discussions of these hazards/opportunities, let me know!
Regarding the "arms race" terminology concern, you may be referring to https://www.researchgate.net/publication/330280774_An_AI_Race_for_Strategic_Advantage_Rhetoric_and_Risks which I think is a worthy set of arguments to consider when weighing whether and how to speak on key subjects. I do think that a systematic case needs to be made in favor of particular kinds of speech, particularly around 1) constructively framing a challenge that humanity faces and 2) fostering the political will needed to show strategic restraint in the development and deployment of transformative technologies (e.g., through institutionalization in a global project). I think information hazards are an absolutely crucial part of this story, but they aren't the entire story. With luck, I hope to contribute more thoughts along these lines in the coming months.
After reviewing the literature pretty extensively over the last several months for a related project (the risks of human-directed asteroids), it seems to me that there is a strong academic consensus that we've found most of the big ones (though definitely not all - and many people are working hard to create ways for us to find the rest). See this graphic for a good summary of our current status circa 2018: https://www.esa.int/spaceinimages/Images/2018/06/Asteroid_danger_explained
Recently, I've been part of a small team that is working on the risks posed by technologies that allow humans to steer asteroids (opening the possibility of deliberately striking the Earth). We presented some of these results in a poster at EA Global SF 2019.
At the moment, we're expanding this work into a paper. My current position is that this is an interesting and noteworthy technological risk that is (probably) strictly less dangerous/powerful than AI, but working on it can be useful. My reasons include: mitigating a risk that is largely orthogonal to AI is still useful; succeeding at preemptive regulation of a technological risk would improve our ability to do the same for more difficult cases (e.g., AI); and it offers a way to popularize the X-risk concept via a manifestation that is far more concrete than abstract risks from technologies like AI/biotech (most people understand the prevailing theory of the extinction of the dinosaurs and can fairly easily imagine such a disaster in the future).
Factfulness by Hans Rosling is currently my go-to recommendation for the most important single book I could hand to a generic person.
Why do I hold it in such high regard? I think it does a good job of teaching us about the world and about ourselves at the same time. It helps the reader gain better knowledge and a better ability to think clearly (and come to accurate beliefs about the world). It's also very hopeful, despite its tendency to tackle head-on some of the darker aspects of our world.