Need help planning your career? Probably Good’s 1-1 advising service is back! After refining our approach and expanding our capacity, we’re excited to once again offer personal advising sessions to help people figure out how to build careers that are good for them and for the world.
Our advising is open to people at all career stages who want to have a positive impact across a range of cause areas—whether you're early in your career, looking to make a transition, or facing uncertainty about your next steps. Some applicants come in with specific plans they want feedback on, while others are just beginning to explore what impactful careers could look like for them. Either way, we aim to provide useful guidance tailored to your situation.
Learn more about our advising program and apply here.
Also, if you know someone who might benefit from an advising call, we’d really appreciate you passing this along. We look forward to hearing from those interested, and feel free to get in touch if you have any questions.
Finally, we wanted to say a big thank you to 80,000 Hours for their help! The input that they gave us, both now and earlier in the process, was instrumental in shaping what our advising program will look like, and we really appreciate their support.
At the start of Chapter 6 of The Precipice, Ord writes:
This made me recall hearing about Matsés, a language spoken by an indigenous tribe in the Peruvian Amazon, which has the (apparently) unusual feature of using verb conjugations to indicate the certainty of the information conveyed in a sentence. From an article in Nautilus:
I doubt the Matsés spend much time talking about existential risk, but their language could provide an interesting example of how to more effectively convey aspects of certainty, probability and evidence in natural language.
According to Fleck's thesis, Matsés has nine past-tense conjugations, one for each combination of information source (direct experience, inference, or conjecture) and temporal distance (recent past, distant past, or remote past). Hearsay and history/mythology are also marked in a distinctive way. For expressing certainty, Matsés has a particle ada/-da and a verb suffix -chit, both meaning something like "perhaps", and another particle, ba, meaning something like "I doubt that...". Unfortunately for us, this doesn't seem more expressive than what English speakers typically say. I've only read a small fraction of Fleck's 1,279-page thesis, so it's possible that I missed something. I wrote a lengthier description of the evidential and epistemic modality system in Matsés at https://forum.effectivealtruism.org/posts/MYCbguxHAZkNGtG2B/matses-are-languages-providing-epistemic-certainty-of?commentId=yYtEWoHQEFuWCehWt.
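As a toy illustration of the combinatorial structure Fleck describes (this is only a schematic of the category labels above, not actual Matsés morphology), the nine past tenses can be generated as the cross product of the three information sources and the three temporal distances:

```python
from itertools import product

# Schematic of the categories only; not real Matsés morphemes.
sources = ["direct experience", "inference", "conjecture"]
distances = ["recent past", "distant past", "remote past"]

# Nine past-tense conjugations: one per (source, distance) pair.
past_tenses = list(product(sources, distances))
assert len(past_tenses) == 9

for source, distance in past_tenses:
    print(f"{source} + {distance}")
```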
Participants in the 2008 FHI Global Catastrophic Risk conference estimated the probability of extinction from nanotechnology at 5.5% (weapons + accident) and from non-nuclear wars at 3% (all wars - nuclear wars); the values are on the GCR Wikipedia page. In The Precipice, Ord estimated the existential risk from "other anthropogenic risks" (noted in the text as including, but not limited to, nanotechnology, and which I interpret as including non-nuclear wars) at 2% (1 in 50). (Note that, by definition, extinction risk is a subset of existential risk.)
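To make the parenthetical arithmetic explicit, here is a minimal sketch in Python. The individual figures (nanotech weapons 5%, nanotech accident 0.5%, all wars 4%, nuclear wars 1%) are my reading of the survey's median extinction-risk estimates, so treat them as assumptions:

```python
# Median extinction-risk estimates (by 2100) from the 2008 FHI GCR
# survey, as I read the table on the Wikipedia page; treat the
# individual figures as assumptions.
survey = {
    "nanotech weapons": 0.05,
    "nanotech accident": 0.005,
    "all wars": 0.04,
    "nuclear wars": 0.01,
}

# "Nanotechnology" above = weapons + accident
nanotech = survey["nanotech weapons"] + survey["nanotech accident"]

# "Non-nuclear wars" above = all wars - nuclear wars
non_nuclear_wars = survey["all wars"] - survey["nuclear wars"]

print(f"nanotechnology:   {nanotech:.1%}")          # 5.5%
print(f"non-nuclear wars: {non_nuclear_wars:.1%}")  # 3.0%
```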
Since starting to engage with EA in 2018, I have seen very little discussion of nanotechnology or non-nuclear warfare as existential risks, yet in 2008 these were apparently considered risks on par with today's top longtermist cause areas (nanotechnology weapons and AGI extinction risks were both estimated at 5%). I realize that Ord's risk estimates are his own while the 2008 data is from a survey, but I assume that his views broadly represent those of his colleagues at FHI and others in the GCR community.
My open question is: what new information or discussion over the last decade led the GCR community to reduce its estimates of the risks posed primarily by nanotechnology, and also by conventional warfare?
I too find this an interesting topic. More specifically, I wonder why I've seen so little discussion of nanotech published in the last few years (as opposed to discussion from more than 10 years ago). I also wonder about the limited discussion of things like very long-lasting totalitarianism, though there I don't have reason to believe people recently had reasonably high x-risk estimates; I just sort of feel I haven't yet seen a good reason to deprioritise investigating that possible risk. (I'm not saying that there should be more discussion of these topics or that there are no good reasons for the lack of it, just that I wonder about it.)
I'm not sure that's a safe assumption. The 2008 survey you're discussing seems to have itself involved widely differing views (see the graphs on the last pages). And more generally, the existential risk and GCR research community seems to have widely differing views on risk estimates (see a collection of side-by-side estimates here).
I would also guess that each individual's estimates might themselves be relatively unstable, varying from one time you ask to another, or from one phrasing of the question to another.
Relatedly, I'm not sure how decision-relevant differences of less than an order of magnitude between estimates are. (Though such differences could sometimes be decision-relevant, and larger differences more easily could be.)
In case you hadn't seen it: 80,000 Hours recently released a post with a brief discussion of the problem area of atomically precise manufacturing. That also has links to a few relevant sources.
Thanks Michael, I had seen that but hadn't looked at the links. Some comments:
The cause report from OPP distinguishes between molecular nanotechnology and atomically precise manufacturing. The 2008 survey seemed to be explicitly considering weaponised molecular nanotechnology as an extinction risk (I assume the nanotechnology accident category referred to molecular nanotechnology as well). While there seems to be agreement that molecular nanotechnology could be a direct path to GCR/extinction, OPP presents atomically precise manufacturing as more of an indirect risk, for example through facilitating weapons proliferation. The grey goo section of the report does resolve my question about why the community isn't talking about (molecular) nanotechnology as an existential risk as much now (the footnotes are worth reading for more details):
OPP's discussion of why molecular nanotechnology (and cryonics) failed to develop as scientific fields is also interesting:
At least in the case of molecular nanotechnology, the simple failure of the field to develop may have been lucky (at least from a GCR-reduction perspective), as it seems that the research that was (at the time) most likely to lead to the risky outcomes was simply never pursued.
Update: Probably influenced a bit by this discussion, I've now made a tag for posts about Atomically Precise Manufacturing, as well as a link post (with commentary) for that Open Phil report.
I was recently reading the book Subvert! by Daniel Cleather (a colleague) and thought that this quote from Karl Popper, and the author's preceding description of Popper's position, sounded very similar to EA's method of cause prioritisation and theory of change in the world. (Although I believe Popper was writing in the context of fighting threats to democracy rather than threats to well-being, humanity, etc.) I haven't read The Open Society and Its Enemies (or any of Popper's books, for that matter), but I'm now quite interested to see whether there are any other parallels to EA in his work.
I also quite enjoyed Subvert! and would recommend it as a fresh perspective on the philosophy of science. A key point from the book is: