
At the start of Chapter 6 in The Precipice, Ord writes:

To do so, we need to quantify the risks. People are often reluctant to put numbers on catastrophic risks, preferring qualitative language, such as “improbable” or “highly unlikely.” But this brings serious problems that prevent clear communication and understanding. Most importantly, these phrases are extremely ambiguous, triggering different impressions in different readers. For instance, “highly unlikely” is interpreted by some as one in four, but by others as one in 50. So much of one’s work in accurately assessing the size of each risk is thus immediately wasted. Furthermore, the meanings of these phrases shift with the stakes: “highly unlikely” suggests “small enough that we can set it aside,” rather than neutrally referring to a level of probability. This causes problems when talking about high-stakes risks, where even small probabilities can be very important. And finally, numbers are indispensable if we are to reason clearly about the comparative sizes of different risks, or classes of risks.

This made me recall hearing about Matsés, a language spoken by an indigenous tribe in the Peruvian Amazon that has the (apparently) unusual feature of using verb conjugations to indicate the certainty of the information being provided in a sentence. From an article in Nautilus:

In Nuevo San Juan, Peru, the Matsés people speak with what seems to be great care, making sure that every single piece of information they communicate is true as far as they know at the time of speaking. Each uttered sentence follows a different verb form depending on how you know the information you are imparting, and when you last knew it to be true.
...
The language has a huge array of specific terms for information such as facts that have been inferred in the recent and distant past, conjectures about different points in the past, and information that is being recounted as a memory. Linguist David Fleck, at Rice University, wrote his doctoral thesis on the grammar of Matsés. He says that what distinguishes Matsés from other languages that require speakers to give evidence for what they are saying is that Matsés has one set of verb endings for the source of the knowledge and another, separate way of conveying how true, or valid the information is, and how certain they are about it. Interestingly, there is no way of denoting that a piece of information is hearsay, myth, or history. Instead, speakers impart this kind of information as a quote, or else as being information that was inferred within the recent past.

I doubt the Matsés spend much time talking about existential risk, but their language could provide an interesting example of how to more effectively convey aspects of certainty, probability and evidence in natural language.

According to Fleck's thesis, Matsés has nine past-tense conjugations, each of which expresses both the source of the information (direct experience, inference, or conjecture) and how far in the past it occurred (recent, distant, or remote past). Hearsay and history/mythology are also marked in a distinctive way. For expressing certainty, Matsés has a particle ada/-da and a verb suffix -chit, which mean something like "perhaps", and another particle, ba, which means something like "I doubt that...". Unfortunately for us, this doesn't seem more expressive than what English speakers typically say. I've only read a small fraction of Fleck's 1279-page thesis, so it's possible that I missed something. I wrote a lengthier description of the evidential and epistemic modality system in Matsés at https://forum.effectivealtruism.org/posts/MYCbguxHAZkNGtG2B/matses-are-languages-providing-epistemic-certainty-of?commentId=yYtEWoHQEFuWCehWt.

Participants in the 2008 FHI Global Catastrophic Risk conference estimated the probability of extinction from nanotechnology at 5.5% (weapons + accident) and from non-nuclear wars at 3% (all wars minus nuclear wars); the values are on the GCR Wikipedia page. In The Precipice, Ord estimated the existential risk from "other anthropogenic risks" (noted in the text as including, but not limited to, nanotechnology, and which I interpret as including non-nuclear wars) at 2% (1 in 50). (Note that, by definition, extinction risk is a subset of existential risk.)

Since starting to engage with EA in 2018, I have seen very little discussion of nanotechnology or non-nuclear warfare as existential risks, yet in 2008 these were apparently considered risks on par with top longtermist cause areas today (nanotechnology weapons and AGI extinction risks were both estimated at 5%). I realize that Ord's risk estimates are his own while the 2008 data is from a survey, but I assume his views broadly represent those of his colleagues at FHI and of others in the GCR community.

My open question is: what new information or discussion over the last decade led the GCR community to reduce its estimate of the risks posed by (primarily) nanotechnology, and also by conventional warfare?

I too find this an interesting topic. More specifically, I wonder why I've seen so little discussion of nanotech published in the last few years (as opposed to discussion from more than 10 years ago). I also wonder about the limited discussion of things like very long-lasting totalitarianism, though there I don't have reason to believe people recently had reasonably high x-risk estimates; I just feel like I haven't yet seen good reason to deprioritise investigating that possible risk. (I'm not saying that there should be more discussion of these topics, nor that there are no good reasons for the lack of it; just that I wonder about it.)

I realize that Ord's risk estimates are his own while the 2008 data is from a survey, but I assume his views broadly represent those of his colleagues at FHI and of others in the GCR community.

I'm not sure that's a safe assumption. The 2008 survey you're discussing seems to have itself involved widely differing views (see the graphs on the last pages). And more generally, the existential risk and GCR research community seems to have widely differing views on risk estimates (see a collection of side-by-side estimates here).

I would also guess that each individual's estimates might themselves be relatively unstable from one time you ask them to another, or one particular phrasing of the question to another.

Relatedly, I'm not sure how decision-relevant differences of less than an order of magnitude between different estimates are. (Though such differences could sometimes be decision-relevant, and larger differences more easily could be.)

In case you hadn't seen it: 80,000 Hours recently released a post with a brief discussion of the problem area of atomically precise manufacturing. That also has links to a few relevant sources.

Thanks Michael, I had seen that but hadn't looked at the links. Some comments:

The cause report from OPP distinguishes between molecular nanotechnology and atomically precise manufacturing. The 2008 survey seems to have explicitly considered weaponised molecular nanotechnology as an extinction risk (I assume the nanotechnology accident referred to molecular nanotechnology as well). While there seems to be agreement that molecular nanotechnology could be a direct path to GCR/extinction, OPP presents atomically precise manufacturing as more of an indirect risk, for example through facilitating weapons proliferation. The "Grey goo" section of the report does resolve my question about why the community isn't talking as much now about (molecular) nanotechnology as an existential risk (the footnotes are worth reading for more details):

‘Grey goo’ is a proposed scenario in which tiny self-replicating machines outcompete organic life and rapidly consume the earth’s resources in order to make more copies of themselves.[40] According to Dr. Drexler, a grey goo scenario could not happen by accident; it would require deliberate design.[41] Both Drexler and Phoenix have argued that such runaway replicators are, in principle, a physical possibility, and Phoenix has even argued that it’s likely that someone will eventually try to make grey goo. However, they believe that other risks from APM are (i) more likely, and (ii) very likely to be relevant before risks from grey goo, and are therefore more worthy of attention.[42] Similarly, Prof. Jones and Dr. Marblestone have argued that a ‘grey goo’ catastrophe is a distant, and perhaps unlikely, possibility.[43]

OPP's discussion of why molecular nanotechnology (and cryonics) failed to develop as scientific fields is also interesting:

First, early advocates of cryonics and MNT focused on writings and media aimed at a broad popular audience, before they did much technical, scientific work ...
Second, early advocates of cryonics and MNT spoke and wrote in a way that was critical and dismissive toward the most relevant mainstream scientific fields ...
Third, and perhaps largely as a result of these first two issues, these “neighboring” established scientific communities (of cryobiologists and chemists) engaged in substantial “boundary work” to keep advocates of cryonics and MNT excluded ...

At least in the case of molecular nanotechnology, the simple failure of the field to develop may have been lucky (at least from a GCR-reduction perspective), as it seems that the research that was (at the time) most likely to lead to the risky outcomes was simply never pursued.

Update: Probably influenced a bit by this discussion, I've now made a tag for posts about Atomically Precise Manufacturing, as well as a link post (with commentary) for that Open Phil report.

I was recently reading the book Subvert! by Daniel Cleather (a colleague) and thought that this quote from Karl Popper, and the author's preceding description of Popper's position, sounded very similar to EA's method of cause prioritisation and theory of change in the world. (Although I believe Popper was writing in the context of fighting threats to democracy rather than threats to well-being, humanity, etc.) I haven't read The Open Society and Its Enemies (or any of Popper's books, for that matter), but I'm now quite interested to see if he draws any other parallels to EA.

For the philosophical point of view, I again lean heavily on Popper’s The Open Society and Its Enemies.  Within the book, he is sceptical of projects that seek to reform society based upon some grand utopian vision.  Firstly, he argues that such projects tend to require the exercise of strong authority to drive them.  Secondly, he describes the difficulty in describing exactly what utopia is, and that as change occurs, the vision of utopia will shift.  Instead he advocates for “piecemeal social engineering” as the optimal approach for reforming society which he describes as follows:
“The piecemeal engineer will, accordingly, adopt the method of searching for, and fighting against, the greatest and most urgent evils of society, rather than searching for, and fighting for, its greatest ultimate good.”

I also quite enjoyed Subvert! and would recommend it as a fresh perspective on the philosophy of science. A key point from the book is:

The problem is that in practice, scientists often adopt a sceptical, not a subversive, stance.  They are happy to scrutinise their opponents’ results when they are presented at conferences and in papers.  However, they are less likely to be actively subversive, and to perform their own studies to test their opponents’ theories.  Instead, they prefer to direct their efforts towards finding evidence in support of their own ideas.  The ideal mode would be that the proposers and testers of hypotheses would be different people.  In practice they end up being the same person.