MichaelA

I'm a Researcher and Writer for Convergence Analysis (https://www.convergenceanalysis.org/), an existential risk strategy research group. Here's Convergence's list of publications.

Posts of mine that were written for/with Convergence will mention that fact. In other posts, and in most of my comments, opinions expressed are my own.

I want to continually improve, so I welcome feedback of all kinds. You can give me feedback anonymously here: https://forms.gle/S189am4FJXnTqnks9

About half of my posts are on LessWrong: https://www.lesswrong.com/users/michaela

Some more info on my background and interests is here.

MichaelA's Comments

MichaelA's Shortform

There is now a Stanford Existential Risk Initiative, which (confusingly) describes itself as:

a collaboration between Stanford faculty and students dedicated to mitigating global catastrophic risks (GCRs). Our goal is to foster engagement from students and professors to produce meaningful work aiming to preserve the future of humanity by providing skill, knowledge development, networking, and professional pathways for Stanford community members interested in pursuing GCR reduction.

And they write:

What is a Global Catastrophic Risk?
We think of global catastrophic risks (GCRs) as risks that could cause the collapse of human civilization or even the extinction of the human species.

That is much closer to a definition of an existential risk (as long as we assume that the collapse is not recovered from) than of a global catastrophic risk. Given that, and the clash between the term the initiative uses in its name and the term it uses when describing what it will focus on, it appears this initiative is conflating these two terms/concepts.

This is unfortunate, and could lead to confusion, given that there are many events that would be global catastrophes without being existential catastrophes. An example would be a pandemic that kills hundreds of millions but that doesn't cause civilizational collapse, or that causes a collapse humanity later fully recovers from. (Furthermore, there may be existential catastrophes that aren't "global catastrophes" in the standard sense, such as "plateauing — progress flattens out at a level perhaps somewhat higher than the present level but far below technological maturity" (Bostrom).)

For further discussion, see Clarifying existential risks and existential catastrophes.

(I should note that I have positive impressions of the Center for International Security and Cooperation (which this initiative is a part of), that I'm very glad to see that this initiative has been set up, and that I expect they'll do very valuable work. I'm merely critiquing their use of terms.)

Genetic Enhancement as a Cause Area

Interesting post!

For instance, progress in hardware is arguably bottlenecked by economic demand, and would not be significantly accelerated by the advent of a hundred John von Neumann level scientists. However, deep insights into the nature of intelligence are the type of thing we should expect if we have a highly competent core group of humans working on the problem.

I'm not sure I understand the reasoning here. Doesn't it largely depend on what those hundred JVN-level scientists end up using their talents for? I.e., if they all happened to focus on making breakthroughs in hardware (or in related areas, perhaps more theoretical/fundamental ones), wouldn't that mean that hardware would be considerably advanced? Or couldn't it be that they all focus on areas fairly unrelated to either hardware or the nature of intelligence, leaving us in roughly the same position we were in before?

(These questions are more sincere than rhetorical; I may just be missing something.)

How to generate research proposals
Also, I'd encourage you to write down your own research agenda in the form of a blogpost listing some open questions in this forum!

I'd (very belatedly) second that. You could also then comment about that list of questions in this central directory for open research questions. (Or even just write the whole list as a comment on that post to begin with, I guess.)

Why making asteroid deflection tech might be bad

(Just a tangential clarification) You write:

* An ‘existential threat’ typically refers to an event that could kill either all human life, or all life in general.

That describes an extinction risk, which is one type of existential risk, but not the only type. Here are two of the most prominent definitions of existential risk:

An existential risk is one that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development (Bostrom, emphasis added)

And:

An existential risk is a risk that threatens the destruction of humanity’s longterm potential. (Ord, The Precipice, emphasis added)

(See here for more details. And here for definitions of "global catastrophic risks".)

Why making asteroid deflection tech might be bad
Asteroid impact mitigation is not thought to be the most pressing existential threat (e.g. artificial intelligence or global pandemics)

I agree with that. I've also created a database of existential risk estimates, which can give a sense of various people's views on which risks are greatest and how large the differences are. It also includes a few estimates relevant to asteroid risk.

Why making asteroid deflection tech might be bad
The idea that developing asteroid deflection technology is good is so entrenched in popular opinion that it seems like arguing for less or no spending in the area might be a bad idea. This seems like a similar situation to where AI safety researchers find themselves. Advocating for less funding and development of AI seems relatively intractable, so they instead work on solutions to make AI safer. Another similar example is that of pandemics research – it has obvious benefits in building resilience to natural pandemics, but may also enable a malicious or accidental outbreak of an engineered pathogen.

I'm not sure about this. I don't think I've ever heard the idea that asteroid deflection technology would be good (or even heard about such technology at all) outside of EA. In contrast, potential benefits from AI are discussed widely, as are potential benefits from advanced medicine (and, to a lesser extent, from biotech advancements, and perhaps to a small extent from pandemics research).

So I'm not sure if there is even widespread awareness of asteroid deflection technology, let alone entrenched views that it'd be good. This might mean pushing for differential progress in relation to this tech would be more tractable than that paragraph implies.

Why making asteroid deflection tech might be bad

Interesting post. The key points raised make sense to me.

I'll share a few quick thoughts in separate comments.

Firstly, this issue was briefly discussed in The Precipice by Toby Ord. Though I'm not sure if that discussion contained any important insights that this post missed.

Secondly, something that seemed (in my non-expert opinion) slightly odd about the discussion in The Precipice, and that also seems applicable to this post, is the apparent focus just on how the benefits from being able to deflect asteroids away from the Earth compare to the risks from being able to deflect asteroids towards the Earth, without also discussing the risks just from a proliferation of additional nuclear explosives and related technologies. I.e., perhaps the explosives developed for use in deflection could just be used "directly" on targets on the Earth?

It's possible that there's a reason to not talk much about that side of things. Though recently I discovered that GCRI have a paper on that matter, which looks interesting, though I've only read the blog post summary.

gavintaylor's Shortform

I too find this an interesting topic. More specifically, I wonder why I've seen so little discussion of nanotech published in the last few years (as opposed to discussion from >10 years ago). I also wonder about the limited discussion of things like very long-lasting totalitarianism - though there I don't have reason to believe people recently had reasonably high x-risk estimates; I just feel like I haven't yet seen good reason to deprioritise investigating that possible risk. (I'm not saying that there should be more discussion of these topics, or that there are no good reasons for the lack of it, just that I wonder about it.)

I realize that Ord's risk estimates are his own while the 2008 data is from a survey, but I assume that his views broadly represent those of his colleagues at FHI and others in the GCR community.

I'm not sure that's a safe assumption. The 2008 survey you're discussing seems to have itself involved widely differing views (see the graphs on the last pages). And more generally, the existential risk and GCR research community seems to have widely differing views on risk estimates (see a collection of side-by-side estimates here).

I would also guess that each individual's estimates might themselves be relatively unstable from one time you ask them to another, or one particular phrasing of the question to another.

Relatedly, I'm not sure how decision-relevant differences of less than an order of magnitude between different estimates are. (Though such differences could sometimes be decision-relevant, and larger differences more easily could be.)

I knew a bit about misinformation and fact-checking in 2017. AMA, if you're really desperate.

Your paragraph on climate change denial by a smart, scientifically educated person reminded me of some very interesting work by a researcher called Dan Kahan.

An abstract from one paper:

Decision scientists have identified various plausible sources of ideological polarization over climate change, gun violence, national security, and like issues that turn on empirical evidence. This paper describes a study of three of them: the predominance of heuristic-driven information processing by members of the public; ideologically motivated reasoning; and the cognitive-style correlates of political conservativism. The study generated both observational and experimental data inconsistent with the hypothesis that political conservatism is distinctively associated with either unreflective thinking or motivated reasoning. Conservatives did no better or worse than liberals on the Cognitive Reflection Test (Frederick, 2005), an objective measure of information-processing dispositions associated with cognitive biases. In addition, the study found that ideologically motivated reasoning is not a consequence of over-reliance on heuristic or intuitive forms of reasoning generally. On the contrary, subjects who scored highest in cognitive reflection were the most likely to display ideologically motivated cognition. These findings corroborated an alternative hypothesis, which identifies ideologically motivated cognition as a form of information processing that promotes individuals’ interests in forming and maintaining beliefs that signify their loyalty to important affinity groups. The paper discusses the practical significance of these findings, including the need to develop science communication strategies that shield policy-relevant facts from the influences that turn them into divisive symbols of political identity.

Two other relevant papers:

I knew a bit about misinformation and fact-checking in 2017. AMA, if you're really desperate.

Parts of your comment reminded me of something that's perhaps unrelated, but seems interesting to bring up, which is Stefan Schubert's prior work on "argument-checking", as discussed on an 80k episode:

Stefan Schubert: I was always interested in “What would it be like if politicians were actually truthful in election debates, and said relevant things?” [...]
So then I started this blog in Swedish on something that I call argument checking. You know, there’s fact checking. But then I went, “Well there’s so many other ways that you can deceive people except outright lying.” So, that was fairly fun, in a way. I had this South African friend at LSE whom I told about this, that I was pointing out fallacies which people made. And she was like “That suits you perfectly. You’re so judge-y.” And unfortunately there’s something to that.
[...]
Robert Wiblin: What kinds of things did you try to do? I remember you had fact checking, this live fact checking on-
Stefan Schubert: Actually that is, we might have called it fact checking at some point. But the name which I wanted to use was argument checking. So that was like in addition to fact checking, we also checked argument.
Robert Wiblin: Did you get many people watching your live argument checking?
Stefan Schubert: Yeah, in Sweden, I got some traction. I guess, I had probably hoped for more people to read about this. But on the plus side, I think that the very top showed at least some interest in it. A smaller interest than what I had thought, but at least you reach the most influential people.
Robert Wiblin: I guess my doubt about this strategy would be, obviously you can fact check politicians, you can argument check them. But how much do people care? How much do voters really care? And even if they were to read this site, how much would it change their mind about anything?
Stefan Schubert: That’s fair. I think one approach which one might take would be to, following up on this experience, the very top people who write opinion pieces for newspapers, they were at least interested, and just double down on that, and try to reach them. I think that something that people think is that, okay, so there are the tabloids, and everyone agrees what they’re saying is generally not that good. But then you go to the highbrow papers, and then everything there would actually make sense.
So that is what I did. I went for the Swedish equivalent of somewhere between the Guardian and the Telegraph. A decently well-respected paper. And even there, you can point out these glaring fallacies if you dig deeper.
Robert Wiblin: You mean, the journalists are just messing up.
Stefan Schubert: Yeah, or here it was often outside writers, like politicians or civil servants. I think ideally you should get people who are a bit more influential and more well-respected to realize how careful you actually have to be in order to really get to the truth.
Just to take one subject that effective altruists are very interested in, all the writings about AI, where you get people like professors who write the articles which are really very poor on this extremely important subject. It’s just outrageous if you think about it.
Robert Wiblin: Yeah, when I read those articles, I imagine we’re referring to similar things, I’m just astonished. And I don’t know how to react. Because I read it, and I could just see egregious errors, egregious misunderstandings. But then, we’ve got this modesty issue, that we’re bringing up before. These are well-respected people. At least in their fields in kind of adjacent areas. And then, I’m thinking, “Am I the crazy one?” Do they read what I write, and they have the same reaction?
Stefan Schubert: I don’t feel that. So I probably reveal my immodesty.
Of course, you should be modest if people show some signs of reasonableness. And obviously if someone is arguing for a position where your prior that it’s true is very low. But if they’re a reasonable person, and they’re arguing for it well, then you should update. But if they’re arguing in a way which is very emotive – they’re not really addressing the positions that we’re holding – then I don’t think modesty is the right approach.
Robert Wiblin: I guess it does go to show how difficult being modest is when the rubber really hits the road, and you’re just sure about something that someone else you respect just disagrees.
But I agree. There is a real red flag when people don’t seem to be actually engaging with the substance of the issues, which happens surprisingly often. They’ll write something which just suggests, “I just don’t like the tone” or “I don’t like this topic” or “This whole thing makes me kind of mad”, but they can’t explain why exactly.