Co-founder of and researcher at Convergence. Convergence does foundational existential risk strategy research. See here for our growing list of publications.

Past: R&D Project Manager, Software Engineer.

David_Kristoffersson's Comments

Why making asteroid deflection tech might be bad

Vision of Earth fellows Kyle Laskowski and Ben Harack had a poster session on this topic at EA Global San Francisco 2019: https://www.visionofearth.org/wp-content/uploads/2019/07/Vision-of-Earth-Asteroid-Manipulation-Poster.pdf

They were also working on a paper on the topic.

Clarifying existential risks and existential catastrophes

Thank you for this article, Michael! I like seeing the different mainline definitions of existential risk and catastrophe alongside each other, and having some common misunderstandings clarified.

Just a minor comment:

That said, at least to me, it seems that “destruction of humanity’s longterm potential” could be read as meaning the complete destruction. So I’d personally be inclined to tweak Ord’s definitions to:

  • An existential catastrophe is the destruction of the vast majority of humanity’s long-term potential.
  • An existential risk is a risk that threatens the destruction of the vast majority of humanity’s long-term potential.[4]

Ord was presumably going for brevity in his book, and I think his definition succeeds quite well! I don't think adding four words to Ord's nice short definition would generally be worth it. There are other details that could be expanded on as well (like how the definition in Bostrom 2012 can mostly be considered a more expanded one). Expanding does help when discussing a particular point, though.

Database of existential risk estimates

I think this is an excellent initiative, thank you, Michael! (Disclaimer: Michael and I work together on Convergence.)

An assortment of thoughts:

  • More, and more studious, estimates of x-risks seem clearly very high value to me, given how much the likelihood of risks and events affects priorities, and how much the quality of the estimates affects our communication about these matters.
  • More estimates should generally increase our common knowledge of the risks, and individually, people who think through how to make these estimates will reach a deeper understanding of the questions.
  • Breaking down the causes of one's estimates is generally valuable. It allows one to improve one's estimates, understanding of causation, and to discuss them in more detail.
  • More estimates can be bad if low-quality estimates somehow drown out higher-quality ones.
  • Estimates that build on sources of information that earlier estimates didn't use are especially interesting: independent data sources increase our overall knowledge.
  • I see room for someone to write an intro post on how to do estimates of this type better. (Scott Alexander's old posts here might be interesting.)

My thoughts on Toby Ord’s existential risk estimates

This kind of complexity tells me that we should more often talk about risk percentages in terms of the different scenarios they are associated with. E.g., the "current trajectory" framing Ord is using, but also possibly better trajectories (if society acts more wisely) and possibly worse ones (if society makes major mistakes), and what the probabilities are under each of these.

We can't entirely disentangle talking about future risks and possibilities from society's different possible choices, since these choices are what shapes the future. What we do affects these choices.

(Also, maybe you should edit the original post to include the quote you included here or parts of it.)

State Space of X-Risk Trajectories

Happy to see you found it useful, Adam! Yes, general technological development corresponding to scaling of the vector is exactly the kind of intuition it's meant to carry.

State Space of X-Risk Trajectories

But beyond the trajectories (and maybe specific distances), are you planning on representing the other elements you mention? Like the uncertainty or the speed along trajectories?

Thanks for your comment. Yes; the other elements, like uncertainty, would definitely be part of further work on the trajectories model.

Differential progress / intellectual progress / technological development

I think that if I could unilaterally and definitively decide on the terms, I'd go with "differential technological development" (so keep that one the same), "differential intellectual development", and "differential development". I.e., I'd skip the word "progress", because we're really talking about something more like "lasting changes", without the positive connotations.

I agree, "development" seems like a better word for reducing ambiguity. But as you say, this is a summary post, so it might not be the best place to suggest switching up terms.

Here are two long-form alternatives to "differential progress"/"differential development": differential societal development, differential civilizational development.

EA Survey 2019 Series: Geographic Distribution of EAs

The long term future is especially popular among EAs living in Oxford, not surprising given the focus of the Global Priorities Institute on longtermism

Even more than that: the Future of Humanity Institute has been in Oxford since 2005!

[AN #80]: Why AI risk might be solved without additional intervention from longtermists

I'm not arguing "AI will definitely go well by default, so no one should work on it". I'm arguing "Longtermists currently overestimate the magnitude of AI risk".

Thanks for the clarification Rohin!

I also agree overall with reallyeli.
