David_Kristoffersson

Co-founder and researcher at Convergence (http://convergenceanalysis.org).

Convergence does foundational existential risk strategy research.

Past: R&D Project Manager, Software Engineer.

David_Kristoffersson's Comments

State Space of X-Risk Trajectories

Happy to see you found it useful, Adam! Yes, general technological development corresponding to scaling of the vector is exactly the kind of intuition it's meant to carry.

State Space of X-Risk Trajectories

But beyond the trajectories (and maybe specific distances), are you planning on representing the other elements you mention? Like the uncertainty or the speed along trajectories?

Thanks for your comment. Yes; the other elements, like uncertainty, would definitely be part of further work on the trajectories model.

Differential progress / intellectual progress / technological development

I think that if I could unilaterally and definitively decide on the terms, I'd go with "differential technological development" (so keep that one the same), "differential intellectual development", and "differential development". I.e., I'd skip the word "progress", because we're really talking about something more like "lasting changes", without the positive connotations.

I agree, "development" seems like a superior word for reducing ambiguity. But as you say, this is a summary post, so it might not be the best place to suggest switching up terms.

Here are two long-form alternatives to "differential progress"/"differential development": differential societal development, and differential civilizational development.

EA Survey 2019 Series: Geographic Distribution of EAs

The long term future is especially popular among EAs living in Oxford, not surprising given the focus of the Global Priorities Institute on longtermism.

Even more than that: the Future of Humanity Institute has been in Oxford since 2005!

[AN #80]: Why AI risk might be solved without additional intervention from longtermists

I'm not arguing "AI will definitely go well by default, so no one should work on it". I'm arguing "Longtermists currently overestimate the magnitude of AI risk".

Thanks for the clarification, Rohin!

I also agree overall with reallyeli.

[AN #80]: Why AI risk might be solved without additional intervention from longtermists

I'm sympathetic to many of the points, but I'm somewhat puzzled by the framing you chose in this newsletter.

Why AI risk might be solved without additional intervention from longtermists

This title sends me the message that longtermists should care less about AI risk.

Though the people in the "conversations" all support AI safety research. And, in Rohin's own words:

Overall, it feels like there's around 90% chance that AI would not cause x-risk without additional intervention by longtermists.

A 10% chance of existential risk from AI sounds like a problem of catastrophic proportions to me. It implies that we need many more resources spent on existential risk reduction, though perhaps not strictly on technical AI safety. Perhaps more marginal resources should be directed to strategy-oriented research instead.

The ‘far future’ is not just the far future

Good point: 'x-risk' is short, and 'reduction' should become implicit after a few short steps of thinking. It will work well in many circumstances. For example, "I work with x-risk" works just as "I work with/in global poverty" does. Some objections that occur to me in the moment, though: "the cause of x-risk" feels clumsy, "letter, dash, and then a word" is an odd construct, and it's a bit negatively oriented.

The ‘far future’ is not just the far future

Thank you for your thoughtful comment!

All work is future oriented

Indeed. You don't tend to employ the word 'future' or emphasize it for most work, though.

One alternative could be 'full future', signifying that it encompasses both the near and long term.

I think there should be space for new and more specific terms. 'Long term' has strengths, but it's overloaded with many meanings. 'Existential risk reduction' is specific but quite a mouthful; something shorter would be great. I'm working on another article where I will offer one new alternative.

On AI Weapons

Excellent analysis, thank you! The issue definitely needs a more nuanced discussion. The increasing automation of weaponry (and other technology) won't be stopped globally and pervasively, so we should endeavor to shape how it is developed and applied in a more positive direction.
