All of David_Kristoffersson's Comments + Replies

BERI is doing an awesome service for university-affiliated groups; I hope more will take advantage of it!

6
Sean_o_h
3y
+1; BERI have been a brilliant support. Strongly recommend applying!

Would you really call Jakub's response "hostile"?

Thanks for posting this. I find it quite useful to get an overview of how the EA community is being managed and developed.

Happy to see the new institute take form! Thanks for doing this, Maxime and Konrad. International long-term governance appears very high-leverage to me. Good luck, and I'm looking forward to seeing more of your work!

  • Some "criticisms" are actually self-fulfilling prophecies
  • EAs are far too inclined to abandon high-EV ideas that are <50% likely to succeed
  • Over-relying on outside views over inside views.
  • Picking the wrong outside view / reference class, or not even considering the different reference classes on offer.

Strong upvote for these.

What I appreciate most about this post is simply the understanding it shows for people in this situation.

It's not easy. Everyone has their own struggles. Hang in there. Take some breaks. You can learn, you can try something slightly different, or something very different. Make sure you have a balanced life, and somewhere to go. Make sure you have good plan Bs (e.g., I myself can always go back to the software industry). In the for-profit and wider world, there are many skills you can learn better than you would working at an EA org.

Great idea and excellent work, thanks for doing this!

This gets me wondering what other kinds of data sources could be integrated (on some other platform, perhaps). And I guess you could fairly easily do statistics to see big-picture differences between the data on the different sites.
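To gesture at what I mean by simple cross-site statistics, here is a minimal sketch. It assumes purely hypothetical CSV exports from two sites with a shared "topic" column and a numeric "score" column; the file names and columns are placeholders, not anything the project actually provides.

```python
import pandas as pd

# Hypothetical exports from two different sites; the file names and the
# "topic"/"score" columns are placeholders for whatever the real data has.
site_a = pd.read_csv("site_a_export.csv")
site_b = pd.read_csv("site_b_export.csv")

# Mean score per topic on each site, lined up side by side on topic.
summary = pd.concat(
    [
        site_a.groupby("topic")["score"].mean().rename("site_a_mean"),
        site_b.groupby("topic")["score"].mean().rename("site_b_mean"),
    ],
    axis=1,
)
print(summary)

# One crude big-picture number: how correlated the two sites are across topics.
print(summary["site_a_mean"].corr(summary["site_b_mean"]))
```

Nothing sophisticated, but once such exports exist, aggregate comparisons like this are cheap to produce.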

Thanks Linch; I actually missed that the prediction had closed!

3
Linch
4y
Yeah the Metaculus UI is not the most intuitive, I should flag this at some point.

Metaculus: Will quantum computing "supremacy" be achieved by 2025? [prediction closed on Jun 1, 2018.]

While I find it plausible that it will happen, I'm not personally convinced that quantum computers will be practically very useful, due to the difficulties in scaling them up.

Note that we believe that quantum supremacy has already been achieved.

As in, the quantum computer Sycamore from Google is capable of solving a (toy) problem that we currently believe to be infeasible on a classical computer.

Of course, there is a more interesting question of when we will be able to solve practical problems using quantum computing. The median expert estimate for a practical attack on modern crypto is ~2035.

Regardless, I believe that outside of (and arguably within) quantum cryptanalysis, the applications will be fairly limited.

The paper in my post... (read more)

Excellent points, Carl. (And Stefan's as well.) We would love to see follow-up posts exploring nuances like these, and I put them into the Convergence list of topics worth elaborating on.

Sounds like you got some pretty great engagement out of this experiment! Great work! This exact kind of project, and the space of related ideas, seems well worth exploring further.

The five people that we decided to reject were given feedback about their translations as well as their motivation letters. We also provided two simple calls to action to them: (1) read our blog and join our newsletter, and (2) follow our FB page and attend our public events. To our awareness, none of these five people have so far taken these actions.

Semi-general comment regardi

... (read more)

Variant of Korthon's comment:

I never look at the "forum favorites" section. It seems like it's looked the same forever and it takes up a lot of screen real estate without any use for me!

I just updated this section and it now shows randomized posts.

6
Habryka
4y
Same is true for me (as the person who built the feature). On LessWrong the recommendations are randomized but for some reason on the EA Forum the admins/devs decided to always have them be strictly ordered by the latest highest karma posts you haven’t read, so they never change, and inevitably end up in a configuration where you’re not interested in any of the posts.

Vision of Earth fellows Kyle Laskowski and Ben Harack had a poster session on this topic at EA Global San Francisco 2019: https://www.visionofearth.org/wp-content/uploads/2019/07/Vision-of-Earth-Asteroid-Manipulation-Poster.pdf

They were also working on a paper on the topic.

2
MichaelDello
4y
Neat, I'll have to get in touch, thanks.

Thank you for this article, Michael! I like seeing the different mainline definitions of existential risk and catastrophe alongside each other, and having some common misunderstandings clarified.

Just a minor comment:

That said, at least to me, it seems that “destruction of humanity’s longterm potential” could be read as meaning the complete destruction. So I’d personally be inclined to tweak Ord’s definitions to:

  • An existential catastrophe is the destruction of the vast majority of humanity’s long-term potential.
  • An existential risk is a risk that threat
... (read more)

I think this is an excellent initiative, thank you, Michael! (Disclaimer: Michael and I work together on Convergence.)

An assortment of thoughts:

  • More estimates of x-risks, and more studious ones, seem clearly very high-value to me, given how much the likelihood of risks and events affects priorities and how the quality of the estimates affects our communication about these matters.
  • More estimates should generally increase our common knowledge of the risks, and individually, if people think about how to make these estimates, they will reach a deeper understa
... (read more)
5
MichaelA
4y
I strongly agree about the value of breaking down the causes of one's estimates, and about estimates building on new sources of info being particularly interesting. And I tentatively agree with your other points. Two things I'd add:
  • Beard et al. have an interesting passage relevant to the idea of "Breaking down the causes of one's estimates":
  • I think an intro post on how to do estimates of this type better could be valuable. I also think it would likely benefit by drawing on the insights in (among other things) the sources I linked to in this sentence: "Some discussion of good techniques for forecasting, which may or may not apply to such long-range and extreme-outcome forecasts, can be found here, here, here, here, and here." And Beard et al. is also relevant, though much of what it covers might be hard for individual forecasters to implement with low effort.

This kind of complexity tells me that we should talk more often of risk percentages in terms of the different scenarios they are associated with: e.g., the kind of current trajectory Ord is using, and also possibly better trajectories (if society acts more wisely) and possibly worse trajectories (if society makes major mistakes), and what the probabilities are under each of these.

We can't entirely disentangle talking about future risks and possibilities from the different possible choices of society, since these choices are what shapes the future. What we do affects these choices.
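A minimal sketch of that scenario-conditioned framing, with purely illustrative numbers (placeholders, not estimates of anything):

```python
# Placeholder numbers only: decompose an overall risk figure into
# scenario-conditional risks via the law of total probability.
scenarios = {
    # trajectory: (probability society follows it, risk conditional on it)
    "current trajectory": (0.60, 0.15),
    "wiser trajectory": (0.25, 0.03),
    "major-mistakes trajectory": (0.15, 0.40),
}

overall_risk = sum(p_traj * p_risk for p_traj, p_risk in scenarios.values())
print(f"Overall risk: {overall_risk:.4f}")  # 0.09 + 0.0075 + 0.06 = 0.1575
```

The headline figure is then just a weighted average, and the trajectory-conditional numbers and weights carry most of the information.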

(Also, maybe you should edit the original post to include the quote you included here or parts of it.)

Happy to see you found it useful, Adam! Yes, general technological development corresponding to scaling of the vector is exactly the kind of intuition it's meant to carry.

But beyond the trajectories (and maybe specific distances), are you planning on representing the other elements you mention? Like the uncertainty or the speed along trajectories?

Thanks for your comment. Yes; the other elements, like uncertainty, would definitely be part of further work on the trajectories model.

I think that if I could unilaterally and definitively decide on the terms, I'd go with "differential technological development" (so keep that one the same), "differential intellectual development", and "differential development". I.e., I'd skip the word "progress", because we're really talking about something more like "lasting changes", without the positive connotations.

I agree, "development" seems like a superior word to reduce ambiguities. But as you say, this is a summary post, so it might not the best place to suggest switching up terms.

Here's two

... (read more)

Thanks Tobias, I think you make a really good point! You're definitely right that there are some in the cause area who don't think the technological transformation is likely.

I don't think you've established that the 'technological transformation' is essential.

What I wanted to say with this post is that it's essential to the view of a large majority in the cause area. The article is not really meant to do a good job of arguing that it should be essential to people's views.

It's possible I'm wrong about the size of the majority; but this was definitely my

... (read more)

The long term future is especially popular among EAs living in Oxford, not surprising given the focus of the Global Priorities Institute on longtermism

Even more than that, the Future of Humanity Institute has been in Oxford since 2005!

2
Neil_Dullaghan
4y
Good point! Thanks. I have added FHI to the text.

I'm not arguing "AI will definitely go well by default, so no one should work on it". I'm arguing "Longtermists currently overestimate the magnitude of AI risk".

Thanks for the clarification Rohin!

I also agree overall with reallyeli.

I'm sympathetic to many of the points, but I'm somewhat puzzled by the framing that you chose in this letter.

Why AI risk might be solved without additional intervention from longtermists

This sends me the message that longtermists should care less about AI risk.

Though, the people in the "conversations" all support AI safety research. And, from Rohin's own words:

Overall, it feels like there's around 90% chance that AI would not cause x-risk without additional intervention by longtermists.

10% chance of existential risk from AI sounds like a problem of catas

... (read more)
8
Rohin Shah
4y
I do believe that, and so does Robin. I don't know about Paul and Adam, but I wouldn't be surprised if they thought so too. Well, it's unclear if Robin supports AI safety research, but yes, the other three of us do. This is because: (Though I'll note that I don't think the 10% figure is robust.) I'm not arguing "AI will definitely go well by default, so no one should work on it". I'm arguing "Longtermists currently overestimate the magnitude of AI risk". I also broadly agree with reallyeli: And this really does have important implications: if you believe "non-robust 10% chance of AI accident risk", maybe you'll find that biosecurity, global governance, etc. are more important problems to work on. I haven't checked myself -- for me personally, it seems quite clear that AI safety is my comparative advantage -- but I wouldn't be surprised if on reflection I thought one of those areas was more important for EA to work on than AI safety.
3
Eli Rose
4y
I had the same reaction (checking in my head that a 10% chance still merited action). However I really think we ought to be able to discuss guesses about what's true merely on the level of what's true, without thinking about secondary messages being sent by some statement or another. It seems to me that if we're unable to do so, that will make the difficult task of finding truth even more difficult.

Good point: 'x-risk' is short, and 'reduction' should be, or should become, implicit after some short steps of thinking. It will work well in many circumstances, for example in "I work with x-risk", just as "I work with/in global poverty" works. Though some interjections that occur to me in the moment are: "the cause of x-risk" feels clumsy, "letter, dash, and then a word" feels like an odd construct, and it's a bit negatively oriented.

Thank you for your thoughtful comment!

All work is future oriented

Indeed. You don't tend to employ the word 'future' or emphasize it for most work, though.

One alternative could be 'full future', signifying that it encompasses both the near and long term.

I think there should be space for new and more specific terms. 'Long term' has strengths, but it's overloaded with many meanings. 'Existential risk reduction' is specific but quite a mouthful; something shorter would be great. I'm working on another article where I will offer one new alternative.

3
MichaelStJules
4y
Isn't just "x-risk" okay? Or is too much lost in the abbreviation? I suppose people might confuse it for extinction risks specifically, instead of existential risks generally, but you could write it out as "existential risks (x-risks)" or "x-risks (existential risks)" the first time in an article. Also, "reduction" seems kind of implicit due to the negative connotations of the word "risk" (you could reframe as "existential opportunities" if you wanted to flip the connotation). No one working on global health and poverty wants to make people less healthy or poorer, and no one working on animal welfare wants to make animals suffer more.

Excellent analysis, thank you! The issue definitely needs a more nuanced discussion. The increasing automation of weaponry (and other technology) won't be stopped globally and pervasively, so we should endeavor to shape how it is developed and applied in a more positive direction.

Indeed! We hope we can deliver that sooner rather than later. Though foundational research may need time to properly come to fruition.

Thanks for your detailed comment, Max!

Relative to my own intuitions, I feel like you underestimate the extent to which your "spine" ideally would be a back-and-forth between its different levels 

I agree, the "spine" glosses over a lot of the important dynamics.

I think I would find it easier to understand to what extent I agree with your recommendations if you gave specific examples of (i) what you consider to be valuable past examples of strategy research, and (ii) how you're planning to do strategy research going forward (or what
... (read more)
5
Max_Daniel
5y
Thank you for your response, David! One quick observation: I agree that the current idea cluster of existential risk reduction was formed through research. However, it seems that one key difference between our views is: you seem to be optimistic that future research of this type (though different in some ways, as you say later) would uncover similarly useful insights, while I tend to think that the space of crucial considerations we can reliably identify with this type of research has been almost exhausted. (NB I think there are many more crucial considerations "out there", it's just that I'm skeptical we can find them.) If this is right, then it seems we actually make different predictions about the future, and you could prove me wrong by delivering valuable strategy research outputs within the next few years.

Fiscal sponsorship can be very helpful for new groups!

Though regarding attorney fees:

Official nonprofit status can take many months to get in the US, and cost $10-30k of attorney fees.

Where are you getting this from? Attorney fees are on the order of $2-5k.

https://nonprofitelite.com/how-much-will-it-cost-to-get-501c3-tax-exempt-2/

CPA’s and attorneys who specialize in nonprofit organizations routinely charge $2,500–$5,000 for preparation of IRS Form 1023 applications for small organizations, and $6,000-$15,000 for more complex ventures. 

The following two f... (read more)

2
Ozzie Gooen
5y
Interesting. This came from chats I had with an attorney. That said, they were based in SF, so maybe their prices were higher. I also asked how much it would cost to do "everything", which I think meant more than strictly file the IRS Form 1023. I believe there's a lot of work that could be done by either yourself or the attorney, and I would hope that in many cases we could generally lean more on the attorney for that work.

Good points.

Perhaps funding organizations would like better ways of figuring out the risks of supporting new projects? I think valuable work could be done here.

One way to think about it* is to project the space along two axes: "project size" and "risks/establishedness".

Justin Shovelain came up with that. (Justin and I were both on the strategy team of AISC 1.)