All of KR's Comments + Replies

KR
4y

My impression is that people like you are pretty rare, but all of this is based on subjective impressions and I could be very wrong.

Have you met a lot of other people who came to AI safety from some background other than the Yudkowsky/Superintelligence cluster?

Geoffrey Irving
4y
Well, part of my job is making new people that qualify, so yes to some extent. This is true both in my current role and in past work at OpenAI (e.g., https://distill.pub/2019/safety-needs-social-scientists).
KR
4y

My understanding of the hinge of history argument is that the current time has more leverage than either the past or future. Even if that's true, it doesn't necessarily mean that it's any more obvious what needs to be done to influence the future.

If I believed that, e.g., AI is obviously the most important lever right now, and thought I knew which direction to push that lever, I would ask myself, "Using the same reasoning, which levers would I have been trying to push in 1920?" As far as I can tell, this test is pretty agnostic about how easy it is to push these levers around; it's only about which ones you would want to be pushing.

KR
4y

Thanks! I ended up expanding it significantly and posting the full version here.

KR
4y

Thought experiment for longtermism: if you were alive in 1920 trying to have the largest possible impact today, would the ideas you came up with without the benefit of hindsight still have an effect today?

I find this a useful intuition pump in general. If someone says "X will happen in 50 years," I imagine myself looking at 2020 from 1970 and asking how many predictions of that sort I made then would have turned out accurate now. The world in 50 years is going to be at least as hard for us to imagine (hopefully harder, given exponential growth) as the world of today would have been from 1970. What did we know? What did we completely miss? What kinds of systematic mistakes might we be making?

Prabhat Soni
4y
I may have misunderstood your question, so there's a chance this is a tangential answer. I think one mistake humans make is overconfidence in specific long-term predictions. By "specific" I mean things like predicting when a particular technology will arrive, when we will hit 3 degrees of warming, when we will hit 11 billion population, etc. I think the capacity of even smart humans to reasonably (e.g. with >50% accuracy) predict when a specific event will occur is somewhat low; I would estimate a horizon of around 20-40 years from when they are living.

You ask: "if you were alive in 1920 trying to have the largest possible impact today," what would you do? I would acknowledge that I cannot (with reasonable accuracy) predict the thing that will have "the largest possible impact in 2020" (which is a very specific thing to predict) and go with broad-based interventions (which are a more sure-shot answer), like improving international relations, promoting moral values, promoting education, promoting democracy, promoting economic growth, etc. (These are sub-optimal answers, but they're probably the best I could do.)
Buck
4y
I'd be interested to see a list of what kinds of systematic mistakes previous attempts at long-term forecasting made. Also, I think that many longtermists (e.g. me) think it's much more plausible to successfully influence the long-run future now than in the 1920s, because of the hinge of history argument.
KR
4y

Thanks for the links. I googled briefly before I wrote this to check my memory and couldn't find anything. I think what formed my impression was that even in very detailed conversations and writing on AI, there was, as far as I could tell, no mention or implicit acknowledgement of the possibility by default. On reflection, I'm not sure I would expect there to be even if people did think it was likely, though.

KR
4y

EA-style discussion about AI seems to dismiss out of hand the possibility that AI might be sentient. I can’t find an example, but the possibility seems generally scoffed at in the same tone people dismiss Skynet and killer robot scenarios. Bostrom’s simulation hypothesis, however, is broadly accepted as at the very least an interestingly plausible argument.

These two stances seem entirely incompatible - if silicon can create a whole world inside of which are sentient minds, why can’t it just create the minds with no need for the framing... (read more)

Aaron Gertler
4y
Many years ago, Eliezer Yudkowsky shared a short story I wrote (related to AI sentience) with his Facebook followers. The story isn't great -- I bring it up here only as an example of people being interested in these questions.
Buck
4y

I think there are many examples of EAs thinking about the possibility that AI might be sentient by default. Some examples I can think of off the top of my head:

... (read more)
KR
4y

An argument in favor of slow takeoff scenarios being generally safer is that we will get to see and experiment with the precursor AIs before they become capable of causing x-risks. But even if the behavior of this precursor AI is predictive of the superhuman AI's, our ability to make use of that depends on how people react to the potential dangers of this precursor AI. A society confident that there is no danger from increasing the capabilities of the machine that has been successfully running its electrical grid gains much less of an advantage from a slow takeoff (a... (read more)

Aaron Gertler
4y
I found this interesting, and I think it would be worth expanding into a full post if you felt like it! I don't think you'd need more content: just a few more paragraph breaks, maybe a brief summary, and maybe a few questions to guide responses. If you have questions you'd want readers to tackle, consider including them as comments after the post.