OscarD

971 karma · Joined · Working (0-5 years) · Oxford, UK

Comments (154)

Thanks, fixed. I was basing this off of Table 1 (page 20) in the original but I suppose Leopold meant the release year there.

fyi for everyone interested in Leopold's report but intimidated by its length, I am currently writing a detailed summary, and expect to post it to the Forum in the next day or two. I will update this comment once I have done so.

I would be interested in @Greg_Colbourn's thoughts here! Possibly part of the value is in generating discussion and publicly defending a radical idea, rather than just the monetary EV. But if so maybe a smaller bet would have made sense.

When you say 'AI concerned' does that mean you would be interested in taking Greg's side of the bet (that everyone will die)? That is my interpretation, but the fact that you didn't say this explicitly makes me unsure.

Great to see this public demonstration of both of your respective beliefs!

Thanks for the comment (and welcome to the Forum! :) ). Yeah, using conditional oughts seems like a pretty reasonable approach to me, though it does have some convenience cost when the Z is very widely shared ('you ought to fix your brakes rather than drive without brakes, in order to not crash'), in which case the Z can perhaps just be left implied.

Great post, and an interesting counterfactual history!

Hooray for moral trade.

Evolutionary debunking arguments feel relevant re the causal history of our beliefs.

One thing I have heard is that having long-ish application stages provides value by getting more people to think about relevant topics (I have heard this from at least two orgs, I think). E.g. having several hundred people spend an hour writing a paragraph about an AI safety topic might be good simply by virtue of getting more people to think more about these topics. I haven't seen a write-up weighing the pros and cons of this, though. I agree it can be bad for applicants.

Nice post!

We might then expect a lot of powerful attempts to change prevailing ‘human’ values, prior to the level of AI capabilities where we might have worried a lot about AI taking over the world. If we care about our values, this could be very bad. 

This seems like a key point to me, and one that it is hard to get good evidence on. The red stripes are rather benign, so in a world like that we are in luck. But if the AI values something in a more totalising way (not just satisficing, with a lot of x's and red stripes being enough, but striving to make all humans spend all their time making x's and stripes), that seems problematic for us. Perhaps it depends on how 'grabby' the values are, and therefore how compatible they are with a liberal, pluralistic, multipolar world.
