
Arthur Conmy

6 karma · Joined Jul 2021

Comments (8)

I think this post provides some pretty useful arguments about the downsides of pausing AI development. I feel noticeably more pessimistic about a pause going well having read this.

However, I don't agree with some of the arguments about alignment optimism, and think they're a fair bit weaker. For example:

"When it comes to AIs, we are the innate reward system"

Sure, we can use RLHF and related techniques to steer AI behavior. Further:

"[gradient descent] is almost impossible to trick"

Sure, unlike in most cases in biology, ANN updates do act on the whole model, without noise, etc.
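For what it's worth, here's a minimal toy sketch (my own illustration, not something from the post) of what that cashes out to: a single gradient-descent step computes an exact gradient for every parameter and updates all of them together, with no parameter left out and no noisy proxy doing the credit assignment.

```python
# Toy illustration (hypothetical example): one SGD step acts on the whole model.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x, y = torch.randn(16, 4), torch.randn(16, 1)
loss = nn.functional.mse_loss(model(x), y)

opt.zero_grad()
loss.backward()  # exact gradient with respect to every parameter
opt.step()       # every parameter is updated in the same step

# Unlike selection in biology, no parameter is skipped and the update
# is not mediated by a noisy proxy signal.
print(all(p.grad is not None for p in model.parameters()))  # True
```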

But the worries about what happens when AIs become predictably harder to evaluate as they reach superhuman performance on more tasks are still very real, given all of this! You mention scalable oversight research, so it's clear you are aware this is an open problem, but I don't think this post emphasises enough that most alignment work recognises a pretty big difference between aligning subhuman systems and aligning superhuman systems, which limits how much optimism you can get from GPT-4 seeming basically aligned. I think it's possible that with tons of compute and aligned weaker AIs (as you touch upon) we can generalize to an aligned GPT-5, GPT-6, etc. But this feels like a pretty different paradigm from the various analogies to the natural world and from the current state of alignment!

On a macro level, you could consider extreme AI safety asks followed by moderate asks to be an example of the door-in-the-face technique (which has a psychological basis and seems to have replicated).

See also https://www.lesswrong.com/posts/fqryrxnvpSr5w2dDJ/touch-reality-as-soon-as-possible-when-doing-machine, which expands on "hands-on" experience in alignment.

I don't know of any writing that directly contradicts these claims. I think https://www.lesswrong.com/s/v55BhXbpJuaExkpcD/p/3pinFH3jerMzAvmza indirectly contradicts them, as it broadly criticizes most empirical approaches and is more open to conceptual approaches.

For capabilities things, https://dblalock.substack.com/ is pretty good (though I find some of the things the author is very excited about underwhelming).

EDIT: weekly quick summaries of papers

There are some recent posts, for example this one, that are just the intro and outro (22 seconds long) and miss the main post. It would be great if this bug could be fixed.

What were/are your basic and relevant questions? What were AIS folks missing?

I liked this post because I've been thinking about similar issues recently, but I find some of the conclusions strange. For example, isn't there a "generalised trolley problem" for any deontologist who asserts that rule X should be followed? Namely:

"Aha! So you follow rule X? Well, what if I told you that person over there will violate rule X twice unless you break rule X in the next 5 minutes?"

Why is this relevant? I don't think the deontologist throws up their hands upon hearing any example of the above and renounces their theory; I think they add another rule that allows them to violate their former rule.* I think more needs to be done to prove that the boundary cases for utilitarianism are wild; such cases are not out of the ordinary for deontological ethics either.

* And I see this as about as wild as when the utilitarian doesn't voluntarily harvest organs because of "societal factors", and has to add this to their utility function (see https://www.utilitarianism.net/objections-to-utilitarianism/rights).

This is great and you should make a LW post; these are in a really nice format for shunting around. 

As a small nit: any idea why the first few essays of the Codex (https://www.lesswrong.com/codex) are not here?