Sam's comment: https://twitter.com/sama/status/1790518031640347056

Jan Leike has also left: https://www.nytimes.com/2024/05/14/technology/ilya-sutskever-leaving-openai.html

Jan Leike, who ran the Super Alignment team alongside Dr. Sutskever, has also resigned from OpenAI. His role will be taken by John Schulman, another company co-founder.

This raises the question of whether 80,000 Hours should still recommend that people join OpenAI.

Even if OpenAI has gone somewhat off the rails, should we want more or fewer safety-conscious people at OpenAI? I would imagine more.

I expect this was very much taken into account by the people that have quit, which makes their decision to quit anyway quite alarming.

Does this not imply that all the people who quit recently shouldn't have?

From an EA perspective: yes, maybe.

But it's also a personal decision. If you're burnt out and fed up, or you can't bear to support an organization you disagree with, then you may be better off quitting.

Also, quitting in protest can be a way to convince an organization to change course. It's not always effective, but it's certainly a strong message to leadership that you disapprove of what they're doing, which may at the very least get them thinking.

I've just thought of a counter-argument to my point. If OpenAI isn't safe, it may be worth trying to ensure that a safer AI lab (say Anthropic) wins the race to AGI. So it might be worth suggesting that talented people go to Anthropic rather than OpenAI, even if it's to join product or capabilities teams.

That sounds like the way OpenAI got started.

What are you suggesting? That if we direct safety-conscious people to Anthropic, it will make it more likely that Anthropic will start to cut corners? I'm not sure what your point is.

Yes, that if we send people to Anthropic with the aim of "winning an AI arms race" that this will make it more likely that Anthropic will start to cut corners. Indeed, that is very close to the reasoning that caused OpenAI to exist and what seems to have caused it to cut lots of corners.

Hmm, I don't see why ensuring the best people go to Anthropic necessarily means they will take safety less seriously. I can actually imagine the opposite effect: if Anthropic catches up to or even overtakes OpenAI, their incentive to cut corners should actually decrease, because it becomes more likely that they can win the race without cutting corners. Right now their only hope of winning the race is to cut corners.

Ultimately what matters most is what the leadership's views are. I suspect that Sam Altman never really cared that much about safety, but my sense is that the Amodeis do.

Yeah, I don't think this is a crazy take. I disagree with it based on having thought about it for many years, but yeah, I agree that it could make things better (though I don't expect it would and would instead make things worse).

Ultimately what matters most is what the leadership's views are.

I'm skeptical this is true, particularly as AI companies grow massively and require vast amounts of investment.

It does seem important, but unclear it matters most.

How many safety-focused people have left since the board drama now? I count seven, but I might be missing some: Ilya Sutskever, Jan Leike, Daniel Kokotajlo, Leopold Aschenbrenner, Cullen O'Keefe, Pavel Izmailov, William Saunders.

This is a big deal. A bunch of the voices that could raise safety concerns at OpenAI when things really heat up are now gone. Idk what happened behind the scenes, but they judged that now was a good time to leave.

Possible effective intervention: guaranteeing that if these people break their NDAs, all their legal fees will be covered. No idea how sensible this is, so agree/disagree voting encouraged.

Legal fees may not be these individuals' biggest exposure (assuming they have non-disclosure / non-disparagement agreements). That would be damages for breaking the NDA, which could be massive depending on the effects on OpenAI's reputation.

It seems as if the potential of the damages could make the vast majority of defendants "judgment-proof" (meaning they lack the assets to satisfy the judgment).

I wonder about the ethics of an organization that had a policy of financially supporting people (post-bankruptcy) who made potentially extremely high-EV decisions that were personally financially ruinous.

I probably would be OK with that from an ethics standpoint. After all, I was not a party to the contracts in question. We celebrate (in appropriate circumstances) journalists who serve as conduits for actual classified information. Needless to say, I find the idea of being an enabler for the breach of contractual NDAs much less morally weighty than being an enabler for the breach of someone's oath to safeguard classified information.

Legally, such an organization would have to be careful to mitigate the risk of claims for tortious interference with contract and other theories that the AI company could come up with. Promising financial support prior to the leak might open the door for such claims; merely providing it (through a well-written trust) after the fact would probably be OK.

Shakeel provides a helpful list of all the people who have recently quit / been purged:

1. Ilya Sutskever 

2. Jan Leike 

3. Leopold Aschenbrenner 

4. Pavel Izmailov 

5. William Saunders 

6. Daniel Kokotajlo 

7. Cullen O'Keefe

https://twitter.com/ShakeelHashim/status/1790685752134656371

Worth noting he said he's "confident that OpenAI will build AGI that is both safe and beneficial under [current leadership]".

These kinds of resignation messages might not be very informative though. There are probably incentives to say nice things about each other.

He could have said different nice things or just left out the bit about safety. Do you think he's straightforwardly lying to the public about what he believes?

Or maybe he's just being (probably knowingly) misleading? "Confident that OpenAI will build AGI that is both safe and beneficial" might mean 95% confidence in safe, beneficial AGI from OpenAI, and 5% that it kills everyone.
