Sam's comment: https://twitter.com/sama/status/1790518031640347056
Jan Leike has also left: https://www.nytimes.com/2024/05/14/technology/ilya-sutskever-leaving-openai.html
Jan Leike, who ran the Super Alignment team alongside Dr. Sutskever, has also resigned from OpenAI. His role will be taken by John Schulman, another company co-founder.
I expect this was very much taken into account by the people who have quit, which makes their decision to quit anyway quite alarming.
From an EA perspective: yes, maybe.
But it's also a personal decision. If you're burnt out and fed up, or you can't bear to support an organization you disagree with, then you may be better off quitting.
Also, quitting in protest can be a way to convince an organization to change course. It's not always effective, but it certainly sends a strong message to leadership that you disapprove of what they're doing, which may at the very least get them thinking.
I've just thought of a counter-argument to my point. If OpenAI isn't safe, it may be worth trying to ensure a safer AI lab (say Anthropic) wins the race to AGI. So it might be worth suggesting that talented people go to Anthropic rather than OpenAI, even if it is to work on product or capabilities teams.
Yes: the worry is that if we send people to Anthropic with the aim of "winning an AI arms race", this will make it more likely that Anthropic starts to cut corners. Indeed, that is very close to the reasoning that caused OpenAI to exist, and it seems to be what has caused it to cut lots of corners.
Hmm, I don't see why ensuring the best people go to Anthropic necessarily means they will take safety less seriously. I can actually imagine the opposite effect: if Anthropic catches up or even overtakes OpenAI, their incentive to cut corners should decrease, because it becomes more likely that they can win the race without cutting corners. Right now their only hope of winning the race is to cut corners.
Ultimately what matters most is what the leadership's views are. I suspect that Sam Altman never really cared that much about safety, but my sense is that the Amodeis do.
Ultimately what matters most is what the leadership's views are.
I'm skeptical this is true, particularly as AI companies grow massively and require vast amounts of investment.
It does seem important, but it's unclear that it matters most.
How many safety-focused people have left since the board drama now? I count 7, but I might be missing some: Ilya Sutskever, Jan Leike, Daniel Kokotajlo, Leopold Aschenbrenner, Cullen O'Keefe, Pavel Izmailov, William Saunders.
This is a big deal. A bunch of the voices that could raise safety concerns at OpenAI when things really heat up are now gone. I don't know what happened behind the scenes, but they judged that now was a good time to leave.
Possible effective intervention: guaranteeing that if these people break their NDAs, all their legal fees will be covered. No idea how sensible this is, so agree/disagree voting encouraged.
It seems as if the potential damages could make the vast majority of defendants "judgment-proof" (meaning they lack the assets to satisfy the judgment).
I wonder about the ethics of an organization that had a policy of financially supporting people (post-bankruptcy) who made potentially extremely high-EV decisions that were personally financially ruinous.
I probably would be OK with that from an ethics standpoint. After all, I was not a party to the contracts in question. We celebrate (in appropriate circumstances) journalists who serve as conduits for actual classified information. Needless to say, I find the idea of being an enabler for the breach of contractual NDAs much less morally weighty than being an enabler for the breach of someone's oath to safeguard classified information.
Legally, such an organization would have to be careful to mitigate the risk of claims for tortious interference with contract and other theories that the AI company could come up with. Promising financial support prior to the leak might open the door for such claims; merely providing it (through a well-written trust) after the fact would probably be OK.
Worth noting he said he's "confident that OpenAI will build AGI that is both safe and beneficial under [current leadership]".
He could have said different nice things or just left out the bit about safety. Do you think he's straightforwardly lying to the public about what he believes?
Or maybe he's just being (probably knowingly) misleading? "Confident that OpenAI will build AGI that is both safe and beneficial" might mean 95% confidence in a safe and beneficial AGI from OpenAI, and 5% that it kills everyone.
https://x.com/janleike/status/1791498174659715494
Jan Leike posted this thread on why he resigned.
This is niche content and I'm 100% here for it.