Was community director of EA Netherlands; had to quit due to long covid.
I have a background in philosophy, risk analysis, and moral psychology. I also did some x-risk research.
This looks ever more unlikely. I guess I didn't properly account for:
Nevertheless, I think speculating on internal politics can be a valuable exercise - being able to model the actions & power of strong bargainers (including bad-faith ones) seems like a useful skill for EA.
Gwern had a really great comment on this, suggesting that Sam was planning a board coup and the independent directors managed to act first.
Pure speculation (maybe just cope): this was all part of the board's plan. They knew they couldn't fire Altman without huge backlash. They wanted a stronger negotiating position to install a safety-conscious board & gain more leverage over Altman. They had no other tools. Perhaps they were willing to let OpenAI collapse if negotiations failed. Toner certainly mentioned that 'letting OpenAI collapse could be in line with the charter'. They expected to probably not maintain board seats themselves. They probably underestimated the amount of public backlash and may have made some tactical mistakes. Microsoft & OAI employees probably propped up Altman's bargaining power quite a bit.
We'll have to see what they ended up negotiating. I would be somewhat surprised if they didn't extract anything from those negotiations.
Note that the agreement is 'in principle'. The board hasn't yet given up its formal power (?)
I think a lot will still depend on the details of the negotiation: who will be on the new board and how safety-conscious will they be? The 4-person board had a strong bargaining chip: the potential collapse of OpenAI. They may have been able to leverage that (after all, it was a very credible threat after the firing: a costly signal!), or they may have been scared off by the reputational damage to EA & AI Safety of doing this. Altman & co. surely played that part well.
The New York Times suggests a more nuanced picture: https://archive.li/lrLzK
Altman was critical of Toner's recent paper, discussed ousting her, and wanted to expand the board. The board disagreed on which people to add, leading to a stalemate. Ilya suddenly changed position, and the board took abrupt action.
They don't offer an explanation of what the 'dishonesty' would have been about.
This is the paper in question, which I think will be getting a lot of attention now: https://cset.georgetown.edu/publication/decoding-intentions/
How can policymakers credibly reveal and assess intentions in the field of artificial intelligence? AI technologies are evolving rapidly and enable a wide range of civilian and military applications. Private sector companies lead much of the innovation in AI, but their motivations and incentives may diverge from those of the state in which they are headquartered. As governments and companies compete to deploy ever more capable systems, the risks of miscalculation and inadvertent escalation will grow. Understanding the full complement of policy tools to prevent misperceptions and communicate clearly is essential for the safe and responsible development of these systems at a time of intensifying geopolitical competition.
In this brief, we explore a crucial policy lever that has not received much attention in the public debate: costly signals.
There's a decent history of the board changes at OpenAI here: https://loeber.substack.com/p/a-timeline-of-the-openai-board
I think the point that Toner & McCauley are conflicted because of OpenPhil/Holden's connections to Anthropic is a pretty weak argument. But the facts are all verified & pretty basic.
A number of things stand out:
I'm also very curious if anyone knows more about how McCauley came to be on the board? And more information about her generally. I hadn't heard of her before, and she's apparently an important player now (also in EA, as an EV UK board member).
Oh wow, that last paragraph seems like a good sign that they have solid grounds for the statements they're not walking back.
Thought this was a good article on Microsoft's power: https://archive.li/soZMQ
It is unclear if OpenAI could continue as a going concern without continual cash inflows from Microsoft. While OpenAI is, according to reports, making about $80 million per month currently and may be on track to make $1 billion in revenue in 2023—ten times more than it anticipated when it secured an additional $10 billion funding commitment from Microsoft in January—it is not known if the company is profitable or what its burn rate is. But it is likely to be fast. The company lost $540 million in 2022 on revenue of less than $30 million for the entire year, according to documents seen by Fortune. If its costs have also ramped up in line with revenues, the company would need continual support from Microsoft just to keep operating.
Furthermore, OpenAI is entirely dependent on Microsoft’s cloud computing datacenters to both train and run its models. The global shortage of graphics processing units (GPUs), the specialized computer chips needed to train and run large AI models, and the size of OpenAI’s business, with tens of millions of paying customers dependent on those models, mean that the San Francisco AI company cannot easily port its business to another cloud service provider.
I think it's premature to judge things based on the little information that's currently available. I would be surprised if there weren't reasons for the board's unconventional choices. (I'm not ruling out, though, that what you say ends up being right.)
A pretty poor piece of journalism in my opinion. It gets a number of facts wrong. For example: