This is a linkpost for https://ssi.inc/

"We plan to advance capabilities as fast as possible while making sure our safety always remains ahead.

This way, we can scale in peace.

Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures."


Is this a disturbing pattern? A disgruntled engineer leaves an AI org and starts a new one that claims to be more safety-oriented than the last. Then the forces of the market, greed, and power take over, and we are left with another competitive player in the high-stakes race.

Doesn't feel ideal, but I'm not part of this scene.

I don’t understand why we should trust Ilya after he played a very significant role in legitimising Sam’s return to OpenAI. If he had not endorsed it, the board’s resolve would’ve been a lot stronger. So I find it hard to believe him when he says ‘we will not bend to commercial pressures’, because in some sense, that is literally what he did.

My understanding, from purely public documents, is that the pressures were societal more than commercial.

But I agree with you two on the spirit.

Co-founder Daniel Gross’ thoughts on AI safety are at best unclear beyond this statement. Here is an article he wrote a year ago: The Climate Justice of AI Safety, and he’s also appeared on the Stratechery podcast a few times and spoken about AI safety once or twice. In this space, he’s most well known as an investor, including in Leopold Aschenbrenner’s fund.

I think it would be good for Daniel Gross & Daniel Levy to clarify their positions on AI safety, and what exactly ‘commercial pressure’ means (do they just care about short-term pressure and intend to profit immensely from AGI?).

(Disclosure: I received a ~$10k grant from Daniel in 2019 that was AI-related)

Beware safety washing:

An increasing number of people believe that developing powerful AI systems is very dangerous, so companies might want to show that they are being “safe” in their work on AI.

Being safe with AI is hard and potentially costly, so if you’re a company working on AI capabilities, you might want to overstate the extent to which you focus on “safety.”

I wonder how they plan to get GPUs at scale while remaining "insulated from short-term commercial pressures".
