This is the full text of a post from "The Obsolete Newsletter," a Substack I write about the intersection of capitalism, geopolitics, and artificial intelligence. I’m a freelance journalist and the author of a forthcoming book called Obsolete: Power, Profit, and the Race to Build Machine Superintelligence. Consider subscribing to stay up to date with my work.
Wow. The Wall Street Journal just reported that "a consortium of investors led by Elon Musk is offering $97.4 billion to buy the nonprofit that controls OpenAI."
Technically, they can't actually do that: you can't buy a nonprofit. So I'm going to assume that Musk is trying to buy all of the nonprofit's assets, which include governing control over OpenAI's for-profit arm, as well as rights to all profits above the company's profit caps.
OpenAI CEO Sam Altman already tweeted, "no thank you but we will buy twitter for $9.74 billion if you want." (Musk, for his part, replied with a single word: "Swindler.")
Even if Altman were willing, it's not clear whether this bid could even go through. It's probably best understood as an attempt to throw a wrench into OpenAI's ongoing plan to restructure into a fully for-profit company. To complete the transition, OpenAI needs to compensate its nonprofit for the fair market value of what it is giving up.
In October, The Information reported that OpenAI was planning to give the nonprofit at least 25 percent of the new company, a stake worth $37.5 billion at the time (implying a $150 billion valuation for the company as a whole). But in late January, the Financial Times reported that the nonprofit might only receive around $30 billion, "but a final price is yet to be determined." That's still a lot of money, but many experts I've spoken with think it drastically undervalues what the nonprofit is giving up.
Musk has sued to block OpenAI's conversion, arguing that he would be irreparably harmed if it went through.
But while Musk's suit seems unlikely to succeed, his latest gambit might significantly drive up the price OpenAI has to pay.
(My guess is that Altman will still ma