This is the full text of a post from "The Obsolete Newsletter," a Substack I write about the intersection of capitalism, geopolitics, and artificial intelligence. I’m a freelance journalist and the author of a forthcoming book called Obsolete: Power, Profit, and the Race to Build Machine Superintelligence. Consider subscribing to stay up to date with my work.
Wow. The Wall Street Journal just reported that "a consortium of investors led by Elon Musk is offering $97.4 billion to buy the nonprofit that controls OpenAI."
Technically, you can't buy a nonprofit, so I'm going to assume that Musk is trying to buy all of the nonprofit's assets, which include governing control over OpenAI's for-profit as well as rights to all profits above the company's profit caps.
OpenAI CEO Sam Altman already tweeted, "no thank you but we will buy twitter for $9.74 billion if you want." (Musk, for his part, replied with a single word: "Swindler.")
Even if Altman were willing, it's not clear whether this bid could go through. It's probably best understood as an attempt to throw a wrench into OpenAI's ongoing plan to restructure into a fully for-profit company. To complete the transition, OpenAI needs to compensate its nonprofit for the fair market value of what it is giving up.
In October, The Information reported that OpenAI was planning to give the nonprofit at least 25 percent of the new company, worth $37.5 billion at the time (implying a total company valuation of roughly $150 billion). But in late January, the Financial Times reported that the nonprofit might receive only around $30 billion, "but a final price is yet to be determined." That's still a lot of money, but many experts I've spoken with think it drastically undervalues what the nonprofit is giving up.
Musk has sued to block OpenAI's conversion, arguing that he would be irreparably harmed if it went through.
But while Musk's suit seems unlikely to succeed, his latest gambit might significantly drive up the price OpenAI has to pay.
(My guess is that Altman will still ma
I'd say we already have most of the solutions for climate change; they just need to be implemented (properly). AI could help with that, but the fossil fuel lobby could use it just as well, so I'm not sure it would actually get them implemented.
A lot of people, including within EA and 80,000 Hours, are very aware of the advantages that AI can bring. And that is also part of the problem: there are strong incentives to develop capable AI quickly, but too little attention is currently paid to the things that can go wrong. 80k is trying to get people to work on making AI safer, hence its focus on what can go wrong rather than on promoting even faster (and less safe) AI development.
I think you could say this about any problem. Instead of working on malaria prevention, freeing caged chickens, or stopping climate change, should we all just switch to working on AI so it can solve those problems for us?
I don't think so, because:

a. it's important to hedge our bets and try a range of things, in case AI is many decades away or doesn't work out; and

b. having lots more people working on AI won't necessarily make it come faster or turn out better (lots of people are already working on it).