This is the full text of a post from "The Obsolete Newsletter," a Substack that I write about the intersection of capitalism, geopolitics, and artificial intelligence. I'm a freelance journalist and the author of a forthcoming book called Obsolete: Power, Profit, and the Race to Build Machine Superintelligence. Consider subscribing to stay up to date with my work.
Wow. The Wall Street Journal just reported that "a consortium of investors led by Elon Musk is offering $97.4 billion to buy the nonprofit that controls OpenAI."
Technically, they can't actually do that, so I'm going to assume that Musk is trying to buy all of the nonprofit's assets, which include governing control over OpenAI's for-profit, as well as all the profits above the company's profit caps.
OpenAI CEO Sam Altman already tweeted, "no thank you but we will buy twitter for $9.74 billion if you want." (Musk, for his part, replied with just the word: "Swindler.")
Even if Altman were willing, it's not clear whether this bid could even go through. It's probably best understood as an attempt to throw a wrench in OpenAI's ongoing plan to restructure fully into a for-profit company. To complete the transition, OpenAI needs to compensate its nonprofit for the fair market value of what it is giving up.
In October, The Information reported that OpenAI was planning to give the nonprofit at least 25 percent of the new company, worth $37.5 billion at the time. But in late January, the Financial Times reported that the nonprofit might only receive around $30 billion, "but a final price is yet to be determined." That's still a lot of money, but many experts I've spoken with think it drastically undervalues what the nonprofit is giving up.
Musk has sued to block OpenAI's conversion, arguing that he would be irreparably harmed if it went through.
But while Musk's suit seems unlikely to succeed, his latest gambit might significantly drive up the price OpenAI has to pay.
(My guess is that Altman will still ma
These are some interesting thoughts.
I think OSINT is a good method for various types of enforcement, especially because the general public can aid in gathering evidence to send to regulators. This happens a lot in the animal welfare space AFAIK, though someone with experience here please feel free to correct me. I know Animal Rising recently used OSINT to gather evidence of 280 legal breaches in the livestock industry, which they handed to DEFRA, which is pretty cool. It's especially notable given that these were RSPCA-endorsed farms, so it showed that the stakeholder vetting (pun unintended) was failing. This only happened 3 days ago, so the link may expire, but here is an update.
For AI this is often a bit less effective, but it's still useful. A lot of the models in nuclear, policing, natsec, defence, or similar areas are likely to be protected in a way that makes OSINT difficult, though I've used it before for AI Governance impact. The issue is that even if you find something, a DSMA-Notice or similar can be used to stop publication. You said "Information on AI development gathered through OSINT could be misused by actors with their own agenda," which is almost word for word the reason the data is often protected in the first place, haha. So you're 100% right that in these sectors OSINT can be super useful for AI Governance but may fall at later hurdles.
However, commercial AI is much more prone to OSINT because there's no real lever to stop you publishing what you find. In my experience, you can usually use the supply chain as a fantastic source of OSINT, depending on how dedicated you are. That's been a major AI Governance theme in the instances I've been involved in on both sides of this.