This is the full text of a post from "The Obsolete Newsletter," a Substack that I write about the intersection of capitalism, geopolitics, and artificial intelligence. I'm a freelance journalist and the author of a forthcoming book called Obsolete: Power, Profit, and the Race to Build Machine Superintelligence. Consider subscribing to stay up to date with my work.
Wow. The Wall Street Journal just reported that, "a consortium of investors led by Elon Musk is offering $97.4 billion to buy the nonprofit that controls OpenAI."
Technically, they can't actually do that, so I'm going to assume that Musk is trying to buy all of the nonprofit's assets, which include governing control over OpenAI's for-profit, as well as all the profits above the company's profit caps.
OpenAI CEO Sam Altman already tweeted, "no thank you but we will buy twitter for $9.74 billion if you want." (Musk, for his part, replied with just the word: "Swindler.")
Even if Altman were willing, it's not clear if this bid could even go through. It can probably best be understood as an attempt to throw a wrench in OpenAI's ongoing plan to restructure fully into a for-profit company. To complete the transition, OpenAI needs to compensate its nonprofit for the fair market value of what it is giving up.
In October, The Information reported that OpenAI was planning to give the nonprofit at least 25 percent of the new company, worth $37.5 billion at the time. But in late January, the Financial Times reported that the nonprofit might only receive around $30 billion, "but a final price is yet to be determined." That's still a lot of money, but many experts I've spoken with think it drastically undervalues what the nonprofit is giving up.
Musk has sued to block OpenAI's conversion, arguing that he would be irreparably harmed if it went through.
But while Musk's suit seems unlikely to succeed, his latest gambit might significantly drive up the price OpenAI has to pay.
(My guess is that Altman will still ma
Remember to give feedback on the newsletter here: https://docs.google.com/forms/d/1QNmSdB4C5VlqiHZT0ucMhmSqQhXtgHPgHoqAtl0w2K8/viewform :)
I'm new to the EA Forum. It was suggested to me that I crosspost this LessWrong post criticizing Jeff Kaufman's speech at EA Global 2015 entitled 'Why Global Poverty?' on the EA forum, but I need 5 karma to make my first post.
EDIT: Here it is.
"And I would argue that any altruist is doing the same thing when they have to choose between causes before they can make observations. There are a million other things that the founders of the Against Malaria Foundation could have done, but they took the risk of riding on distributing bed nets, even though they had yet to see it actually work."
I think this point should be rewritten. I'm not sure what the "it" you're talking about here actually refers to.
Sorry about the confusion. I meant to say that even though the Against Malaria Foundation observes evidence of the effectiveness of its interventions all the time, and this is good, its founders had to choose an initial action before they had made any observations about the effectiveness of their interventions. Presumably there was some first village or region of trial subjects that first empirically demonstrated the effectiveness of durable, insecticidal bednets. Before that first experiment, the AMF presumably had to rely on correct reasoning alone, without corroborative observations to support their arguments. Nonetheless, their reasoning was correct. Experiment is a way to increase our confidence in our reasoning, and it is good to use it when it's available, but we can sometimes have justified confidence without it. I use these points to argue that people successfully reason without being able to test the effectiveness of their actions all the time, and that they often have to.
The more general point is that people often use a very simple heuristic to decide whether something academic is worthy of interest: is it based on evidence and empirical testing? 'Evidence-based medicine' is synonymous with 'safe, useful medicine,' depending on whom you ask. Things are bad if they are not based on evidence. But in the case of existential risk interventions, it is a property of the situation that we cannot empirically test the effectiveness of our interventions. It is thus necessary to reason without conducting empirical tests. To my mind, that difficulty is a reason to take the problem more seriously, whereas some others treat the 'lack of evidence-based methods' as a point against trying to solve the problem at all.
And in the case of some risks, like AI, it is actually dangerous to conduct empirical testing. It's plausible that sufficiently intelligent unsafe AIs would mimic safe AIs until they gain a decisive strategic advantage. See Bostrom's 'treacherous turn' for more on this.
This is an interesting discussion, with people listing high-earning careers which are comparatively easy to get: https://www.facebook.com/groups/effective.altruists/permalink/1002743319782025/
Or rather: people failing to list high-earning careers that are comparatively easy to get.
I think popularizing earning-to-give among people who are already in high-income professions or on high-income career trajectories is a very good strategy. But as career advice for young people interested in EA, it seems to be of rather limited utility.
What luck have the big EA charities (GiveWell and CEA come to mind as the obvious candidates) had with building up a non-EA donor base? (By which I mean one which wouldn't otherwise donate to what'd generally be considered EA picks, like GiveWell recommendations, meta charities, etc.)
Is there an old Facebook or Forum thread where people describe how many people they've 'recruited' to EA (to some extent, and in some shape or form)?