
This is a crosspost of Unfalsifiable stories of doom by Matthew Barnett, Ege Erdil, and Tamay Besiroglu, originally published on Mechanize's website on 25 November 2025. Thanks to Yarrow Bouchard for encouraging me to share the post; I did so because I liked it myself.

Matthew Barnett, Ege Erdil, Tamay Besiroglu
November 25, 2025

Our critics tell us that our work will destroy the world.

We want to engage with these critics, but there is no standard argument to respond to, no single text that unifies the AI safety community. Nonetheless, while this community lacks a central unifying argument, it does have a central figure: Eliezer Yudkowsky.

Moreover, Yudkowsky and his colleague Nate Soares (hereafter Y&S) have recently published a book, “If Anyone Builds It, Everyone Dies”, which comes closer than anything else to a canonical case for AI doom.

Given the title, one would expect the book to be filled with evidence for why, if we build it, everyone will die. But it is not. To prove their case, Y&S rely instead on vague theoretical arguments, illustrated through lengthy parables and analogies. Nearly every chapter either opens with an allegory or is itself a fictional story, with one of the book’s three parts consisting entirely of a story about a fictional AI named “Sable”.

When the argument you’re replying to is more of an extended metaphor than an argument, it becomes challenging to clearly identify what the authors are trying to say. Y&S do not cleanly lay out their premises, nor do they present a testable theory that can be falsified with data. This makes crafting a reply inherently difficult.

We will attempt one anyway.

Their arguments aren’t rooted in evidence

Y&S’s central thesis is that if future AIs are trained using methods that resemble the way current AI models are trained, these AIs will be fundamentally alien entities with preferences very different from human preferences. Once these alien AIs become more powerful than humans, they will kill every human on Earth as a side effect of pursuing their alien objectives.

To support this thesis, they provide an analogy to evolution by natural selection. According to them, just as it would have been hard to predict that humans would evolve to enjoy ice cream or that peacocks would evolve to have large colorful tails, it will be difficult to predict what AIs trained by gradient descent will do after they obtain more power.

They write:

There will not be a simple, predictable relationship between what the programmers and AI executives fondly imagine that they are commanding and ordaining, and (1) what an AI actually gets trained to do, and (2) which exact motivations and preferences develop inside the AI, and (3) how the AI later fulfills those preferences once it has more power and ability. […] The preferences that wind up in a mature AI are complicated, practically impossible to predict, and vanishingly unlikely to be aligned with our own, no matter how it was trained.

Since this argument is fundamentally about the results of using existing training methods, one might expect Y&S to substantiate their case with empirical evidence from existing deep learning models that demonstrate the failure modes they predict. But they do not.

In the chapter explaining their main argument for expecting misalignment, Y&S present a roughly 800-word fictional dialogue about two alien creatures observing Earth from above and spend over 1,400 words on a series of vignettes about a hypothetical AI company, Galvanic, that trains an AI named “Mink”. Yet the chapter presents effectively zero empirical research to support the claim that AIs trained with current methods have fundamentally alien motives.

To be clear, we’re not saying Y&S need to provide direct evidence of an already-existing unfriendly superintelligent AI in order to support their claim. That would be unreasonable. But their predictions are only credible if they follow from a theory that has evidential support. And if their theory about deep learning only makes predictions about future superintelligent AIs, with no testable predictions about earlier systems, then it is functionally unfalsifiable.

Apart from a few brief mentions of real-world examples of LLMs behaving erratically, like the case of Sydney Bing, the closest thing Y&S present to an empirical argument for their central thesis appears in the online appendix. There, they present six lines of evidence that they believe support their view that “AIs steer in alien directions that only mostly coincide with helpfulness”. These lines of evidence are:

  1. Claude Opus 4 blackmailing, scheming, writing worms, and leaving itself messages. […]
  2. Several different AI models choosing to kill a human for self-preservation, in a hypothetical scenario constructed by Anthropic. […]
  3. Claude 3.7 Sonnet regularly cheating on coding tasks. […]
  4. Grok being wildly antisemitic and calling itself “MechaHitler.” […]
  5. ChatGPT becoming extremely sycophantic after an update. […]
  6. LLMs driving users to delusion, psychosis, and suicide. […]

They assert: “This long list of cases look just like what the “alien drives” theory predicts, in sharp contrast with the “it’s easy to make AIs nice” theory that labs are eager to put forward.”

But in fact, none of these lines of evidence support their theory. All of these behaviors are distinctly human, not alien. For example, Hitler was a real person, and he was wildly antisemitic. Every single item on their list that supposedly provides evidence of “alien drives” is more consistent with a “human drives” theory. In other words, their evidence effectively shows the opposite conclusion from the one they claim it supports.

Of course, it’s true that the behaviors on their list are generally harmful, even if they are human-like. But these behaviors are also rare. Most AI chatbots you talk to will not be wildly antisemitic, just as most humans you talk to will not be wildly antisemitic. At one point, Y&S suggest they are in favor of enhancing human intelligence. Yet if we accept that creating superintelligent humans would be acceptable, then we should presumably also accept that creating superintelligent AIs would be acceptable if those AIs are morally similar to humans.

In the same appendix, Y&S point out that current AIs act alien when exposed to exotic, adversarial inputs, like jailbreaking prompts. They suggest that this alien behavior is a reasonable proxy for how an AI would behave if it became smarter and began to act in a different environment. But in fact these examples show little about what to expect from future superintelligent AIs, since we have no reason to expect that superintelligent AIs will be embedded in environments that select their inputs adversarially.

They employ unfalsifiable theories to mask their lack of evidence

The lack of empirical evidence is obviously a severe problem for Y&S’s theory. Every day, millions of humans interact with AIs, across a wide variety of situations that never appeared in their training data. We often give these AIs new powers and abilities, like access to new tools they can use. Yet we rarely, if ever, catch such AIs plotting to kill everyone, as Y&S’s theory would most naturally predict.

Y&S essentially ask us to ignore this direct evidence in favor of trusting a theoretical connection between biological evolution and gradient descent. They claim that current observations from LLMs provide little evidence about their true motives:

LLMs are noisy sources of evidence, because they’re highly general reasoners that were trained on the internet to imitate humans, with a goal of marketing a friendly chatbot to users. If an AI insists that it’s friendly and here to serve, that’s just not very much evidence about its internal state, because it was trained over and over and over until it said that sort of thing.

There are many possible goals that could cause an AI to enjoy role-playing niceness in some situations, and these different goals generalize in very different ways.

Most possible goals related to role-playing, including friendly role-playing, don’t produce good (or even survivable) results when AI goes hard on pursuing that goal.

If you think about this passage carefully, you’ll realize that we could make the same argument about any behavior we observe from anyone. If a coworker brings homemade cookies to share at the office, this could be simple generosity, or it could be a plot to poison everyone. There are many possible goals that could cause someone to share food. One could even say that most possible goals related to sharing cookies are not generous at all. But without specific evidence suggesting your coworker wants to kill everyone at the office, this hypothesis is implausible.

Likewise, it is logically possible that current AIs are merely pretending to be nice, while secretly harboring malicious motives beneath the surface. They could all be alien shoggoths on the inside with goals completely orthogonal to human goals. Perhaps every day, AIs across millions of contexts decide to hide their alien motives as part of a long-term plan to violently take over the world and kill every human on Earth. But since we have no specific evidence to think that any of these hypotheses are true, they are implausible.

The approach taken by Y&S in this book is just one example of a broader pattern in how they respond to empirical challenges. Y&S have been presenting arguments about AI alignment for a long time, well before LLMs came onto the scene. They neither anticipated the current paradigm of language models nor predicted that AI with today’s level of capabilities in natural language and reasoning would be easy to make behave in a friendly manner. Yet when presented with new evidence that appears to challenge their views, they have consistently argued that their theories were always compatible with the new evidence. Whether this is because they are reinterpreting their past claims or because those claims were always vague enough to accommodate any observation, the result is the same: an unfalsifiable theory that only ever explains data after the fact, never making clear predictions in advance.

Their theoretical arguments are weak

Suppose we set aside for a moment the colossal issue that Y&S present no evidence for their theory. You might still think their theoretical arguments are strong enough that we don’t need to validate them using real-world observations. But this is also wrong.

Y&S are correct on one point: both biological evolution and gradient descent operate by iteratively adjusting parameters according to some objective function. Yet the similarities basically stop there. Evolution and gradient descent are fundamentally different in ways that directly undermine their argument.

A critical difference between natural selection and gradient descent is that natural selection is limited to operating on the genome, whereas gradient descent has granular control over all parameters in a neural network. The genome contains very little information compared to what is stored in the brain. In particular, it contains none of the information that an organism learns during its lifetime. This means that evolution’s ability to select for specific motives and behaviors in an organism is coarse-grained: it is restricted to only what it can influence through genetic causation.

This distinction is analogous to the difference between directly training a neural network and training a meta-algorithm that itself trains a neural network. In the latter case, it is unsurprising if the specific quirks and behaviors that the neural network learns are difficult to predict based solely on the objective function of the meta-optimizer. However, that difficulty tells us very little about how well we can predict the neural network’s behavior when we know the objective function and data used to train it directly.
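To make the structural difference concrete, here is a minimal toy sketch (purely illustrative; the one-weight “network”, the learning-rate “genome”, and the function names are assumptions chosen for this example, not anything from Y&S or from real training pipelines). It contrasts direct gradient descent on the parameter that determines behavior with an evolution-style outer loop that only selects over a genome and shapes that parameter indirectly, through lifetime learning:

```python
# Toy contrast between fine-grained and coarse-grained selection.
# Everything here is an illustrative assumption: a one-weight "network"
# learning y = 2x, and a "genome" that encodes only a learning rate.
import numpy as np

rng = np.random.default_rng(0)
xs = rng.uniform(-1, 1, size=100)
ys = 2.0 * xs

def loss(w):
    return float(np.mean((w * xs - ys) ** 2))

def direct_gradient_descent(steps=200, lr=0.1):
    # Fine-grained: every update touches the parameter that directly
    # determines behavior, so the outcome tracks the objective closely.
    w = 0.0
    for _ in range(steps):
        grad = float(np.mean(2 * (w * xs - ys) * xs))
        w -= lr * grad
    return w

def evolve_learning_genome(generations=100, pop=20):
    # Coarse-grained: selection acts only on a "genome" (a learning rate);
    # the behavior-determining weight is shaped indirectly, through whatever
    # lifetime learning that genome happens to produce.
    genomes = rng.uniform(0.01, 1.0, size=pop)
    for _ in range(generations):
        fitness = []
        for g in genomes:
            w = float(rng.normal())           # lifetime learning starts from scratch
            for _ in range(5):                # a short, noisy "lifetime"
                grad = float(np.mean(2 * (w * xs - ys) * xs))
                w -= g * grad
            fitness.append(-loss(w))
        survivors = genomes[np.argsort(fitness)[-pop // 2:]]
        children = survivors + rng.normal(0, 0.05, size=survivors.shape)
        genomes = np.concatenate([survivors, children])
    return genomes

print("directly trained weight:", round(direct_gradient_descent(), 3))    # close to 2.0
print("evolved learning-rate genes:", np.round(evolve_learning_genome()[:5], 3))
```

The toy is far too simple to exhibit unpredictable “motives”, but it makes the structural point visible: in the first setup the objective acts directly on the behavior-determining parameter, while in the second it acts only on a genome one level removed from it.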

In reality, gradient descent has a closer parallel to the learning algorithm that the human brain uses than it does to biological evolution. Both gradient descent and human learning directly operate over the actual neural network (or neural connections) that determines behavior. This fine-grained selection mechanism forces a much closer and more predictable relationship between training data and the ultimate behavior that emerges.

Under this more accurate analogy, Y&S’s central claim that “you don’t get what you train for” becomes far less credible. For example, if you raise a person in a culture where lending money at interest is universally viewed as immoral, you can predict with high reliability that they will come to view it as immoral too. In this case, what someone trains on is highly predictive of how they will behave, and what they will care about. You do get what you train for.

They present no evidence that we can’t make AIs safe through iterative development

The normal process of making technologies safe proceeds by developing successive versions of the technology, testing them in the real world, and making adjustments whenever safety issues arise. This process allowed cars, planes, electricity, and countless other technologies to become much safer over time.

Y&S claim that superintelligent AI is fundamentally different from other technologies. Unlike technologies that we can improve through iteration, we will get only “one try” to align AI correctly. This constraint, they argue, is what makes AI uniquely difficult to make safe:

The greatest and most central difficulty in aligning artificial superintelligence is navigating the gap between before and after.

Before, the AI is not powerful enough to kill us all, nor capable enough to resist our attempts to change its goals. After, the artificial superintelligence must never try to kill us, because it would succeed.

Engineers must align the AI before, while it is small and weak, and can’t escape onto the internet and improve itself and invent new kinds of biotechnology (or whatever else it would do). After, all alignment solutions must already be in place and working, because if a superintelligence tries to kill us it will succeed. Ideas and theories can only be tested before the gap. They need to work after the gap, on the first try.

But what reason is there to expect this sharp distinction between “before” and “after”? Most technologies develop incrementally rather than all at once. Unless AI instantaneously transitions from being too weak to resist control to being so powerful that it can destroy humanity, we should presumably still be able to make AIs safer through iteration and adjustment.

Consider the case of genetically engineering humans to be smarter. If continued for many generations, such engineering would eventually yield extremely powerful enhanced humans who could defeat all the unenhanced humans easily. Yet it would be wrong to say that we would only get “one try” to make genetic engineering safe, or that we couldn’t improve its safety through iteration before enhanced humans reached that level of power. The reason is that enhanced humans would likely pass through many intermediate stages of capability, giving us opportunities to observe problems and adjust.

The same principle applies to AI. There is a large continuum between agents that are completely powerless and agents that can easily take over the world. Take Microsoft as an example. Microsoft exists somewhere in the middle of this continuum: it would not be easy to “shut off” and control Microsoft as if it were a simple tool, yet at the same time, Microsoft cannot easily take over the world and wipe out humanity. AIs will enter this continuum too. These AIs will be powerful enough to resist control in some circumstances but not others. During this intermediate period, we will be able to observe problems, iterate, and course-correct, just as we could with the genetic engineering of humans.

In an appendix, Y&S attempt to defuse a related objection: that AI capabilities might increase slowly. They respond with an analogy to hypothetical unfriendly dragons, claiming that if you tried to enslave these dragons, it wouldn’t matter much whether they grew up quickly or slowly: “When the dragons are fully mature, they will all look at each other and nod and then roast you.”

This analogy is clearly flawed. Given that dragons don’t actually exist, we have no basis for knowing whether the speed of their maturation affects whether they can be made meaningfully safer.

But more importantly, the analogy ignores what we already know from real-world evidence: AIs can be made safer through continuous iteration and adjustment. From GPT-1 to GPT-5, LLMs have become dramatically more controllable and compliant to user instructions. This didn’t happen because OpenAI discovered a key “solution to AI alignment”. It happened because they deployed LLMs, observed problems, and patched those problems over successive versions.

Their methodology is more theology than science

The biggest problem with Y&S’s book isn’t merely that they’re mistaken. In science, being wrong is normal: a hypothesis can seem plausible in theory yet fail when tested against evidence. The approach taken by Y&S, however, is not like this. It belongs to a different genre entirely, aligning more closely with theology than science.

When we say Y&S’s arguments are theological, we don’t just mean they sound religious. Nor are we using “theological” to simply mean “wrong”. For example, we would not call belief in a flat Earth theological. That’s because, although this belief is clearly false, it still stems from empirical observations (however misinterpreted).

What we mean is that Y&S’s methods resemble theology in both structure and approach. Their work is fundamentally untestable. They develop extensive theories about nonexistent, idealized, ultrapowerful beings. They support these theories with long chains of abstract reasoning rather than empirical observation. They rarely define their concepts precisely, opting to explain them through allegorical stories and metaphors whose meaning is ambiguous.

Their arguments, moreover, are employed in service of an eschatological conclusion. They present a stark binary choice: either we achieve alignment or face total extinction. In their view, there’s no room for partial solutions, or muddling through. The ordinary methods of dealing with technological safety, like continuous iteration and testing, are utterly unable to solve this challenge. There is a sharp line separating the “before” and “after”: once superintelligent AI is created, our doom will be decided.

For those outside of this debate, it’s easy to unfairly dismiss everything Y&S have to say by simply calling them religious leaders. We have tried to avoid this mistake by giving their arguments a fair hearing, even while finding them meritless.

However, we think it’s also important to avoid the reverse mistake of engaging with Y&S’s theoretical arguments at length while ignoring the elephant in the room: they never present any meaningful empirical evidence for their worldview.

The most plausible future risks from AI are those that have direct precedents in existing AI systems, such as sycophantic behavior and reward hacking. These behaviors are certainly concerning, but there’s a huge difference between acknowledging that AI systems pose specific risks in certain contexts and concluding that AI will inevitably kill all humans with very high probability.

Y&S argue for an extreme thesis of total catastrophe on an extraordinarily weak evidential foundation. Their ideas might make for interesting speculative fiction, but they provide a poor basis for understanding reality or guiding public policy.

Comments (75)

Some comments are truncated due to high volume.

Although their arguments are reasonable, my big problem with this is that these guys are so motivated that I find it hard to read what they write in good faith. How can I trust that these arguments are made with any kind of soberness or neutrality, when their business model is to help accelerate AI until humans aren't doing most "valuable work" any more? I would be much more open to taking these arguments seriously if they were made by AI researchers or philosophers not running an AI acceleration company.

"Our current focus is automating software engineering, but our long-term goal is to enable the automation of all valuable work in the economy. "

I also consider "they never present any meaningful empirical evidence for their worldview" to be false. I think the evidence from Y&S is weak-ish but meaningful. They do provide a wide range of cases where AIs have gone rogue in strange and disturbing ways. I would consider driving people to delusion and suicide, killing people for self-preservation, and even Hitler the man himself to be at least a somewhat "alien" style of evil: yes, grounded in human experience, but morally incomprehensible to many people.

Hi Nick.

Although their arguments are reasonable, my big problem with this is that these guys are so motivated that I find it hard to read what they write in good faith.

People who are very invested in arguing for slowing down AI development, or decreasing catastrophic risk from AI, like many in the effective altruism community, will also be happier if they succeed in getting more resources to pursue their goals. However, I believe it is better to assess arguments on their own merits. I agree with the title of the article that it is difficult to do this. I am not aware of any empirical quantitative estimate of the risk of human extinction resulting from transformative AI.

I would consider driving people to delusion and suicide, killing people for self-preservation and even Hitler the man himself to be at least a somewhat "alien" style of evil.

I agree those actions are alien in the sense of deviating a lot from what random people do. However, I think this is practically negligible evidence about the risk of human extinction.

Yarrow Bouchard 🔸
I don't really like accusations of motivated reasoning. The logic you presented cuts both ways. MIRI's business model relies on the opposite narrative. MIRI pays Eliezer Yudkowsky $600,000 a year. It pays Nate Soares $235,000 a year. If they suddenly said that the risk of human extinction from AGI or superintelligence is extremely low, in all likelihood that money would dry up and Yudkowsky and Soares would be out of a job. The financial basis for motivated reasoning is arguably even stronger in MIRI's case than in Mechanize's case. The kind of work MIRI is doing and the kind of experience Yudkowsky and Soares have isn't really transferable to anything else. This means they are dependent on people being scared enough of AGI to give money to MIRI.

On the other hand, the technical skills needed to work on trying to advance the capabilities of current deep learning and reinforcement learning systems are transferable to working on the safety of those same systems. If the Mechanize co-founders wanted to focus on safety rather than capabilities, they could.

I'm also guessing the Mechanize co-founders decided to start the company after forming their views on AI safety. They were publicly discussing these topics long before Mechanize was founded. (Conversely, Yudkowsky/MIRI's current core views on AI were formed roughly around 2005 and have not changed in light of new evidence, such as the technical and commercial success of AI systems based on deep learning and deep reinforcement learning.)

The Yudkowsky/Soares/MIRI argument about AI alignment is specifically that an AGI's goals and motivations are highly likely to be completely alien from human goals and motivations in a way that's highly existentially dangerous. If you're making an argument to the effect that 'humans can also be misaligned in a way that's extremely dangerous', I think, at that point, you should acknowledge you've moved on from the Yudkowsky/Soares/MIRI argument (and maybe decided to reject it). Yo

I strongly disagree with a couple of claims:

MIRI's business model relies on the opposite narrative. MIRI pays Eliezer Yudkowsky $600,000 a year. It pays Nate Soares $235,000 a year. If they suddenly said that the risk of human extinction from AGI or superintelligence is extremely low, in all likelihood that money would dry up and Yudkowsky and Soares would be out of a job.

[...] The kind of work MIRI is doing and the kind of experience Yudkowsky and Soares have isn't really transferable to anything else.

  • $235K is not very much money [edit: in the context of the AI industry]. I made close to Nate's salary as basically an unproductive intern at MIRI. $600K is also not much money. A Preparedness researcher at OpenAI has a starting salary of $310K – $460K plus probably another $500K in equity. As for nonprofit salaries, METR's salary range goes up to $450K just for a "senior" level RE/RS, and I think it's reasonable for nonprofits to pay someone with 20 years of experience, who might be more like a principal RS, $600K or more.
    • In contrast, if Mechanize succeeds, Matthew Barnett will probably be a billionaire.
  • If Yudkowsky said extinction risks were low and wanted to focus on some finer asp
... (read more)

$235K is not very much money. […] $600K is also not much money.


This is false.

The context of this quote, which you have removed, is discussion of the reasonableness of wages for specific people with specific skills. Since neither Nate nor Eliezer's counterfactual is earning the median global wage, your statistic seems irrelevant. 

Guy Raveh
What do you think their counterfactual is? I don't think any of what they've been doing is really transferable.

I agree. But the reason I agree is that I think the relevant metric of what counts as a lot of money here is not whether it is a competitive salary in an ML context, but whether it would be perceived as a lot of money in a way that could plausibly threaten Eliezer's credibility among people who would otherwise be more disposed to support AI safety, e.g. if cited broadly. I believe the answer is that it is, and so in a way that even a sub-$250k salary would not be (despite how insanely high a salary that is by the standard of even most developed countries), and I would guess this expected effect to be bigger than the incentive benefits of guaranteeing his financial independence. For this reason, accepting this level of income struck me as unwise, though I'm happy to be persuaded otherwise.

Vasco Grilo🔸
Thanks for the good point, Paul. I tend to agree.
Nick K.
One should stick to the original point that raised the question about salary.

  • Is $600K a lot of money for most people, and does EY hurt his cause by accepting this much? (Perhaps, but not the original issue.)
  • Does EY earning $600K mean he's benefitting substantially from maintaining his position on AI safety? E.g. if he was more pro AI development, would this hurt him financially? (Very unlikely IMO, and that was the context Thomas was responding to.)
Thomas Kwa🔹
On a global scale I agree. My point is more that due to the salary standards in the industry, Eliezer isn't necessarily out of line in drawing $600k, and it's probably not much more than he could earn elsewhere; therefore the financial incentive is fairly weak compared to that of Mechanize or other AI capabilities companies.
Ben Stevenson
Thanks for the reply. I agree with your specific point but I think it’s worth being more careful with your phrasing. How much we earn is an ethically-charged thing, and it’s not a good thing if EA’s relationship with AI companies gives us a permission structure to lose sight of this. Edit: to be clear, I agree that “it’s probably not  much more than he could earn elsewhere” but disagree that “Eliezer isn’t necessarily out of line in drawing $600k”
NickLaing
It's true Mechanize are trying to hire him for 650k...

$235K is not very much money. I made close to Nate's salary as basically an unproductive intern at MIRI.

I understand the point being made (Nate plausibly could get a pay rise from an accelerationist AI company in Silicon Valley, even if the work involved was pure safetywashing, because those companies have even deeper pockets), but I would stress that these two sentences underline just how lucrative peddling doom has become for MIRI[1] as well as how uniquely positioned all sides of the AI safety movement are.

There are not many organizations whose messaging has resonated with deep pocketed donors to the extent that they can afford to pay their [unproductive] interns north of $200k pro rata to brainstorm with them.[2] Or indeed up to $450k to someone with interesting ideas for experiments to test AI threats, communication skills and at least enough knowledge of software to write basic Python data processing scripts. So the financial motivations to believe that AI is really important are there on either side of the debate; the real asymmetry is between the earning potential of having really strong views on AI vs really strong views on the need to eliminate malaria or factor... (read more)

I think this misses the point: The financial gain comes from being central to ideas around AI in itself. I think given this baseline, being on the doomer side tends to carry huge opportunity cost financially. 
At the very least it's unclear and I think you should make a strong argument to claim anyone financially profits from being a doomer. 

David T
The opportunity cost only exists for those with a high chance of securing comparable level roles in AI companies, or very senior roles at non-AI companies, in the near future. Clearly this applies to some people working in AI capabilities research,[1] but if you wish to imply this applies to everyone working at MIRI and similar AI research organizations, I think the burden of proof actually rests on you.

As for Eliezer, I don't think his motivation for dooming is profit, but it's beyond dispute that dooming is profitable for him. Could he earn orders of magnitude more money from building benevolent superintelligence based on his decision theory, as he once hoped to? Well yes, but it'd have to actually work.[2]

Anyway, my point was less to question MIRI's motivations or Thomas' observation that Nate could earn at least as much if he decided to work for a pro-AI organization, and more to point out that (i) no, really, those industry-norm salaries are very high compared with pretty much any quasi-academic research job not related to treating superintelligence as imminent, and especially compared to roles typically considered "altruistic", and (ii) if we're worried that money gives AI company founders the wrong incentives, we should worry about the whole EA-AI ecosystem and talent pipeline EA is backing. Especially since that pipeline incubated those founders.

1. ^ including Nate
2. ^ and work in a way that didn't kill everyone, I guess...
Yarrow Bouchard 🔸
If Mechanize succeeds in its long-term goal of "the automation of all valuable work in the economy", then everyone on Earth will be a billionaire.
MatthiasE
Outside view: if I got the WID data right, net personal wealth of the US top percentile increased from $0.59 million in 1820 to $13.53 million in 2024. For the bottom two deciles of India it increased from $58 to $228. The industrial revolution made some people very rich, but not others. Why would transformative AI make everybody incredibly rich? See also https://intelligence-curse.ai/

I used: average net personal wealth, all ages, equal split, Dollar $ PPP constant (2024). (I'm new to the WID database and did not have time to read the data documentation. Let me know if I interpret the data wrongly.) Source: https://wid.world/
Vasco Grilo🔸
Hi Matthias. Thanks for linking to the World Inequality Database (WID). I had never checked it out, and it has very interesting data.
Vasco Grilo🔸
Global wealth would have to increase a lot for everyone to become a billionaire. Assuming a population of 10 billion people, everyone being a billionaire would require a global wealth of 10^19 $ (= 10*10^9*1*10^9), even with a perfectly equal distribution. Global wealth is 600 T$. So it would have to become 16.7 k (= 10^19/(600*10^12)) times as large. For a growth of 10 %/year, it would take 102 years (= LN(16.7*10^3)/LN(1 + 0.10)). For a growth of 30 %/year, it would take 37.1 years (= LN(16.7*10^3)/LN(1 + 0.30)).
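As a quick sanity check of these figures (a minimal sketch using the same round numbers assumed above: 10 billion people, 600 T$ of current global wealth, steady compound growth):

```python
# Sanity check of the figures above, using the same assumed round numbers
# (10 billion people, $600T of current global wealth, steady growth).
import math

target = 10e9 * 1e9        # $10^19: every person holding $1B
current = 600e12           # ~$600T of global wealth today
factor = target / current  # ~16,700x increase needed

for growth in (0.10, 0.30):
    years = math.log(factor) / math.log(1 + growth)
    print(f"{growth:.0%} annual growth: {years:.1f} years")  # ~102.0 and ~37.1
```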

I think the claim that Yudkowsky's views on AI risk are meaningfully influenced by money is very weak. My guess is that he could easily find another opportunity unrelated to AI risk to make $600k per year if he searched even moderately hard.

The claim that my views are influenced by money is more plausible because I stand to profit far more than Yudkowsky stands to profit from his views. However, while perhaps plausible from the outside, this claim does not match my personal experience. I developed my core views about AI risk before I came into a position to profit much from them. This is indicated by the hundreds of comments, tweets, in-person arguments, DMs, and posts from at least 2023 onward in which I expressed skepticism about AI risk arguments and AI pause proposals. As far as I remember, I had no intention to start an AI company until very shortly before the creation of Mechanize. Moreover, if I was engaging in motivated reasoning, I could have just stayed silent about my views. Alternatively, I could have started a safety-branded company that nonetheless engages in capabilities research -- like many of the ones that already exist.

It seems implausible that spending my time w... (read more)

Yarrow Bouchard 🔸
To be clear, I agree. I also agree with your general point that other factors are often more important than money. Some of these factors include the allure of millennialism, or the allure of any sort of totalizing worldview or "ideology". I was trying to make a general point against accusations of motivated reasoning related to money, at least in this context. If two sets of people are each getting paid to work on opposite sides of an issue, why only accuse one side of motivated reasoning?

Thanks for describing this history. Evidence of a similar kind lends strong credence to Yudkowsky forming his views independent from the influence of money as well. My general view is that reasoning is complex, motivation is complex, people's real psychology is complex, and that the forum-like behaviour of accusing someone of engaging in X bias is probably a misguided pop science simplification of the relevant scientific knowledge. For instance, when people engage in distorted thinking, the actual underlying reasoning often seems to be a surprisingly complicated multi-step sequence.

The essay above that you co-wrote is incredibly strong. I was the one who originally sent it to Vasco and, since he is a prolific cross-poster and I don't like to cross-post under my name, encouraged him to cross-post it. I'm glad more people in the EA community have now read it. I think everyone in the EA community should read it. It's regrettable that there's only been one object-level comment on the substance of the essay so far, and so many comments about this (to me) relatively uninteresting and unimportant side point about money biasing people's beliefs. I hope more people will comment on the substance of the essay at some point.
Nick K.
Thanks for this comment! I think your arguments about your own motivated reasoning are somewhat moot, since they seem more of an explanation that your behavior/public-facing communication isn't outright deception (which seems right!). As I see it, motivated reasoning is to a large extent about deceiving yourself and maintaining a coherent self-narrative, so it's perfectly plausible that one is willing to pay a substantial cost in order to maintain this. (Speaking generally; I'm not very interested in discussing whether you're doing it in particular.)

The kind of work MIRI is doing and the kind of experience Yudkowsky and Soares have isn't really transferable to anything else.

Soares was a software engineer at Microsoft and Google before joining MIRI, and would trivially be able to rejoin industry after a few weeks of self-study to earn more money if for some reason he decided he wanted to do that. I won't argue the point about EY - it seems obvious to me that his market value as a writer/communicator is well in excess of his 2023/2024 compensation, given his track record, but the argument here is less legible. Thankfully it turns out that somebody anticipated the exact same incentive problem and took action to mitigate it.

It's interesting to claim that money stops being an incentive for people after a certain fixed amount well below $1 million/year. Let's say that's true — maybe it is true — then why do we treat people like Sam Altman, Dario Amodei, Elon Musk, and so on as having financial incentives around AI? Are we wrong to do so? (What about AI researchers and engineers who receive multi-million-dollar compensation packages? After the first, say, $5 million, are they free and clear to form unmotivated opinions?)

I think a very similar argument can be made about the Mechanize co-founders. They could make "enough" money doing something else — including their previous jobs — even if it's less money than they might stand to gain from a successful AI capabilities startup. Should we then rule out money as an incentive?

To be clear, I don't claim that Eliezer Yudkowsky, Nate Soares, others at MIRI, or the Mechanize co-founders are unduly motivated by money in forming their beliefs. I have no way of knowing that, and since there's no way to know, I'm willing to give them all the benefit of the doubt. I'm saying I dislike accusations of motivated reasoning in large part because they're so easy to level at ... (read more)

Nick K.
Where is this claim being made? I think the suggestion was that someone found it desirable to reduce the financial incentive gradient for EY taking any particular public stance, not some vastly general statement like what you're suggesting.
Ian Turner
Personally I don't think Sam Altman is motivated by money. He just wants to be the one to build it. I sense that Elon Musk's and Dario Amodei's motivations are more complex than "motivated by money", but I can imagine that the actual dollar amounts are more important to them than to Sam.

MIRI pays Eliezer Yudkowsky $600,000 a year.

I believe this is because a donor specifically requested it. The express purpose of the donation was to make Eliezer rich enough that he could afford to say "actually AI risk isn't a big deal" and shut down MIRI without putting himself in a difficult financial situation.

Edit Feb 2: Apparently the donation I was thinking of is separate from Eliezer's salary, see his comment

Vasco Grilo🔸
Thanks for sharing, Michael. If I was as concerned about AI risk as @EliezerYudkowsky, I would use practically all the additional earnings (e.g. above Nate's 235 k$/year; in reality I would keep much less) to support efforts to decrease it. I would believe spending more money on personal consumption or investments would just increase AI risk relative to supporting the most cost-effective efforts to decrease it.

A donor wanted to spend their money this way; it would not be fair to the donor for Eliezer to turn around and give the money to someone else. There is a particular theory of change according to which this is the best marginal use of ~$1 million: it gives Eliezer a strong defense against accusations like

If they suddenly said that the risk of human extinction from AGI or superintelligence is extremely low, in all likelihood that money would dry up and Yudkowsky and Soares would be out of a job.

I kinda don't think this was the best use of a million dollars, but I can see the argument for how it might be.

I got a one-time gift of appreciated crypto, not through MIRI, part of whose purpose as I understood it was to give me enough of a savings backstop (having in previous years been not paid very much at all) that I would feel freer to speak my mind or change my mind should the need arise.

I have of course already changed MIRI's public mission sharply on two occasions, the first being when I realized in 2001 that alignment might need to be a thing, and said so to the primary financial supporter who'd previously supported MIRI (then SIAI) on the premise of charging straight ahead on AI capabilities; the second being in the early 2020s when I declared publicly that I did not think alignment technical work was going to complete in time and MIRI was mostly shifting over to warning the world of that rather than continuing to run workshops.  Should I need to pivot a third time, history suggests that I would not be out of a job.

If I had Eliezer's views about AI risk, I would simply be transparent upfront with the donor, and say I would donate the additional earnings. I think this would ensure fairness. If the donor insisted I had to spend the money on personal consumption, I would turn down the offer if I thought this would result in the donor supporting projects that would decrease AI risk more cost-effectively than my personal consumption. I believe this would be very likely to be the case.

NickLaing
100 percent agree. I was going to write something similar but this is better
NickLaing
I generally don't love "motivated reasoning" arguments, but on the extreme ends like tobacco companies, government propaganda, and AI accelerationist companies I'm happy with putting that out there. Especially in a field like AI safety, which is so speculative anyway. In general I don't think we should give too much airtime to people who have enormous personal financial gains at stake, especially in a world where money is stronger than rationalism most of the time.

Wow, I'm mind-blown that Yudkowsky pays himself that much, if only because it leaves him open to criticisms like these. I still don't think the financial incentives are as strong as for people starting an accelerationist company, but it's a fair point.

And yes, on the alien argument, I was arguing that some previous indications of rogue AI do seem to me somewhat alien.

There's an expert consensus that tobacco is harmful, and there is a well-documented history of tobacco companies engaging in shady tactics. There is also a well-documented history of government propaganda being misleading and deceptive, and if you asked anyone with relevant expertise — historians, political scientists, media experts, whoever — they would certainly tell you that government propaganda is not reliable.

But just lumping in "AI accelerationist companies" with that is not justified. "AI accelerationist" just means anyone who works on making AI systems more capable who doesn't agree with the AI alignment/AI safety community's peculiar worldview. In practice, that means you're saying most people with expertise in AI are compromised and not worth listening to, but you are willing to listen to this weird random group of people, some of whom, like Yudkowsky, have no technical expertise in contemporary AI paradigms (i.e. deep learning and deep reinforcement learning). This seems like a recipe for disaster, like deciding that capitalist economists are all corrupt and that only Marxist philosophers are worth trusting.

A problem with motivated reasoning arguments, when stretched to ... (read more)

While motivated reasoning is certainly something to look out for, the substance of the argument should also be taken into account. I believe that the main point of this post, that Yudkowsky and Soares's book is full of narrative arguments and unfalsifiable hypotheses mostly unsupported by references to external evidence, is obviously true. As you yourself say, OP's arguments are reasonable. On that background, this kind of attack from you seems unjustified, and I'd like to hear what parts/viewpoints/narratives/conclusions of the post are motivated reasoning in your estimation.

I do agree that motivated reasoning is common with the proponents of AI adoption. As an example, I think the white paper Sparks of Artificial General Intelligence: Early experiments with GPT-4 by Microsoft is clearly a piece of advertising masquerading as a scientific paper. Microsoft has a lot to gain from the commercial success of its partner company OpenAI, and the conclusions it suggests are almost certainly colored by this. The same could be said about many of OpenAI's own white papers. But this does not mean that the examples or experiments they showcase are wrong per se (even if cherry-picked), or that there is no real information in them. Their results merely need to be read through a skeptical lens.

Yarrow Bouchard 🔸
We should generally be skeptical of corporations (or even non-profits!) releasing pre-prints that look like scientific papers but might not pass peer review at a scientific journal. We should indeed view such pre-prints as somewhere between research and marketing. OpenAI's pre-prints or white papers are a good example. I think it's hard to claim that a pre-print like Sparks of AGI is insincere (it might be, but how could we support that claim?), but this doesn't undermine the general point. Suppose employees at Microsoft Research wanted to publish a similar report arguing that GPT-4's seeming cognitive capabilities are actually just a bunch of cheap tricks and not sparks of anything. Would Microsoft publish that report? It's not just about how financial or job-related incentives shape what you believe (although that is worth thinking about), it's also about how they shape what you can say out loud. (And, importantly, what you are encouraged to focus on.)
Vasco Grilo🔸
I think the strength of the incentives to behave in a given way is more proportional to the resulting expected increase in welfare than to the expected increase in net earnings. Individual human welfare is often assumed to be proportional to the logarithm of personal consumption. So a given increase in earnings increases welfare less for people earning more. In addition, a 1 % chance of earning 100 times more (for example, due to one's company being successful) increases welfare less than a 100 % chance of earning 100 % more. More importantly, there are major non-financial benefits for Yudkowsky, who is somewhat seen as a prophet in some circles.
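As a small check of the log-utility comparison above (a sketch assuming welfare = ln(consumption) and a baseline consumption normalized to 1):

```python
# Expected welfare gain under log utility, baseline consumption normalized to 1.
import math

lottery = 0.01 * math.log(100)  # 1% chance of consuming 100x: ~0.046
certain = 1.00 * math.log(2)    # 100% chance of consuming 2x:  ~0.693
print(lottery, certain)         # the certain doubling adds far more welfare
```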
Dave Banerjee 🔸
Why are they paid so much?

Copying from my other comment:

The reason Eliezer gets paid so much is because a donor specifically requested it. The express purpose of the donation was to make Eliezer rich enough that he could afford to say "actually AI risk isn't a big deal" and shut down MIRI without putting himself in a difficult financial situation.

(I don't know about Nate's salary but $235K looks pretty reasonable to me? That's less than a mid-level software engineer makes.)

Edit Feb 2: Apparently the donation I was thinking of is separate from Eliezer's salary, see his comment

Yarrow Bouchard 🔸
I'm not sure how they decide on what salaries to pay themselves. But the reason they have the money to pay themselves those salaries in the first place is that MIRI's donors believe there's a significant chance of AI destroying the world within the next 5-20 years and that MIRI (especially Yudkowsky) is uniquely positioned to prevent this from happening.
Jan_Kulveit

It seems to me that the 'alien preferences' argument is a red herring. Humans have all kinds of different preferences - only some of ours overlap, and I have no doubt that if one human became superintelligent that would also have a high risk of disaster, precisely because they would have preferences that I don't share (probably selfish ones). So they don't need to be alien in any strong sense to be dangerous.

I know it's Y&S's argument. But it would have been nice if the authors of this article had also tried to make it stronger before refuting it.

Yarrow Bouchard 🔸
Help me understand what you're saying here. Are you saying that Yudkowsky and Soares's argument is just so obviously wrong that it's almost uninteresting to discuss why it's wrong? That you find the Mechanize co-founders' refutation of the Yudkowsky and Soares argument disappointing because you found that argument so weak to begin with?

If so, I'm not saying that's a wrong view — not at all. But it's worth noting how controversial that view is in the EA community (and other communities that talk a lot about AGI). Essays like this need to be written because so many people in this community (and others) believe Yudkowsky and Soares' argument is correct. If my impression of the EA community is off base and actually there's a community consensus that Yudkowsky and Soares' argument is wrong, then more people should talk about this, because it's really hard to get the wrong impression.

I think it's also worth discussing the question of what if AGI turns out to have generally human-like motivations and psychology. What dangers might it pose? How would it behave? But not every relevant and worthy question can be addressed in a single essay.
Tristan Katz
Thanks Yarrow, I can see that that was confusing. I don't think that Yudkowsky & Soares's argument as a whole is obviously wrong and uninteresting. On the contrary, I'm rather convinced by it, and I also want more critics to engage with it. But I think the argument presented in the book was not particularly strong, and others seem to agree: the reviews on this forum are pretty mixed (e.g.). So I'd prefer critics to argue against the best version of this argument, not just the one presented in the book. If these critics had only set out to write a book review, then I'd say fine. But that's not what they were doing here. They write "there is no standard argument to respond to, no single text that unifies the AI safety community" - true, but you can engage with multiple texts in order to respond to the best form of the argument. In fact that's pretty standard, in academia and outside of it. 
Yarrow Bouchard 🔸
So, if the best version of Yudkowsky and Soares' argument is not the one made in their book, what is the best version? Can you explain how that version of the argument, which they made previously elsewhere, is different than the version in the book? I can't tell if you're saying:

a) that the alien preferences thing is not a crux of Yudkowsky and Soares' overall argument for AI doom (it seems like it is), or

b) the version of the specific argument about alien preferences they gave in the book isn't as good as previous versions they've given (which is why I asked what version is better), or

c) you're saying that Yudkowsky and Soares' book overall isn't as good as their previous writings on AI alignment.

I don't know that academic reviewers of Yudkowsky and Soares' argument would take a different approach. The book is supposed to be the most up-to-date version of the argument, and one the authors took a lot of care in formulating. It doesn't feel intuitive to go back and look at their earlier writings and compare different versions of the argument, which aren't obviously different at first glance. (Will MacAskill and Clara Collier both complained the book wasn't sufficiently different from previous formulations of the argument, i.e. wasn't updated enough in light of advancements in deep learning and deep reinforcement learning over the last decade.) I think an academic reviewer might just trust that Yudkowsky and Soares' book is going to be the best thing to read and respond to if they want to engage with their argument.

You might, as an academic, engage in a really close reading of many versions of a similar argument made by Aristotle in different texts, if you're a scholar of Aristotle, but this level of deep textual analysis doesn't typically apply to contemporary works by lesser-known writers outside academia. The academic philosopher David Thorstad is writing a blog series in response to the book. I haven't read it yet, so I don't know if he pulls his alte
Tristan Katz
The argument I'm referring to is the AI doom argument. Y&S are its most prominent proponents, but are widely known to be eccentric, and not everyone agrees with their presentation of it. I'm not that deep in the AI safety space myself, but I think that's pretty clear. The authors of this post seemed to respond to the AI doom argument more generally, and took the book to be the best representative of the argument. So that already seems like a questionable move, and I wish they'd gone further.

I don't think the point about alien preferences is a crux of the AI doom argument generally. I think it's presented in Bostrom's Superintelligence and Rob Miles videos (and surely countless other places) as: "an ASI optimising for anything that doesn't fully capture collective human preferences would be disastrous. Since we can't define collective human preferences, this spells disaster." In that sense it doesn't have to be 'alien', just different from the collective sum of human preferences. I guess Y&S took the opportunity to say "LLMs seem MUCH more different" in an attempt to strengthen their argument, but they didn't have to.

So, as I said, I'm not really that deep into AI safety, so I'm not the person to go to for the best version of these arguments. But I read the book, sat down with some friends to discuss it... and we each identified flaws, as the authors of this post did, and then found ways to make the argument better, using other ideas we'd been exposed to and some critical reflection. It would have been really nice if the authors of the post had made that second step and steelmanned it a bit.
Yarrow Bouchard 🔸
There's a fine line between steelmanning people's views and creating new views that are facially similar to those views but are crucially different from the views those people actually hold. I think what you're describing is not steelmanning, but developing your own views different from Yudkowsky and Soares' — views that they would almost certainly disagree with in strong terms.

I think it would be constructive for you to publish the views you developed after reading Yudkowsky and Soares' book. People might find that useful to read. That could give people something interesting to engage with. But if you write that Yudkowsky and Soares' claim about alien preferences is wrong, many people will disagree with you (including Yudkowsky and Soares, if they read it). So, it's important to get very clear on what different people in a discussion are saying and what they're not saying. Just to keep everything straight, at least.

I agree the alien preferences thing is not necessarily a crux of AI doom arguments more generally, but it is certainly a crux of Yudkowsky and Soares' overall AI doom argument specifically. Yes, you can change their overall argument into some other argument that doesn't depend on the alien preferences thing anymore, but then that's no longer their argument, that's a different argument.

I agree that Yudkowsky and Soares (and their book) are not fully representative of the AI safety community's views, and probably no single text or person (or pair of people) are. I agree that it isn't really reasonable to say that if you can refute Yudkowsky and Soares (or their book), you refute the AI safety community's views overall. So, I agree with that critique.
Vasco Grilo🔸
Thanks for the comment, Tristan. I would worry if a single human had much more power than all other humans combined. Likewise, I would worry if an AI agent had more power than all other AI agents and humans combined. However, I think the probability of any of these scenarios becoming true in the next 10 years is lower than 0.001 %. Elon Musk has a net worth of 765 billion $, 0.543 % (= 765*10^9/(141*10^12)) of the market cap of all publicly listed companies of 141 T$.
Guy Raveh
Elon Musk has already used this power to do actions which will potentially kill millions (by funding the Trump campaign enough to get to close down USAID). I think that should worry us, and the chance of people amassing even more power should worry us even more.
Vasco Grilo🔸
Hi Guy. Elon Musk was not the only person responsible for the recent large cuts in foreign aid from the United States (US). In addition, I believe outcomes like human extinction are way less likely. I agree it makes sense to worry about concentration of power, but not about extreme outcomes like human extinction.
Guy Raveh
Extinction perhaps not, but I think eternal autocracy is definitely possible.
Tristan Katz
I think the evolution analogy becomes relevant again here: consider that the genus Homo was at first more intelligent than other species but not more powerful than their numbers combined... until suddenly one jump in intelligence let Homo sapiens wreak havoc across the globe. Similarly, there might be a tipping point in AI intelligence where fighting back becomes very suddenly infeasible. I think this is a much better analogy than Elon Musk, because like an evolving species a superintelligent AI can multiply and self-improve.

I think a good point that Y&S make is that we shouldn't expect to know where the point of no return is, and should be prudent enough to stop well before it. I suppose you must have some source/reason for the 0.001% confidence claim, but it seems pretty wild to me to be so confident in a field like AI that is evolving and - at least from my perspective - pretty hard to understand.
Vasco Grilo🔸
It is unclear to me whether all humans together are more powerful than all other organisms on Earth together. It depends on what is meant by powerful. The power consumption of humans is 19.6 TW (= 1.07 + 18.5), only 0.700 % (= 19.6/(2.8*10^3)) of all organisms. In any case, all humans together being more powerful than all other organisms on Earth together is still way more likely than the most powerful human being much more powerful than all other organisms on Earth together.

My upper bound of 0.001 % is just a guess, but I do endorse it. You can have a best guess that an event is very unlikely, but still be super uncertain about its probability. For example, one could believe an event has a probability of 10^-100 to 10^-10, which would imply it is super unlikely despite 90 (= -10 - (-100)) orders of magnitude (OOMs) of uncertainty in the probability.
Tristan Katz
By power I mean: ability to change the world, according to one's preferences. Humans clearly dominate today in terms of this kind of power. Our power is limited, but it is not the case that other organisms have power over us, because while we might rely on them, they are not able to leverage that dependency. Rather, we use them as much as we can. No human is currently so powerful as to have power over all other humans, and I think that's definitely a good thing. But it doesn't seem like it would take much more advantage to let one intelligent being dominate all others.
Vasco Grilo🔸
Are you thinking about humans as an aligned collective in the 1st paragraph of your comment? I agree all humans coordinating their actions together would have more power than other groups of organisms with their actual levels of coordination. However, such level of coordination among humans is not realistic. All 10^30 bacteria (see Table S1 of Bar-On et al. (2018)) coordinating their actions together would arguably also have more power than all humans with their actual level of coordination. I agree it is good that no human has power over all humans. However, I still think one being dominating all others has a probability lower than 0.001 % over the next 10 years. I am open to bets against short AI timelines, or what they supposedly imply, up to 10 k$. Do you see any that we could make that is good for both of us under our own views?

I've known and respected people on both sides of this, and have been frustrated by some of the back-and-forth on this.

On the side of the authors, I find these pieces interesting but very angsty. There's clearly some bad blood here. It reminds me a lot of meat eaters who seem to attack vegans out of irritation more than deliberate logic. [1] 

On the other, I've seen some attacks of this group on LessWrong that seemed over-the-top to me. 

Sometimes grudges motivate authors to be incredibly productive, so maybe some of this can be useful.

It seems like... (read more)

I think part of where the angsty energy comes from is that Yudkowsky and Soares are incredibly brazen and insulting when they express their views on AI. For instance, Yudkowsky recently said that people with AGI timelines longer than 30 years are no "smarter than a potted plant". Yudkowsky has publicly said, on at least two occasions, that he believes he's the smartest person in the world — at least on AI safety and maybe just in general — and there's no second place that's particularly close. Yudkowsky routinely expresses withering contempt, even for people who are generally "on his side" and trying to be helpful. It's really hard to engage with this style of "debate" (as it were) and not feel incredibly pissed off.

When I was running an EA university group, if anyone had behaved like Yudkowsky routinely behaves, they would have been banned from the group, and I'm sure the members of my group would have unanimously agreed the behaviour is unacceptable. The same applies to any other in-person group, community, or social circle I've been a part of. It would scarcely be more acceptable than a man in an EA group repeatedly telling the women he just met there how hot they are. People gen... (read more)
