Pronouns: she/her or they/them.
I got interested in effective altruism back before it was called effective altruism, back before Giving What We Can had a website. Later on, I got involved in my university EA group and helped run it for a few years. Now I’m trying to figure out where effective altruism can fit into my life these days and what it means to me.
I write on Substack, and used to write on Medium.
I think the probability of anyone creating an artificial general intelligence before the end of 2035 is much less than 1 in 10,000 (or 0.01%). The chance of it happening in 2026 is virtually zero. A large majority of AI experts seem to disagree with the way much of the effective altruism community appears to think about AGI. Even some of the experts the community is fond of citing to create the impression of support for its views, such as Demis Hassabis and Ilya Sutskever, disagree in crucial ways, e.g. by emphasizing fundamental AI research more than scaling.
The effective altruism community has low standards for the quality of evidence about AGI timelines it is willing to accept. For instance, consider the AI 2027 report that 80,000 Hours spent $160,000 promoting in a YouTube video. Many crucial inputs to the model are simply the subjective, intuitive guesses of the authors. Therefore, the outputs of the model are largely the subjective, intuitive guesses of the authors. Apart from the inputs to the model, the model itself was dubiously constructed; the authors’ modelling decisions appear to have largely baked in the headline result from the outset. Flaws of a similar or greater magnitude can be found in other works frequently cited by the EA community.
This post uses aliens arriving on Earth as an analogy. However, Robin Hanson, whose views are influential in the EA community, seems to believe there’s a realistic chance aliens have already arrived on Earth. I’ve seen a few posts and comments indicating that a few people in the EA community apparently share his view on this, or a stronger version of it. To me, this is suggestive of the EA community employing an epistemic process that is too ready to accept fringe views on thin evidence.
My discussions with people in the EA community on AGI have startled me in this regard. Here’s an analogy. Let’s say someone is arguing for a controversial, minority view such as the Covid-19 lab leak hypothesis. You engage them in conversation, expecting them to have ready answers to any question or objection you can think of. You start off by asking, "Why do you think the novel coronavirus didn’t evolve naturally?" You then discover that this person had never considered that the novel coronavirus might have evolved naturally, and wasn’t even aware that was a possibility. Rather than firing back counterarguments chapter and verse, this person, it turns out, has simply never thought through the elementary terms of the debate.
Lest you think this is a ridiculously unfair and harsh analogy, let me quote Steven Byrnes describing his experience at an EA Global conference in the Bay Area in 2025:
There were a number of people, all quite new to the fields of AI and AI safety / alignment, for whom it seems to have never crossed their mind until they talked to me that maybe foundation models won’t scale to AGI, and likewise who didn’t seem to realize that the field of AI is broader than just foundation models.
To me, this is equivalent to advocating the lab leak hypothesis and not realizing that viruses can evolve naturally. It’s such a tremendous oversight that a reasonable person who is somewhat knowledgeable about AI could decide, at that point, that they’ve seen everything they need to see from the EA community on this topic, and that the EA community simply doesn’t know what it’s talking about, and hasn’t done its homework. My personal experience in trying to engage in discussions on AGI within the EA community has been much the same as what Steven Byrnes describes — some people mix up definitions and concepts, for instance, or misinterpret studies, or dismiss inconvenient expert opinions out of hand.
All this to say, I don’t find this post or the view it represents to be more credible than the view that possibly hostile aliens have a 10% chance of arriving on Earth this year. Proponents of fringe views on UFOs generally misunderstand ideas like optical illusions, perspective tricks, and other visual phenomena, as well as camera artifacts. Or they simply look at a blinking light in the sky (or a video of one) and conclude, non-credibly, "aliens" — not considering conventional aircraft or all the other things a blinking light could be. Analogously, the EA community generally disregards majority expert opinion and expert knowledge on AI in favour of fringe views primarily promoted by people with no expertise, who at least on occasion have made elementary mistakes. And, as with people who believe UFOs are alien spacecraft, attempts to explain what experts know or believe that casts doubt on the fringe view are met with considerable resistance. (Most people are, very reasonably and understandably, not willing to engage in discussions or debates with the EA community on this topic. It feels futile and exasperating, and at least a vocal minority in the community seems determined to discourage such engagement by making it as unpleasant as possible.)
I can only express empathy for people who have been misled into thinking there’s a very high chance of human extinction from AI in the very near future. I have to think holding such a belief is incredibly distressing. I don’t know if anything I can say will be reassuring. By the time you strongly hold such a belief, changing your mind on it might require turning your whole life and worldview upside-down. It might mean things like new friends, a new community, a new sense of identity or self-image, and so on. Just talking about the belief directly might not change anything, since the root causes of why one holds such a belief might be deeper than ordinary intellectual discussion can reach. I have a hunch, in fact, that it is similar with a lot of intellectual discussion on a lot of topics — there is a subterranean world of emotions, psychology, narratives, personal history, and personal identity entangled in the surface layer of ideas. However, fringe views of an eschatological, apocalyptic, or millennialist nature are an extreme case. In such cases, ordinary intellectual discussion seems especially unlikely to gain traction.
Thanks.
Unfortunately, patient philanthropy is the sort of topic where it seems like what a person thinks about it depends a lot on some combination of a) their intuitions about a few specific things and b) a few fundamental, worldview-level assumptions. I say "unfortunately" because this means disagreements are hard to meaningfully debate.
For instance, there are places where the argument either pro or con depends on what a particular number is, and since we don’t know what that number actually is and can’t find out, the best we can do is make something up. (For example, whether, in what way, and by how much foundations created today will decrease in efficacy over long timespans.)
Many people in the EA community are content to say, e.g., that the chance of something is 0.5% rather than 0.05% or 0.005%, and rather than 5% or 50%, based on nothing more than intuitive judgment, and then make life-altering, aspirationally world-altering decisions on that basis. My approach is closer to that of mainstream academic publishing, in which if you can’t rigorously justify a number, you can’t use it in your argument — it isn’t admissible.
So, this is a deeper epistemological, philosophical, or methodological point.
One piece of evidence that supports my skepticism of numbers derived from intuition is a forecasting exercise where a minor difference in how the question was framed changed the number people gave by 5-6 orders of magnitude (750,000x). And that’s only one minor difference in framing. If different people disagree on multiple major, substantive considerations relevant to deriving a number, perhaps in some cases their numbers could differ by much more. If we can’t agree on whether a crucial number is a million times higher or lower, how constructive are such discussions going to be? Can we meaningfully say we are producing knowledge in such instances?
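As a quick arithmetic check on that phrasing, taking the reported 750,000x figure as given:

$$\log_{10}(750{,}000) \approx 5.9$$

i.e., a factor of roughly 750,000 corresponds to between five and six orders of magnitude.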
So, my preferred approach when an argument depends on an unknowable number is to stop the argument right there, or at least slow it down and proceed with caution. And the more of these numbers an argument depends on, the more I think the argument just can’t meaningfully support its conclusion, and, therefore, should not move us to think or act differently.
I’m only giving this topic a very cursory treatment, so I apologize for that.
I wrote this post quickly without much effort or research, and it’s just intended as a casual forum post, not anything approaching the level of an academic paper.
I’m not sure whether you’re content to make a narrow, technical, abstract point — that’s fine if so, but not what I intended to discuss here — or whether you’re trying to make a full argument that patient philanthropy is something we should actually do in practice. The latter sort of argument (which is what I wanted to address in this post) opens up a lot of considerations that the former does not.
There are many things that can’t be meaningfully modelled with real data, such as:
What’s the probability that patient philanthropy will be outlawed even in countries like England if patient philanthropic foundations try to use it to accumulate as much wealth and power as simple extrapolation implies? (My guess: ~100%.)
What’s the probability that patient philanthropy, if it’s not outlawed, would eventually contribute significantly to repugnant, evil outcomes like illiberalism, authoritarianism, plutocracy, oligarchy, and so on? (My guess: ~100%. So, patient philanthropy should be considered a catastrophic risk in any countries where it is adopted.)
What’s the risk that patient philanthropic foundations based in Western, developed countries like England, holding money on behalf of recipients in developing countries such as those in sub-Saharan Africa, would do a worse job than equivalent institutions or interventions based in the recipient countries and majority-controlled by people from those countries? (My guess: the risk is high enough that it’s preferable to move the money from the donor countries to the recipient countries from the outset.)
How much do we value things like freedom, autonomy, equality, empowerment, democracy, non-paternalism, and so on? How much do we value them on consequentialist grounds? Do we value them at all on non-consequentialist grounds? How does the importance of these considerations compare to the importance of other measures of impact such as the cost per life saved or the cost per QALY or DALY or similar measures? (My opinion: even just on consequentialist grounds alone, there are incredibly strong reasons to value these things, such that narrow cost-effectiveness calculations of the GiveWell style can’t hope to capture the full picture of what’s important.)
Under what assumptions about the future does the case for patient philanthropy break down? E.g., what do you have to assume about AGI or transformative AI? What do you have to assume about economic development in poor countries? Etc. (And how should we handle the uncertainty around this?)
What difference do philosophical assumptions make, such as a more deterministic view of history versus a view that places much greater emphasis on the agency, responsibility, and power of individuals and organizations? (My hunch: the latter makes certain arguments one might make for doing patient philanthropy in practice less attractive.)
These questions might all be irrelevant to what you want to say about patient philanthropy, but I think they are the sort of questions we have to consider if we are wondering about whether to actually do patient philanthropy in practice.
When I wrote this post, I was more hopeful that it would be possible to talk meaningfully about patient philanthropy in a narrower, more technical, abstract way. After discussing it with Jason and others, I realize the possibility space is far too large for that: we end up discussing essentially anything anyone imagines might plausibly happen in the distant future, as well as fundamental differences in worldviews. It’s impossible to avoid messier, less elegant arguments, including highly uncertain speculation about future scenarios and arguments of a philosophical, moral, social, and political nature.
I want to clarify that I wasn’t trying to respond directly to your work or do it justice; rather, I was trying to address a more general question about whether we should actually do patient philanthropy in practice, all things considered. I cited you as the originator of patient philanthropy because it’s important to cite where ideas come from, but I hope I didn’t give readers the impression that I was trying to represent your work fully or give it a fair shake. I wasn’t really doing that; I was just using it as a jumping-off point for a broader discussion. I apologize if I didn’t make that clear enough in the post, and I’m happy to edit it if it needs to be made clearer.
That’s an important point of clarification, thanks. I always appreciate your comments, Mr. Denkenberger.
There’s the idea of economic stimulus. John Maynard Keynes said that it would be better to spend stimulus money on useful projects (e.g. building houses), but as an intellectual provocation to illustrate his point, he said that if there were no better option, the government should pay people to dig holes in the ground and fill them back up again. Stimulating the economy is its own goal distinct from what the money actually gets spent to directly accomplish.
AI spending is a form of economic stimulus. Even if the data centres sit idle and never do anything economically valuable or useful — the equivalent of holes dug in the ground and filled back up again — the spending could have a temporarily favourable effect on the economy and help prevent a recession. That seems like it has probably been true so far: the U.S. economy looks recessionary if you subtract the AI numbers.
However, we have to consider the counterfactual. If investors hadn’t put all this money into AI, what would have happened? Of course, it’s hard to say. Maybe they would have just sat on their money, in which case the stimulus wouldn’t have happened, and maybe a recession would have begun by now. That’s possible. Alternatively, investors might have found a better use for their money and put it into more productive investments.
Regardless of what happens in the future, I don’t know if we’ll ever be able to know for sure what would have happened if there hadn’t been this AI investment craze. So, who knows.
(I think there are many things to invest in that would have been better choices than AI, but the question is whether, in a counterfactual scenario without the current AI exuberance, investors actually would have gone for any of them. Would they have invested enough in other things to stimulate the economy enough to avoid a recession?)
The stronger point, in my opinion, is that I don’t think anyone would actually defend spending on data centres purely as an economic stimulus, rather than as an investment with an ROI equal to or better than that of other investments. So, the general rule we all agree we want to follow is: invest in things with a good ROI, and don’t just dig and fill up holes for the sake of stimulus. Maybe there are cases where large investment bubbles prevent recessions, but no one would ever argue: hey, we should promote investment bubbles when growth is sluggish to prevent recessions! Even if there are one-off instances where that gambit pays off, statistically, over the long term, it’s going to be a losing strategy.[1]
Only semi-relatedly, I’m fond of rule consequentialism as an alternative to act consequentialism. Leaving aside really technical and abstract considerations about which theory is better or more correct, I think, in practice, following the procedure 'follow the rule that will overall lead to the best consequences over the set of all acts' is a better idea than the procedure 'choose the act that will lead to the best consequences in this instance'. Given realistic ideas about how humans actually think, feel, and behave in real-life situations, I think the 'follow the rule' procedure tends to lead to better outcomes than the 'choose the act' procedure. The 'choose the act' procedure all too easily opens the door to motivated reasoning or just sloppy reasoning, and sometimes gives people, in their minds, a moral license to embrace evil or madness.
The necessary caveat: of course, life is more complicated than either of these procedures allows, and there’s a lot of discernment that needs to be applied on a case-by-case basis. (E.g., just individuating acts and categories of acts and deciding which rules apply to the situation you find yourself in is complicated enough. And there are rare, exceptional circumstances in which the normal rules might not make sense anymore.)
Whenever someone tries to justify something that seems crazy or wrong, such as something deceptive, manipulative, or Machiavellian, on consequentialist grounds, I always see the same sort of flaws in the reasoning. (You typically only see this in fiction, but it also happens on rare occasions in real life, and unfortunately sometimes in mild forms in the EA community.) The choice is typically presented as a false binary: e.g., spend $100 billion on AI data centres as an economic stimulus or do nothing.
This type of thinking overlooks that the space of possible options is almost always immensely large, and is mostly filled with options you can’t currently imagine. People are creative and intelligent to the point of being unpredictable by you (or by anyone), so you simply can’t anticipate the alternative options that might arise if you don’t ram through your 'for the greater good' plan. But, anyway, that’s a big philosophical digression.
I typically don’t agree with much that Dwarkesh Patel, a popular podcaster, says about AI,[1] but his recent Substack post makes several incisive points, such as:
Somehow this automated researcher is going to figure out the algorithm for AGI - a problem humans have been banging their head against for the better part of a century - while not having the basic learning capabilities that children have? I find this super implausible.
Yes, exactly. The idea of a non-AGI AI researcher inventing AGI is a skyhook. It’s pulling yourself up by your bootstraps, a borderline supernatural idea. It’s retrocausal. It just doesn’t make sense.
There are more great points in the post besides that, such as:
Currently the labs are trying to bake in a bunch of skills into these models through “mid-training” - there’s an entire supply chain of companies building RL environments which teach the model how to navigate a web browser or use Excel to write financial models.
Either these models will soon learn on the job in a self directed way - making all this pre-baking pointless - or they won’t - which means AGI is not imminent. Humans don’t have to go through a special training phase where they need to rehearse every single piece of software they might ever need to use.
… You don’t need to pre-bake the consultant’s skills at crafting Powerpoint slides in order to automate Ilya [Sutskever, an AI researcher]. So clearly the labs’ actions hint at a world view where these models will continue to fare poorly at generalizing and on-the-job learning, thus making it necessary to build in the skills that they hope will be economically valuable.
And:
It is not possible to automate even a single job by just baking in some predefined set of skills, let alone all the jobs.
We are in an AI bubble, and AGI hype is totally misguided.
There are some important things I disagree with in Dwarkesh's post, too. For example, he says that AI has solved "general understanding, few shot learning, [and] reasoning", but AI has absolutely not solved any of those things.
Models lack general understanding, and the best way to see that is that they can't do much that's useful in complex, real-world contexts — which is one of the points Dwarkesh is making in the post. Few-shot learning only works well in situations where a model has already been trained on a vast number of similar examples. The "reasoning" in "reasoning models" is, in Melanie Mitchell's terminology, a wishful mnemonic. In other words, just naming an AI system something doesn't mean it can actually do the thing it's named after. If Meta renamed Llama 5 to Superintelligence 1, that wouldn't make Llama 5 a superintelligence.
I also think Dwarkesh is astronomically too optimistic about how economically impactful AI will be by 2030. And he's overfocusing on continual learning as the only research problem that needs to be solved, to the neglect of others.
Dwarkesh's point about the variance in the value of human labour and the O-ring theory in economics also doesn't seem to make sense, if I'm understanding his point correctly. If we had AI models that were genuinely as intelligent as the median human, the economic effects would be completely disruptive and transformative in much the way Dwarkesh describes earlier in the post. General intelligence at the level of the median human would be enough to automate a lot of knowledge work.
The idea that you need AI systems equivalent to the top percentile of humans in intelligence or skill or performance or whatever before you can start automating knowledge work doesn't make sense, since most knowledge workers aren't in the top percentile of humans. This is such an obvious point that I worry I'm just misunderstanding the point Dwarkesh was trying to make.
Good question. I’m less familiar with the self-driving car industry in China, but my understanding is that the story there has been the same as in the United States. Lots of hype, lots of demos, lots of big promises and goals, very little success. I don’t think plans count for anything at this point, since there’s been around 6-10 years of companies making ambitious plans that never materialized.
Regulation is not the barrier. The reason why self-driving cars aren’t a solved problem and aren’t close to being a solved problem is that current AI techniques aren’t up to the task; there are open problems in fundamental AI research that would need to be solved for self-driving to be solved. If governments can accelerate progress, it’s in funding fundamental AI research, not in making the rules on the road more lenient.
Seeing the amount of private capital wasted on generative AI has been painful. (OpenAI alone has raised about $80 billion and the total, global, cumulative investment in generative AI seems like it’s into the hundreds of billions.) It’s made me wonder what could have been accomplished if that money had been spent on fundamental AI research instead. Maybe instead of being wasted and possibly even nudging the U.S. slightly toward a recession (along with tariffs and all the rest), we would have gotten the kind of fundamental research progress needed for useful AI robots like self-driving cars.