Yarrow Bouchard 🔸

1350 karma · Canada · strangecosmos.substack.com

Bio

Pronouns: she/her or they/them. 

Parody of Stewart Brand’s Whole Earth button.

I got interested in effective altruism back before it was called effective altruism, back before Giving What We Can had a website. Later on, I got involved in my university EA group and helped run it for a few years. Now I’m trying to figure out where effective altruism can fit into my life these days and what it means to me.

I write on Substack, and used to write on Medium.

Sequences (2)

Criticism of specific accounts of imminent AGI
Skepticism about near-term AGI

Comments (659)

Topic contributions (3)

Other useful terms that mean the same thing as, or something similar to, theory of mind:

A term that means something similar to theory of mind, but dissimilar to how you are using the term here (and to how the term is most often used):

Theory of mind, mentalization, cognitive empathy, and perspective taking are, of course, not actually "rare" but are what almost all people are doing almost all the time. The interesting question is what kinds of failures you think are common. The more opinionated you are about this, and the more you diverge from consensus opinions of experts such as psychologists and researchers in social work, the more likely you are to be wrong.

Whether people are correctly mentalizing or perspective taking or engaging in accurate cognitive empathy is often a controversial and contested question. These disagreements can't be resolved simply by invoking the concept of theory of mind (or a similar term or concept). 

For example, is misogyny or sexism a form of hatred? And if a person or group is taken to have misogynist or sexist views, is it accurate to say that person or group hates women? Are people who make such claims mentalizing incorrectly by misdiagnosing misogyny as hatred of women, or are you mentalizing incorrectly by misdiagnosing their diagnosis as incorrect mentalization? I don't think disputes like this can be resolved by just making uncontroversial assertions about what theory of mind is. And if you're using contested examples like this as the paradigmatic examples upon which the rest of your exploration is built, then your treatment of the topic is probably going to end up assuming its conclusions — and failing to persuade anybody who didn't already accept those conclusions from the outset. 

I'm not sure the concept of net present value meaningfully tells us anything new about world hunger or global poverty. The connection between the concepts of net present value and world hunger is very loose. It is true that there are lots of people (including random pseudonymous people on Twitter) who don't understand important concepts in accounting, finance, and economics. Failure to understand these concepts may lead to bad analysis. But development economists obviously understand these concepts, so if the point is to understand world hunger or global poverty, it would be a better idea to just read an introductory text on international development than to think further about how the concept of net present value might or might not shed new light on global poverty.
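(For readers who want the concept pinned down, here is a minimal sketch of how net present value is typically computed. The discount rate and the cash-flow numbers are hypothetical, chosen purely for illustration.)

```python
# Net present value (NPV): future cash flows are discounted back to the
# present at a fixed rate, so money further in the future counts for less.
def npv(rate: float, cash_flows: list[float]) -> float:
    """Sum of cash_flows (one per period, starting at t = 0), discounted at `rate`."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical example: pay 100 today, receive 40 per year for three years.
print(round(npv(0.05, [-100, 40, 40, 40]), 2))  # prints 8.93 at a 5% discount rate
```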

I personally don't find any value in Grice's maxims. There is a danger in being too general, too abstract, and too vague in the advice you give, such that it comes close to boiling down to 'do good things and don't do bad things'. Or in saying things that are so obvious, such as 'say true things and don't say untrue things', that the advice is pointless, since everybody already knows that.

I find the slogan "ideas matter" unremarkable in a similar way to 'say true things and don't say untrue things'. I don't think anybody disagrees that ideas matter; I would say everyone agrees with that.

If someone were presenting to me the thesis "ideas matter" and it were a somewhat novel or interesting thesis, I would expect it to be something along the lines of looking at ideas in history that had a surprisingly large impact. For example, I recently watched a fascinating interview about the historical importance of textiles. I was surprised by so many things in that interview, and I learned a lot. That video definitely made me think textiles mattered a lot more than I realized. It supported the thesis "ideas in textile innovation matter". What would a case for the thesis "ideas matter" look like? Maybe something like that, but more general. However, I think it's so intuitive and widely believed that science, technology, politics, religion, and scholarship are important that it would be hard to present a case surprising or novel enough to make most people think "ideas matter" non-trivially more than they already did.

Overall theme of this comment:

It's hard to innovate beyond the state of the art, and it's easy to overstate how novel one's own insights are, or to overstate how well-supported one's controversial opinions are. 

That isn't a reason not to explore or celebrate interesting ideas, of course. But it is a reason to change certain aspects of the presentation, such as acknowledging that theory of mind is ubiquitous, not "rare", and acknowledging that your own personal ideas about which failures of theory of mind are common might be completely non-novel or wrong, or at least highly controversial, and go beyond a mere exposition of the concept of theory of mind.

I'm not trying to dampen your enthusiasm, but trying to forestall some combination of (a) presenting old hat as novel or revelatory in a way that verges on plagiarism and (b) presenting controversial and unsupported (or minimally supported) ideas, including some ideas original to you, as being as well-supported as the old hat. I'm not sure either (a) or (b) is where you were going with this post, but I sort of got that feeling from it. Being a science communicator (or economics communicator, etc.) and being a theorist are different roles, and we don't want to mix them up, such that our communication of old, established ideas is mistaken for original theory, or our original theory, which is not yet supported and may be false, is mistaken for old, established ideas.

Happy holidays.

That’s a good and interesting point about environmentalism. I took an environmental philosophy class sometime in the early-to-mid-2010s and very long-term thinking was definitely part of the conversation. As in, thinking many centuries, millennia, or even millions of years into the future. One paper (published in 2010) we read imagined humans in the fourth millennium (i.e. from the year 3000 to 4000) living in "civilization reserves", the inverse of wilderness reserves.

My problem with interventions like improving institutional decision-making is that we are already maximally motivated to do this based on neartermist concerns. Everyone wants governments and other powerful institutions to do a better job making decisions, to do as good a job as possible.

Let’s say you are alarmed about the Trump administration’s illiberalism or creeping authoritarianism in the United States. Does thinking about the future in 1,000 or 10,000 years actually motivate you to care about this more, to do more about it, to try harder? I don’t see how it would. Even if it did make you care a little bit more about it inside yourself, I don’t see how it would make a practical difference to what you do about it.

And taking such a long-term perspective might bring to mind all the nations and empires that have risen and fallen over the ages, and make you wonder whether what happens this decade or the next might fade away just as easily. So, the effect on how much you care might be neutral, or it might make you care a little less. I don’t know — it depends on subjective gut intuition and each individual’s personal perspective.

Also, something like improving governments or institutions is a relay race where the baton is passed between generations, each of which makes its own contribution and has its own impact. Deflecting a big asteroid heading toward Earth is a way for a single organization like NASA to have a direct impact on the far future. But there are very few interventions of that kind. The clearest cases are existential risks or global catastrophic risks originating from natural sources, such as asteroids and pandemics. Every step you take to widen the circle of interventions you consider introduces more irreducible uncertainty and fundamental unpredictability.

I think asteroids and anti-asteroid interventions like NASA’s NEO Surveyor should be a global priority for governments and space agencies (and anyone else who can help). The total cost of solving, say, 95% of the problem (or whatever the exact figure is) is in the ballpark of the cost of building a bridge. I think people look at the asteroid example and think 'ah, there must be a hundred more examples of things just like that'. But in reality it’s a very short list, something like: asteroids, pandemics, nuclear weapons, bioterror, climate change, and large volcanoes. And these vary a lot in how neglected they are.

So, I think longtermism is an instance of taking a good idea — protect the world from asteroids for the price of building a bridge, plus maybe a half dozen other things like that, such as launching a satellite to observe volcanoes — and running with it way too far. I don’t think there is enough meat on this bone to constitute a worldview or a life philosophy that can be generally embraced (although hats off to the few who work on keeping the world safe from asteroids or big volcanoes). That, overall, has been the mistake of effective altruism over the last decade: taking one good idea or a few — like donating a lot of money to cost-effective global health charities — and trying to turn them into an all-encompassing worldview or life philosophy. People are hungry for meaning in their lives; I get it, I am too. But there are healthier and unhealthier ways to pursue that, some more constructive and some more destructive.

But 47% (16 out of 34) put their median year no later than 2032 and 68% (23 out of 34) put their median year no later than 2035, so how significant a finding this is depends on how much you care about those extra 2-5 years, I guess.

Only 12% (4 out of 34) of respondents to the poll put their median year after 2050. So, overall, respondents overwhelmingly see relatively near-term AGI (within 25 years) as at least 50% likely.
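As a quick sanity check, the percentages above can be recomputed from the raw counts in the poll (34 respondents total; the bucket labels below are my own shorthand):

```python
# Recompute the percentages cited above from the raw counts (34 respondents).
total = 34
buckets = {
    "median year no later than 2032": 16,
    "median year no later than 2035": 23,
    "median year after 2050": 4,
}
for label, n in buckets.items():
    print(f"{label}: {n}/{total} = {n / total:.0%}")
# median year no later than 2032: 16/34 = 47%
# median year no later than 2035: 23/34 = 68%
# median year after 2050: 4/34 = 12%
```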

Also, one would hope that it wouldn't be too long before @Forethought has cranked out one or two, as I think finding these is a big part of why they exist...

The EA Forum wiki says the Forethought Foundation was created in 2018. Apparently, though, the new organization, Forethought Research, was launched in 2025 and focuses exclusively on near-term AGI.

The Forethought Foundation apparently shut down in 2024. (According to Will MacAskill’s website and LinkedIn.)

I didn’t realize until now these were two different organizations both run by Will MacAskill, both based in Oxford, with the same name.

So, it seems that the Forethought Foundation ran for six years before shutting down and, in that time, wasn’t able to find a novel, actionable, promising longtermist intervention (other than those that had been discussed before its founding).

I hope that moral progress on animal rights/animal welfare will take much less than 1,000 years to achieve a transformative change, but I empathize with your disheartened feeling about how slow progress has been. Something taking centuries to happen is slow by human (or animal) standards but relatively fast within the timescales that longtermism often thinks about.

The only intervention discussed in relation to the far future at that first link is existential risk mitigation, which indeed has been a topic discussed within the EA community for a long time. My point is that if such discussions were happening as early as 2013 and, indeed, even earlier than that, and even before effective altruism existed, then that part of longtermism is not a new idea. (And none of the longtermist interventions that have been proposed, other than those relating to existential risk, are simultaneously novel, realistic, important, and genuinely motivated by longtermism.) Whether people care if longtermism is a new idea or not is, I guess, another matter.

How much reduction in funding for non-AI global catastrophic risks has there been…?

I agree with your first paragraph (and I think we probably agree on a lot!), but in your second paragraph, you link to a Nick Bostrom paper from 2003, which is 14 years before the term "longtermism" was coined.

I think, independently from anything to do with the term "longtermism", there is plenty you could criticize in Bostrom's work, such as being overly complicated or outlandish, despite there being a core of truth in there somewhere.

But that's a point about Bostrom's work that long predates the term "longtermism", not a point about whether coining and promoting that term was a good idea or not.
