Yarrow Bouchard 🔸

1359 karma · Joined · Canada · strangecosmos.substack.com

Bio

Pronouns: she/her or they/them. 

[Profile image: parody of Stewart Brand’s Whole Earth button.]

I got interested in effective altruism back before it was called effective altruism, back before Giving What We Can had a website. Later on, I got involved in my university EA group and helped run it for a few years. Now I’m trying to figure out where effective altruism can fit into my life these days and what it means to me.

Sequences (2)

Criticism of specific accounts of imminent AGI
Skepticism about near-term AGI

Comments (620)

Topic contributions (2)

Sometimes for social change, having the older generation die off or otherwise lose power is useful. There's not much our hypothetical activist could do to accelerate that. One might think, for instance, that a significant decline in religiosity and/or the influence of religious entities is a necessary reagent in this model. While one could in theory put money into attempting to reduce the influence of religion in 1900s public life, I think there would be good reasons not to pursue this approach. Rather, I think it could make more sense for the activist to let the broader cultural and demographic changes do some of the hard work for them.

I don't agree with this causal model/explanatory theory.

This is an at least partly deterministic theory of culture, one that says culture is steered by forces that can't themselves be steered by human creativity, agency, knowledge, or effort. I don't agree with that view. I think culture is changed by what people decide to do.

That's not an accounting trick in my book -- there are clear redistributive effects here. If I spend my money on basic science to promote hologram technology, the significant majority of the future benefits of my work are likely going to flow to future for-profit hologram companies, future middle-class+ people in developed countries, and so on. Those aren't the benefits I care about, and Big Hologram isn't likely to pay it forward by mailing a bunch of holograms to disadvantaged children (in your terminology, they are going to free-ride off my past efforts).

That depends on two assumptions:

  1. If I fund the research, no one else will later subsidize the technology and provide it for free.
  2. If I don't fund the research, somebody else will.

Both assumptions could in principle be true, and maybe we can imagine a scenario where you would have good reasons to believe both of them, but in practice I think it's rare that we ever really know things like that. So, while it's possible to imagine scenarios where the upfront money will definitely be supplied by someone else and the down-the-line money definitely won't, what does that tell us about whether funding the research is a good idea in practice?

The hologram example is making the point: if producing an outcome requires a certain pool of dollars, the overall cost-effectiveness of producing that outcome doesn't change regardless of which of those dollars are yours. I think your point is: your marginal cost-effectiveness could be much higher or lower depending on what's going to happen if you do nothing. That's true; I just don't think we can actually know what's going to happen if you do nothing, and even the best version of that estimate still seems to be guesswork or hunches.
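To make the overall-versus-marginal distinction concrete, here is a toy sketch with entirely made-up numbers (none of the figures come from anywhere; they only illustrate the arithmetic):

```python
# Toy illustration of overall vs. marginal cost-effectiveness.
# All numbers are invented for the example.

upfront_research = 50_000_000   # hypothetical cost of funding the basic science
later_subsidy = 50_000_000      # hypothetical cost of subsidizing/distributing the tech later
beneficiaries = 1_000_000       # hypothetical number of people ultimately helped

# Overall cost-effectiveness of the whole project is fixed by the totals,
# regardless of whose dollars are whose:
overall = beneficiaries / (upfront_research + later_subsidy)           # 0.01 people per dollar

# Marginal cost-effectiveness of *my* donation depends on the counterfactual.
# If the later subsidy happens regardless, but the research happens only if I fund it:
marginal_if_research_is_bottleneck = beneficiaries / upfront_research  # 0.02 people per dollar

# If someone else would have funded the research anyway, my donation changes nothing:
marginal_if_fully_replaceable = 0.0

print(overall, marginal_if_research_is_bottleneck, marginal_if_fully_replaceable)
```

The numbers are arbitrary; the point is only that the overall ratio stays fixed while the marginal figure swings entirely on the counterfactual assumptions.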

It also seems like an oddly binary choice of the sort that doesn't really exist in real life. If you have significant philanthropic money, can you really not affect what others do? Let's flip it: if another philanthropist said they would subsidize holograms down the line, that would affect what you would do. So, why not think you have the same power?

What seems to be emerging here is an overall theme: 'the future will happen the way it's going to happen regardless of what we do about it' vs. 'we have the agency to change how events play out starting right now'. I definitely believe the latter and disbelieve the former. We have agency. And, at the same time, we can't predict the future.

Who was it who recently quoted someone, maybe the physicist David Deutsch or the psychologist Steven Pinker, saying something like: how terrible would it be if we could predict the future? Because that would mean we had no agency.

The first post listed there is from March 2, 2020, so that's relatively late in the timeline we're considering, no? That's 3 days later than the February 28 post I discussed above as the first/best candidate for a truly urgent early warning about covid-19 on LessWrong. (2020 was a leap year, so there was a February 29.)

That first post from March 2 also seems fairly simple and not particularly different from the February 28 post (which it cites).

YARROW: Boy, one would have to be a complete moron to think that COVID-19 would not be a big deal as late as Feb 28 2020, i.e. something that would imminently upend life-as-usual. ... What kind of complete moron would not see what’s happening here? Why is lesswrong patting themselves on the back for noticing something so glaringly obvious?

Not at all accurate. That's not what I'm saying. It was a situation of high uncertainty, and the appropriate response was to be at least somewhat unsure, if not very unsure — yes, take precautions, think about it, learn about it, follow the public health advice. But I don't think anyone on February 28 knew for sure what would happen, as opposed to making an uncertain call that turned out to be correct. The February 28 post I cite gives that sort of uncertain, precautionary advice, and I think it's more or less reasonable advice — just a general 'do some research, be prepared' sort of thing.

It's just that the post goes so far in patting itself on the back for being way ahead on this, when if someone in the LessWrong community had just posted about the CDC's warning on the same day it was issued or had posted about it when San Francisco declared a public health emergency, or had made post noting that the S&P 500 had just fallen 7.5% and maybe that is a reason to be concerned, that would have put the first urgent warning about the pandemic a few days ahead of the February 28 post. 

The takeaway of that post, and of people who congratulate the LessWrong community on calling covid early, is that this is evidence that reading Yudkowsky's Sequences or LessWrong posts promotes superior rationality, and is a vindication of the community's beliefs. But that is the wrong conclusion to draw if something like 10-80% of the overall North American population (figures loosely based on polling cited in another comment) was at least equally concerned about covid-19 at least as early. 99.999% of the millions of people who were at least as concerned at least as early as the LessWrong community haven't read the Sequences and don't know what LessWrong is. A strategy that would have worked better than reading the Sequences or LessWrong posts is: just listen to what the CDC and state and local public health authorities are saying.

It's ridiculous to draw the conclusion that this a vindication of LessWrong's approach.

Dominic Cummings cited seeing the smoke as being very influential in jolting him to action (and thus impacting UK COVID policy), see screenshot here.

I don't see this as a recommendation for LessWrong, although it sure is an interesting historical footnote. Dominic Cummings doesn't appear to be a credible person on covid-19. For example, in November 2024 he posted a long, conspiratorial tweet which included:

"The Fauci network should be rolled up & retired en masse with some JAILED. 
And their media supporters - i.e most of the old media - driven out of business."

The core problem there is not that he hasn't read LessWrong enough. (Indeed, reading LessWrong might make a person more likely to believe such things, if anything.)

Incidentally, Cummings also had a scandal in the UK around allegations that he inappropriately violated the covid-19 lockdown and subsequently wasn't honest about it.

My personal experience: As someone living in normie society in Massachusetts USA but reading lesswrong and related, I was crystal clear that everything about my life was about to wrenchingly change, weeks before any of my friends or coworkers were. And they were very weirded out by my insistence on this.

Tens of millions if not hundreds of millions of people in North America had experiences similar to this. The level of alarm spread across the population gradually from around mid-January to mid-March 2020, so at any given time, there were a large number of people who were much more concerned than another large number of people.

I tried to convince my friends to take covid more seriously a few days before the WHO proclamation, the U.S. state of emergency declaration, and all the rest made it evident to them that it was time to worry. I don't think I'm a genius for this — in fact, they were probably right to wait for more convincing evidence. If we were to re-run the experiment 10 times or 100 times, their approach might prove superior to mine. I don't know. 

A funny example that sticks in my memory is a tweet by Eliezer from March 11 2020. Trump had just tweeted:

This is ridiculous. Do you think this sort of snipe is at all unique to Eliezer Yudkowsky? Turn on Rachel Maddow or listen to Pod Save America, or follow any number of educated liberals on Twitter (especially those with relevant expertise, or journalists who cover science and medicine), and you would see this kind of thing all the time. It's not an insight unique to Yudkowsky that Donald Trump says ridiculous and dangerous things about covid or many other topics.

I haven't looked into it, but any and all new information that can give a fuller picture is welcome.

I recommend looking at the Morning Consult PDF and checking the different variations of the question to get a fuller picture. People also gave surprisingly high answers for other viruses like Ebola and Zika, but not nearly as high as for covid.

Let's look at the data a bit more thoroughly.

It's clear that in late January 2020, many people in North America were at least moderately concerned about covid-19. 

I already gave the example of some stores in a few cities selling out of face masks. That's anecdotal, but a sign of enough fear among people to be noteworthy.

What about the U.S. government's reaction? The CDC issued a warning about traveling to China on January 28, and on January 31 the U.S. federal government declared a public health emergency, implemented a mandatory 14-day quarantine for travelers returning from China, and implemented other travel restrictions. Both the CDC warning and the travel restrictions were covered in the press, so many people knew about them, but even before that happened, a lot of people said they were worried.

Here's a Morning Consult poll from January 24-26, 2020:

An Ipsos poll of Canadians from January 27-28 found similar results:

Half (49%) of Canadians think the coronavirus poses a threat (17% very high/32% high) to the world today, while three in ten (30%) think it poses a threat (9% very high/21% high) to Canada. Fewer still think the coronavirus is a threat to their province (24%) or to themselves and their family (16%).

Were significantly more than 37% of LessWrong users very concerned about covid-19 around this time? Did significantly more than 16% think covid-19 posed a threat to themselves and their family?

It's hard to make direct, apples-to-apples comparisons between the general public and the LessWrong community. We don't have polls of the LessWrong community to compare to. But those examples you gave from January 24-January 27, 2020 don't seem different from what we'd expect if the LessWrong community was at about the same level of concern at about the same time as the general public. Even if the examples you gave represented the worries of ~15-40% of the LessWrong community, that wouldn't be evidence that LessWrong users were doing better than average.

I'm not claiming that the LessWrong community was clearly significantly behind. If it was behind at all, it was only by a few days or maybe a week tops (not much in the grand scheme of things), and the evidence isn't clear or rigorous enough to definitively draw a conclusion like that. My claim is just that the LessWrong community's claim to have called the pandemic early is pretty clearly false or at least, so far completely unsupported.

I don't think anyone should be able to confidently say that we are more than a single 10x or breakthrough away from machines being smarter than us.

Very prominent deep learning experts who are otherwise among the most bullish public figures in the world on AI, such as Ilya Sutskever (AlexNet co-author, OpenAI co-founder and Chief Scientist, now runs Safe Superintelligence) and Demis Hassabis (DeepMind co-founder, Google DeepMind CEO, Nobel Prize winner for AI work), both say that multiple research breakthroughs are needed. Sutskever specifically said that another 100x scaling of AI wouldn't be that meaningful. Hassabis specifically names three breakthroughs that are needed: continual learning, world models, and System 2 thinking (reasoning and planning) — that last one seems like it might be more than a single research breakthrough, but this is how Hassabis frames the matter. Sutskever and Hassabis are the kind of AI capabilities optimists that people cite to bolster arguments for short timelines, and even they're saying this.

There are other world-class experts who say similar things, but they are better known as skeptics of LLMs. Yann LeCun (Meta AI's departing Chief Scientist, who won the Turing Award for his pioneering work in deep learning) and Richard Sutton (who won the Turing Award for his pioneering work in reinforcement learning) have both argued that AGI or human-level AI will take a lot of fundamental research work. LeCun and Sutton have also both taken the exceptional step of sketching out a research roadmap to AGI/human-level AI: LeCun's APTAMI and Sutton and co-authors' Alberta Plan, respectively. They are serious about this, and both are actively working on this research.

I'm not cherry-picking; this seems to be the majority view. According to a survey from early this year, 76% of AI experts don't think LLMs or other current AI techniques with scale to AGI.

That seems like the crux of the matter!

It might, but I cited a number of data points to try to give an overall picture. What's your specific objection/argument?
