Yarrow Bouchard 🔸

1012 karma · Joined · Canada · medium.com/@strangecosmos

Bio

Pronouns: she/her or they/them. 

I got interested in effective altruism back before it was called effective altruism, back before Giving What We Can had a website. Later on, I got involved in my university EA group and helped run it for a few years. Now I’m trying to figure out where effective altruism can fit into my life these days and what it means to me.

Comments
305

Topic contributions
1

I just posted an answer. I hope you find it helpful!

Hi Andreu. The EA Forum definitely has a lot of stuff about AI because that's the hot topic to talk about, and it sure seems like a lot of people in the movement these days are focused on AI. But according to a survey in 2024, 29% of people in EA named global poverty and global health as their top priority cause area, while 31% named AI risk. So AI risk and global poverty/health are about tied, at least on that metric. (Another way of averaging the data from the same survey puts global poverty/health slightly ahead of AI risk.)

The last survey to ask where people in EA were donating is from way back in 2020. A whole lot has changed since 2020. For what it's worth, 62% of respondents to that survey said they were donating to global health and development charities, 27% said animal welfare, and 18% said AI and "long term". 

The 2020 survey also found 16% of people named global poverty as their top cause, while 14% said AI risks. It's interesting that the top-cause numbers were so close given how lopsided the donation numbers from the same survey were. I would guess that's probably because, regardless of which cause area you think is more important, it's not clear where you would donate if you wanted to reduce AI risk, whereas with global poverty there are many great options, including GiveWell's top charities. So, maybe even now, more people are donating to charities related to global poverty than to AI risk, but I don't know of any actual data on that.

By the way, if you click "Customize feed" on the EA Forum homepage, you can reduce or fully hide posts about any particular topic. So, you could see fewer posts on AI or just hide them altogether, if you want. 

Also, if you want to read posts expressing skepticism about AI risk, the forum has an "AI risk skepticism" tag that makes it easy to find posts about that. You have different options for sorting these posts that will show you different stuff. "Top" (the default) will mostly show you posts from years ago. "New & upvoted" will mostly show you posts from within the last year (including some of mine!).

I mean I agree that independent scrutiny is good, that it's great if someone volunteers to do that, and it would be cool if someone could be paid to do that, but it's way understating it to say the issue with Vetted Causes was an insufficiently "professional tone" or that its work was not "up to the standards of paid full-time professionals". In my view, Vetted Causes did at least one thing that in a professional context would probably be considered an ethics violation, and might even open up an organization to legal liability. 

Specifically, Vetted Causes accused a charity of fraud when that wasn't at all true, and they didn't retract the accusation after people pointed out it wasn't true. That's obviously unethical. A lawsuit definitely wouldn't be worthwhile, nor would it set a good precedent for the EA community, but it's the sort of thing you could sue someone for. It goes beyond just criticism: it's saying something false (something that Vetted Causes should have known better than to believe) in a way that would have been really damaging if people had believed the falsehood.

"Thou shalt not bear false witness against thy neighbor".

Thanks for sharing the papers. Some of those look really interesting. I’ll try to remember to look at these again when I think of it and have time to absorb them. 

What do you think of the Arch Mission Foundation's Nanofiche archive on the Moon?

Wouldn’t a global totalitarian government — or a global government of any kind — require advanced technology and a highly developed, highly organized society? So, this implies a high level of recovery from a collapse, but, then, why would global totalitarianism be more likely in such a scenario of recovery than it is right now? 

I have personally never bought the idea of “value lock-in” for AGI. It seems like an idea inherited from the MIRI worldview, which is a very specific view on AGI with some very specific and contestable assumptions about what AGI will be like and how it will be built. For instance, the concept of “value lock-in” wouldn’t apply to AGI created through human brain emulation. And for other technological paradigms that could underlie AGI, are they like human brain emulation in this respect or unlike it? But this is starting to get off-topic for this post. 

I guess you can put a lot of meaning into a little symbol. I wouldn’t interpret a cross or an astrology sign as conveying a sense of superiority, necessarily, I would just think that person is really into being Christian or really into astrology. 

If you see someone wearing a red ribbon relating to HIV/AIDS, I guess you could have the Curb Your Enthusiasm reaction of: “Wow, so they’re trying to act like they’re so much better than me because they care so much about AIDS? What a jerk!” Or you could just think, “Oh, I guess they care about AIDS for some reason.”

I’ve never perceived anyone to be using the little blue and orange diamond icons to signal superiority. I interpret it as something more supportive and positive. It’s reassuring to see other people do something altruistic so you don’t feel crazy for doing it, and making a sacrifice feels more bearable when you see other people doing it too. (Imagine how different it would feel if when you donated blood, you did it completely alone in an empty room vs. seeing lots of other people around who are giving blood at the same time too.)

I’ve never observed anyone trying to police someone over donating 10% of their income, or trying to pressure them to take the pledge, or judging them for not taking it. For all I know, that has happened to somebody somewhere, I’ve just never seen it, personally. 

I would say don’t worry too much about the 10% income pledge and just focus on whatever amount of donating or way of donating makes sense for you personally. 

I would be concerned about people deciding to delay their donating by 40-50 years (or whatever it is), since there are probably huge opportunity costs. I hope that in 40-50 years all the most effective charities are way less cost-effective than the most effective charities today because we will have made so much progress on global poverty, infectious diseases, and other problems. I hope malaria and tuberculosis aren’t ongoing concerns in 40-50 years, meaning the Against Malaria Foundation wouldn’t even exist anymore — mission accomplished! But you said you’re already donating about 1% of your income every year, so you’re not holding off completely on donating. 

Hi Zoe. I'm glad you've crossed over from lurking to participating. I gave this post an upvote even though I disagree with a lot of it, much as I wanted to agree. I agree with this part:

the EA community does (subconsciously) enforce quite a bit of uniformity in thoughts and actions — everyone generally agrees on the most important causes and the most effective ways to contribute to these causes

The conformity is way too high: there's far too much internal agreement and far too little internal disagreement.

When I was involved in organizing my university EA group, one conversation we had was about the value of art. Someone in our group talked about a novel she had found important and impactful. Can we really say that anti-malarial bednets are more important than art? I think a lot of people in EA feel (and, indeed, people in our EA group at the time felt) a temptation to argue back against this point. But there's a more intriguing and more expansive conversation to be had if you don't argue back, take a breath, and really consider her point. (For example, have you considered the impact sci-fi has had on real-life science and technology? Have you considered the role fiction plays in teaching us moral lessons? Or in understanding emotions and relationships, which are what life is all about?)

I think, in general, it's way more interesting to have a mix of people with diverse personalities, interests, and points of view, even when that means sometimes entertaining some off-the-wall ideas. (I don't think what that person said about art was off-the-wall at all, but talk to enough random people about EA online or in real life and you'll eventually hear something unexpected.)

This is the part of your post I have the hardest time with:

I wonder if the orange or blue diamonds are sending the right signals (do we have data on how people hear about the pledge vs. their chance of taking it?). The little icon next to user names in social media is giving “cult” vibes again (think a cross or an astrological sign next to someone’s user name).

Is the little orange or blue diamond so different from someone having an emoji in their username, or, in real life, wearing a little pink or red ribbon for breast cancer or HIV/AIDS awareness? I have a hard time relating to your perspective because if on Twitter or wherever I saw someone put a cross or an astrological sign next to their name, I think I would just assume they are religious or really into astrology. I wouldn't find it particularly scary or cult-y.

Personally I wish the EA Forum had more ways to zhuzh up how your username appears on posts and comments. The little diamonds are the only bit of colour we get around here. 

Full-on profile pictures embedded in posts and comments might be too distracting, but I don't know... coloured usernames? Little badges to represent things like your country, your favourite cause area, or your identity (e.g. LGBT)? One advantage of something like this, besides the zhuzh, is that it makes it easier to remember who's who without having to memorize everyone's names. The little blue and orange diamonds already help a bit with this.

[Edit: I decided to zhuzh up my username with emojis because it looks ridiculous but also kinda cute and it really made me laugh. Lol.]

having my name on a public list and being asked to report my donations all the time for the rest of my life would definitely overwhelm me to the point of deterrence

Is this really what Giving What We Can asks you to do these days? I took the 10% pledge back in 2008 or 2009. I have no idea if my name is still on a public list, and I don't think I have ever once reported my donations. I can empathize with hating the administrative burden part of it because I really struggle with admin tasks of all kinds (I think a lot of people do), and I find a lot of admin stuff miserable and demoralizing. 

I guess the point of reporting your donations is so that GWWC can say how much money people are donating as part of this movement, but obviously that's of secondary importance (a very, very distant second) to actually donating the money. I always saw the 10% pledge as a personal, spiritual commitment and not a promise I made to anyone else. Nor as something I was obligated to report. It's a reminder to myself of what my values are: "hey, remember you said you were going to do this??"

So, if you feel you want to do the pledge but don't want to do the admin, just do the pledge and don't do the admin. :)

In fact, wouldn’t it be much easier in general for people to conceptualize and pledge a certain % of their total assets to EA causes upon passing instead of doing it every year?

Would it be? You'd be asking people to think about dying, which isn't easy. Also, you'd be asking them to write a will, which is a lot of admin! 

Also, if the average person who is interested in EA is 38 years old — which is Will MacAskill's age — and their average life expectancy is 80, doesn't that mean no one would donate anything to charity for, on average, the next 42 years? And wouldn't that be really bad? 

I think your idea of donating a percentage of your passive income from capital gains to charity after you retire early is perfectly fine — that's just donating a percentage of your income, which is the whole idea in the first place. Maybe you'll want to donate less than 10% and that's fine too. 

I think everyone should find what works for their particular situation. The 10% pledge is formulated to be something that could apply to the majority of the population in high-income countries, but not something that necessarily makes the most sense for everyone in those countries. 

“Sound like AI”... When I talk to my EA friends, they don’t sound like AI-generated academic papers...

"Sounds like AI" is the wrong way to put this. Posts on the EA Forum don't sound like AI. They have a distinct voice that is different from ChatGPT, Claude, or Gemini. LLMs have a distinctive bland, annoying, breathless, insubstantial, and absolutely humourless style. The only thing really similar to the EA Forum style and LLM style is the formal tone. Maybe EA Forum posts sound like academic papers, but they don't sound like AI-generated academic papers.

I know because I've read a lot of stuff on the EA Forum and a lot of stuff written by AI. I can really tell the difference.

EA is also associated with obscure (to gen pop) concepts like longtermism, accelerationism, micromorts etc. ... When I talk to my EA friends... our colloquial/ less researched exchanges can feel more convincing than reading way too many stats and big words.

This is more accurate. EA/the EA Forum has its own weird subculture and sublanguage, and it's pretty annoying. People use lingo and jargon that isn't useful or clear, and sometimes has never even been defined. I hate the term "truthseeking" for this reason: what does it mean? (As far as I know, it's literally never been defined, anywhere, by anyone. And it's ambiguous. So, why is that term helpful or necessary?) People assume too much background knowledge and don't explain things in an accessible way; more accessible explanations wouldn't just help newcomers, they would help everyone. 

What you said about casual, informal conversations with your EA friends being more persuasive is an argument in favour of people in EA having more casual, informal conversations on the EA Forum, or on podcasts, or whatever. Before I read your post, I already had the intuition that this would be a good idea. 

I want to suggest to everyone the concept of doing public dialogues on the EA Forum, following the model of the Slack chats that FiveThirtyEight used to do on their blog. The FiveThirtyEight staff would pick a topic, chat about it on Slack, and then do some light editing (e.g. to add links/citations). Then they'd publish that on their blog. I think this could work really well for the EA Forum. You could either do the chat in real time (synchronously) or take time doing it (asynchronously). But I think it would be more fun if people didn't spend too much time writing each message, and if they tried to be more casual and informal and conversational than EA Forum posts typically are. I just have a hunch that this would be a good format. (And anyone can message me if they want to try this with me.)

In terms of length, personally, I'm not as concerned with how long something is as I am with its economy of words. What I don't like is when something is longer than it could have been. If something's long but it's still as short as it could have been, that's great. (That's why books exist!!) If something's long and I feel like it could have been 20% of its length, that's a huge drag. If something's short but it makes a complete point and says everything it really needs to say, that's like a delightful piece of candy. I love reading stuff like that. But not everything can be candy. (And if we feel like it should be, maybe we can blame Twitter for conditioning us to want everything said in 140-280 characters.)

What makes something feel longer or shorter is also how enjoyable it is to read, so it's also a matter of craft and style. 
 

I think where academic publishing would be most beneficial for increasing the rigour of EA’s thinking would be AGI. That’s the area where Tyler Cowen said people should “publish, publish, publish”, if I’m correctly remembering whatever interview or podcast he said that on. 

I think academic publishing has been great for the quality of EA’s thinking about existential risk in general. If I imagine a counterfactual scenario where that scholarship never happened and everything was just published on forums and blogs, it seems like it would be much worse by comparison. 

Part of what is important about academic publishing is exposure to diverse viewpoints in a setting where the standards for rigour are high. If some effective altruists started a Journal of Effective Altruism and only accepted papers from people with some prior affiliation with the community, then that would probably just be an echo chamber, which would be kind of pointless. 

I liked the Essays on Longtermism anthology because it included critics of longtermism as well as proponents. I think that’s an example of academic publishing successfully increasing the quality of discourse on a topic. 

When it comes to AGI, I think it would be helpful to see some response to the ideas about AGI you tend to see in EA from AI researchers, cognitive scientists, and philosophers who are not already affiliated with EA or sympathetic to its views on AGI. There is widespread disagreement with EA’s views on AGI from AI researchers, for example. It could be useful to read detailed explanations of why they disagree. 

Part of why academic publishing could be helpful here is that it’s a commitment to serious engagement with experts who disagree in a long-form format where you’re held to a high standard, rather than ignoring these disagreements or dismissing them with a meme or with handwavy reasoning or an appeal to the EA community’s opinion — which is what tends to happen on forums and blogs. 

EA really exists in a strange bubble on this topic. Its epistemic practices are unacceptably bad, scandalously bad (if it's a letter grade, it's an F in bright red ink), and people in EA could really improve their reasoning in this area by engaging with experts who disagree, not with the intent to dismiss or humiliate them, but to actually try to understand why they think what they do and seriously consider if they're right. (Examples of scandalously bad epistemic practices: many people in EA have apparently never once even heard that an opposing point of view on LLMs scaling to AGI exists, despite it being the majority view among AI experts, let alone understood the reasons behind that view. Some people in EA openly mock people who disagree with them, including world-class AI experts. And, in at least one instance, someone with a prominent role responded to an essay on AI safety/alignment that expressed an opposing opinion without reading it, just based on guessing what it might have said. These are the sort of easily avoidable mistakes that predictably lead to having poorly informed and poorly thought-out opinions, which, of course, are more likely to be wrong as a result. Obviously these are worrying signs for the state of the discourse, so what's going on here?)

Only weird masochists who dubiously prioritize their time will come onto forums and blogs to argue with people in EA about AGI. The only real place where different ideas clash online, Twitter, is completely useless for serious discourse, and, in fact, much worse than useless, since it always seems to end up causing polarization, people digging in on opinions, crude oversimplification, and in-group/out-group thinking. Humiliation contests and personal insults are the norm on Twitter, which means people are forming their opinions not based on considering the reasons for holding those opinions, but based on needing to “win”. Obviously that’s not how good thinking gets done.

Academic publishing — or, failing that, something that tries to approximate it in terms of the long-form format, the formality, the high standards for quality and rigour, the qualifications required to participate, and the norms of civility and respect — seems the best path forward to get that F up to a passing grade. 

M-Discs are certainly interesting. What's complicated is that the company that invented M-Discs, Millenniata, went bankrupt, and that has sort of introduced a cloud of uncertainty over the technology. 

There is a manufacturer, Verbatim, with the license to manufacture discs using the M-Disc standard and the M-Disc branding. Some customers have accused Verbatim of selling regular discs with the M-Disc branding at a huge markup. This accusation could be completely wrong and baseless (Verbatim has denied it), but it's sort of hard to verify what's going on anymore. 

If Millenniata were still around, they would be able to tell us for sure whether Verbatim is still complying properly with the M-Disc standard and whether we can rely on their discs. I don't understand the nuances of optical disc storage well enough to really know what's going on. I would love to see some independent third-party who has expertise in this area and who is reputable and trustworthy tell us whether the accusations against Verbatim are really just a big misunderstanding. 

Millenniata's bankruptcy is an example of the unfortunate economics of archival storage media. Rather than pay more for special long-lasting media, it's far more cost-effective to use regular, short-term storage media — today, almost entirely hard drives — and periodically copy over the data to new media. This means the market for archival media is small. 

As for how many physical locations digital data is kept in, that depends on what it is. The CLOCKSS academic archive keeps digital copies of 61.4 million academic papers and 550,000 books in 12 distinct physical locations. I don't know how Wikipedia does its backups, mirroring, or archiving internally, but every month an updated copy of the English Wikipedia is released that anyone can download. Given Wikipedia's openness, it is unusually well-replicated across physical locations, just considering the number of people who download copies. 

I also don't know how the EA Forum manages its backups or archiving internally, but a copy of posts can be saved using the Wayback Machine, which will create at least 2 additional physical copies on the Internet Archive's servers. I don't know what Google does with YouTube videos. I think for Google Drive data they keep enough data to recover files in at least two physically separate datacentres, but those could be two datacentres in the same region. I also don't know if they do the same for YouTube data — I hope so.

I think in the event of a global catastrophe like a nuclear war, what we should think about is not whether the data would physically survive somewhere on a hard drive, but, more practically, whether it would ever actually be recovered. If society is in ruins, then it doesn't really matter if the data physically survives somewhere unless it can be accessed and continually copied over so that it's preserved. Since hard drives last for such a short time, the window of time for society to recover enough to find, access, and copy the data from hard drives is quite narrow.

I don't know if you were asking about paper books or ebooks, but for paper books, it seems clear that for any book on the New York Times bestseller list, there must be at least one copy of that book in many different libraries, bookstores, and homes in many locations. I don't know how to think about the probability of copies ending up in Argentina, Iceland, or New Zealand, but it seems like at least a lot of English bestsellers must end up in various libraries, stores, and homes in New Zealand. 

Paper books printed on acid-free paper with a 2% alkaline reserve, which, as far as I understand, has been the standard for paper books printed over the last 20 years or so, are expected to last over 100 years provided they are kept in reasonably cool, dry, and dark conditions. I'm not sure how exactly the estimated longevity would change for books kept in a tropical climate vs. a temperate one. The 2% alkaline reserve is there so that as the natural acid in the paper's cellulose is slowly released over time, the alkaline reserve counteracts it and keeps the paper neutral. Paper is really such a fascinating technology and more miraculous than we give it credit for. 

Vinyl records are more important for preserving culture (specifically music) than for preserving knowledge or information, but it's interesting that vinyl sales are so high and that vinyl would probably end up being the most important technology for the preservation of music in some sort of global disaster scenario. In 2024, the top ten bestselling albums on vinyl in the U.S. sold between 175,000 copies (for Olivia Rodrigo at #10) and 1,489,000 copies (for Taylor Swift at #1). The principle here is the same as for paper books. You have to imagine these records are spread out all over the United States. Given that both vinyl records and many of the same musicians are popular in other countries like Canada, the UK, Australia, and New Zealand, it seems likely there are many copies elsewhere in the world too. 

Since looking into this topic, I have warmed considerably on vinyl. I didn't really get the vinyl trend before. I guess I still don't, really, but now I think vinyl is a wonderful thing, even if the reasons people are buying it are not that it makes the preservation of music more resilient to a global disaster.

I didn't need any convincing to be fond of paper books, but paper just seems more and more impressive the more I think about it. 
