
TLDR: No. But I cautiously trust many common EA/rationalist opinions.

When I’m searching for help online, I start some of my search queries with prefixes such as site:lesswrong.com. That means Google will only return search results from LessWrong.

I’ve searched site:lesswrong.com cold shower, site:lesswrong.com optimal tooth brushing, site:lesswrong.com wirecutter, site:astralcodexten.substack.com aromatherapy, and site:forum.effectivealtruism.org where should I live.

LessWrong is the site of the rationalist community. They imply they’re less wrong than everyone else. Astral Codex Ten is a blog by the prominent rationalist Scott Alexander. His rationalist fame suggests he’s especially less wrong. And the EA Forum is the forum of the effective altruism (EA) movement. Effective altruism is about “doing good better.”

But are the rationalists really less wrong? Are the effective altruists truly doing good better? How can I evaluate them in as unbiased a way as possible? After all, I don’t only read rationalist content for general (i.e., Lifehacker-esque) productivity advice. I read what rationalists say about rationality. They influence how I think.

Isn’t that a lot of trust to place in communities I got into because I liked the blog of the Michael Jordan of coding bootcamps?[1] How have the rationalists changed me?[2]

What I’ve Learned From The Rationalists

I think the following list contains the most important lessons about rationality that I’ve learned from rationalists.

Always Be Rational

When I moved to San Francisco in 2015, I caught the startup bug.[3] I came up with new company ideas all the time and dreamed of getting into Y Combinator.

Enough people are the same way that there’s no shortage of advice for wannabe entrepreneurs. I’d hear tropes like “founders should be overconfident,” “fake it till you make it,” and “move fast and break things.” 

I agree with the spirit of those statements.[4] If someone doesn’t believe in themself enough, or they’re not willing to take enough risks, I wouldn’t bet on their startup succeeding. I think it makes sense to “fake it” (i.e., pretend to be confident and/or lie) when appropriate too.

But I wouldn’t take those tropes too seriously. Sometimes I’ve been too confident in my ability. I’ve faked it by telling people I’d complete a task by a certain time and failed to do it. And sometimes, I don’t think it’s worth it to risk breaking something. Meta (Facebook) changed its motto from “Move fast and break things” to “Move fast with stable infrastructure.” That seems fair if they still lose close to $163,565 every minute the app goes down.

I refer to those tropes as reversible advice. Scott Alexander suggests considering the opposite of the advice you’re receiving if 1) there are plausibly near-equal groups of people who need this advice versus the opposite advice, or 2) you’ve self-selected into the group of people receiving this advice by, for example, being a fan of the blog / magazine / TV channel / political party / self-help-movement offering it. 

And The Scout Mindset, by Julia Galef, implies that nobody should be overconfident. It describes how when Elon Musk founded SpaceX, he thought there was a 10% chance that a SpaceX craft would make it into orbit. It states he thought there was a 10% chance Tesla would succeed too.[5] And that Jeff Bezos thought there was a 30% chance Amazon would succeed.[6]

Musk said, “If something's important enough, you should try. Even if the probable outcome is failure.” (Page 113 of The Scout Mindset may have subtly inspired me to use this example.) I think that suggests he makes bets that maximize his expected utility.

Maximize Expected Utility

I define being rational as making decisions that maximize expected utility.

Imagine someone is about to roll a traditional six-sided die.[7] You have the opportunity to bet $1 million that the die will land on 1. If you win, you get another $7 million. Otherwise, you lose everything.

The expected value of this bet is the amount of money you’d expect to make from it. That would be $333,333.33.[8]

So should you make this bet? If you can make this bet an unlimited number of times, and you’d like more money, definitely.

But what if you have exactly $1 million, no source of income, you don’t know what you’d do with $7 million, and you’re only allowed to make this bet once? 

You can use utility points that reflect what you fundamentally value to make this decision. You could fundamentally value anything, such as how long you’ll live, your dignity, or your happiness. Let’s pretend you fundamentally value happiness. You may decide losing $1 million would decrease your happiness by 100 hypothetical happiness points. And winning $7 million would increase your happiness by 200 points. In this case, your expected utility is -50 happiness points.[9] 
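To make the arithmetic concrete, here’s a minimal Python sketch of the calculations the footnotes spell out. It’s just my own illustration; the probabilities and the -100/+200 happiness points are the hypothetical values from the example above.

```python
# Expected value vs. expected utility for the die bet above.
# Win on a 1 (probability 1/6): gain an extra $7 million / +200 happiness points.
# Lose otherwise (probability 5/6): lose the $1 million stake / -100 happiness points.

p_win, p_lose = 1 / 6, 5 / 6

expected_value = p_win * 7_000_000 + p_lose * (-1_000_000)
print(f"Expected value: ${expected_value:,.2f}")                      # ~$333,333.33

expected_utility = p_win * 200 + p_lose * (-100)
print(f"Expected utility: {expected_utility:.0f} happiness points")   # -50
```

The same bet can have a positive expected dollar value and a negative expected utility, which is why the one-shot version looks much less attractive than the repeatable one.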

As someone who wouldn’t want to lose all of my money, would I have made this bet before I’d read about maximizing expected utility? I don’t think so. I was already doing the math implicitly.

I still normally do the math implicitly. But I think it’s been helpful to think more consciously about my values and probabilities, both to make decisions for myself and to resolve disagreements with others.[10]

Values, Probabilities And Semantics Cause All Disagreement

Holden Karnofsky discusses the idea that if people directly stated their values (i.e., what they care about) and probabilities (i.e., their odds something is true), they’d always understand why they disagree with someone. This makes sense to me. Since, as Karnofsky says, people don’t always communicate clearly, I’d say almost every disagreement comes down to at least one of values, probabilities, or semantics (i.e., what people mean by what they say).

I don’t think there’s a foolproof way to resolve a disagreement over values. Who am I to tell you how much happiness you’d receive from winning $7 million?

But hopefully, talking things over can resolve semantic debates. And disagreements over probabilities can be tested.[11]

The Importance Of The Experimental Method

In the second chapter of Harry Potter and the Methods of Rationality[12], by Eliezer Yudkowsky, Professor McGonagall turned into a cat in front of Harry. Harry freaked out.

Harry was breathing in short gasps. His voice came out choked. "You can't DO that!"

"It's only a Transfiguration," said Professor McGonagall. "An Animagus transformation, to be exact."

"You turned into a cat! A SMALL cat! You violated Conservation of Energy! That's not just an arbitrary rule, it's implied by the form of the quantum Hamiltonian! Rejecting it destroys unitarity and then you get FTL signalling! And cats are COMPLICATED! A human mind can't just visualise a whole cat's anatomy and, and all the cat biochemistry, and what about the neurology? How can you go on thinking using a cat-sized brain?"

Professor McGonagall's lips were twitching harder now. "Magic."

"Magic isn't enough to do that! You'd have to be a god!"

And then Harry collected himself. He thought “The March of Reason would just have to start over, that was all; they still had the experimental method and that was the important thing.”

I learned about the scientific method in elementary school. But I never appreciated it until I read this passage. Even if everything I think I know turns out to be wrong, I can always find the truth through experimentation. 

But I think I ultimately believe what I want to believe.

Confirmation Bias Is Everywhere

I’d heard of confirmation bias before I’d heard of the rationalist community. I would’ve defined it as believing what you want to believe. And I’d still use that definition. But I’d narrowly thought of confirmation bias as a reason I’d look for evidence to justify why I’d be successful (e.g., why my startup will succeed) or why I should feel intelligent (e.g., why my political opinion is correct).

I appreciate how The Scout Mindset showed me that confirmation bias could also lead me to believe something negative about myself.[13] For example, I remember April 20, 2020, the first time I was exposed to someone I thought might have coronavirus. My gut instinct was that if my roommate had covid, he’d probably already spread it to me and my other roommates. So there was nothing I could do. I believe I specifically said something like we’re all fucked or screwed.

My assumption that my roommate could’ve already given me covid still feels reasonable. It was convenient and incorrect to assume there was nothing I could do. I could’ve started wearing a mask, socially distanced, and encouraged my roommates to do the same. I could’ve left my house. My personal coronavirus risk tolerance has changed over time. The point is that I didn’t have to assume I was fucked or screwed. I had a choice.

Similarly, from 2016 to 2021, I generally felt 100% certain that I should focus my self-improvement efforts on becoming a better software engineer. After all, it was too late to switch careers. That belief motivated me to code. 

However, I shouldn’t have been so certain. I didn’t have to code. I told myself that so I could believe I didn’t have a decision to make. That made me happy immediately. Yes, thinking about what to do can be stressful. But it’s often worth it.[14]

Conclusion

I don’t think the rationalists have fundamentally reshaped me. Before finding the rationalist community, I wouldn’t have suggested being irrational, ignoring the experimental method, or succumbing to confirmation bias. The rationalists gained my trust by telling me things I already believed, or was open to believing, in ways that helped me introspect.

Granted, I suppose any cult member believes what they’re open to believing.[15] And my trust in EAs/rationalists has shaped my opinions on important issues. I just told my roommate that I leaned against funding gain-of-function research.[16] Until writing that sentence, I thought that was the EA/rationalist stance and that the “expert,” Anthony Fauci, currently supported gain-of-function research. I now see he hasn’t publicly stated support for gain-of-function research since at least 2018.[17]

Most significantly, I lean towards believing the EAs/rationalists are right that there’s at least a 1% chance that an artificial intelligence will cause human extinction over the next century!

But I don’t believe what I said about gain-of-function research or AI as much as I believe things I actually understand.

Ultimately, I think limiting some of my search queries to EA/rationalist websites is a statement about Google’s competence.[18] I believe EAs/rationalists are generally rational and that they have values similar to mine. So I’d rather search site:lesswrong.com exercise on Google than think up a search query to help Google understand my values[19], such as efficient exercise to maximize longevity and mental health.[20]

However, while searching EA/rationalist sites is sometimes a useful heuristic[21], the rationalists have helped me appreciate how easy it is to believe that the convenient answer is the true one. If a question is important enough, I’ll do whatever it takes to find the answer.

(cross-posted from my blog: https://utilitymonster.substack.com/p/brainwashed)

  1. ^

    This post explains how I got into EA. And I found the rationalist community through EA. My impression is that most rationalists are also members of the EA community. So a lot of my trust in the EA community carried over to the rationalist community.

  2. ^

    Throughout this post, I use whichever term out of EA or rationalist that feels more appropriate. Or I use both terms.

  3. ^

    To head off any concern about plagiarism: I noticed that Chapter 8 of The Scout Mindset (pg 105) starts with a similar story and uses the term “theater bug.”

  4. ^

    Although, I could’ve misinterpreted the intended spirit of those statements.

  5. ^

    It doesn’t say what Musk specifically meant by Tesla would succeed. And all the comments where Musk says this came after Tesla and SpaceX had already had some success (i.e., they were worth billions). The earliest statement cited in The Scout Mindset where Musk says he thought one of them would fail is from 2014. I lean towards believing that Musk isn’t trying to appear humble. My impression is that Tesla and SpaceX both nearly went bankrupt in 2008. I imagine he thought it wouldn’t be practical to say publicly that he expected them to fail before they were successful enough.

  6. ^

    Likewise, the earliest statement I could find where Jeff Bezos said he thought Amazon had a 30% chance of success was in 1999, after Amazon was already a public company.

  7. ^

    Page 113 of The Scout Mindset may have inspired me to use this example.

  8. ^

    The die is equally likely to land on 1, 2, 3, 4, 5, or 6. You’d win on 1, one of the six possible outcomes, and earn an additional $7 million: 1/6 * 7,000,000 ≈ 1,166,666.67. You’d lose your $1 million on 2, 3, 4, 5, or 6, five of the six possible outcomes: 5/6 * -1,000,000 ≈ -833,333.33. 1,166,666.67 + (-833,333.33) ≈ $333,333.33.

  9. ^

    Once again, you have a 1/6 chance of winning the bet. In that case, you’d get 200 utility points. And you have a 5/6 chance of losing the bet and losing 100 utility points. 1/6 * 200 + 5/6 * -100 = -50.

  10. ^

    Philosophical Interlude: Imagine you are in a vacuum. A vacuum where all the utility that will ever be experienced by others depends on your actions right now. You can bet on a fair coin. If you bet, you must bet all the utility that will ever be experienced by all beings in all universes. The bet is slightly more than double or nothing. If you win, the utility won will be divvied out to make everyone equally well off. Past unhappy beings will come back to life and receive utility until they’ve become slightly happy overall. Afterward, new slightly happy beings will be born. But losing means extinction. All beings in all universes will die. No new beings will ever be created. 

    If you win, you can make the same bet again. And again. Forever. You won’t age or have any health problems while you bet. While you’re betting, all universes will be paused. Nobody will feel the happiness you win until you finish betting.

    What do you do? (Feel free to adjust the hypothetical to your moral views so you face a similar dilemma.)

    If I hadn’t added, “While you’re betting, all universes will be paused. Nobody will feel the happiness you win until you finish betting,” I’d happily bet. But with that condition, I don’t know. The more I bet, the more likely I am to end the universe without making anyone happier. Yet the expected utility of each bet is positive.

    In practice, I haven’t found a scenario where it doesn’t make sense to maximize expected utility. I’ll watch out for real-life scenarios where payoffs are potentially delayed infinitely.
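    As a loose illustration of that tension, here is a small Monte Carlo sketch. It’s my own addition with made-up parameters (a fair coin and a 2.1x payout, so each individual bet has positive expected utility), not anything from the post or the book:

    ```python
    import random

    # Bet all existing utility on a fair coin, over and over.
    # Each single bet has positive expected utility (0.5 * 2.1 > 1),
    # but a strategy of never stopping almost surely ends in extinction.

    def bet_forever(max_rounds: int = 100) -> float:
        utility = 1.0  # all the utility that will ever exist, normalized to 1
        for _ in range(max_rounds):
            if random.random() < 0.5:
                utility *= 2.1   # win: slightly more than double everything
            else:
                return 0.0       # lose: extinction, nothing left
        return utility

    trials = 100_000
    outcomes = [bet_forever() for _ in range(trials)]
    survivors = sum(1 for u in outcomes if u > 0)
    print(f"Runs that avoided extinction: {survivors} / {trials}")
    print(f"Average realized utility across runs: {sum(outcomes) / trials:.3g}")
    ```

    Every bet looks good in expectation, but chaining them without ever stopping means the winnings are almost never realized, which is exactly the worry about infinitely delayed payoffs.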

  11. ^

    Although, there are some edge cases where the test could take an infinite amount of time (e.g., testing whether the universe takes up infinite space).

  12. ^

    It’s available as an ebook too. I linked to the audio version because I thought it was well done.

  13. ^

    Galef uses the term “motivated reasoning” instead of confirmation bias. In the book (pg 6), she says they mean the same thing.

  14. ^

    I imagine The Scout Mindset isn’t the only resource that demonstrates how confirmation bias could lead someone to believe something negative about themself. But, as of May 16, 2022, I could only find one such example on the first page of Google results when I searched “confirmation bias” or “motivated reasoning.” That example is of someone who believes the world will end concluding, when an apocalypse doesn’t happen, that the end has merely been delayed. I don’t know if reading that example earlier would’ve helped me recognize scenarios like the ones about coronavirus and software engineering above. (And the world might end at some point.)

    The Google results I looked at for confirmation bias are: Wikipedia, Encyclopedia Britannica, VeryWellMind, the abstract of this article, Farnam Street, SimplyPsychology, The Decision Lab, Psychology Today, and Investopedia. For motivated reasoning, I looked at Wikipedia, Psychology Today, Discover Magazine, Oxford Bibliographies, iResearchNet, Forbes, APA, this paper's abstract, and this paper's summary. I didn’t watch any videos from the results.

  15. ^

    And I vaguely remember reading that con artists initially tell you stuff that’s true to earn your trust. Plus, there have been large charity scams before. Although, the entirety of EA being a scam would have to be a massive conspiracy. It’s more likely that some organizations/initiatives associated with the EA and/or rationalist communities are deemed ineffective (e.g., Raising For Effective Giving, No Lean Season; more examples here and here), or have serious issues (e.g., Leverage Research, The Monastic Academy). I also don’t know how I’d measure the effectiveness of many organizations focused on preventing existential risks, and I’d understand if someone felt EA nonprofits were spending too much on overhead. I’d bet some EA nonprofits (e.g., Redwood Research, Open Philanthropy) pay their average employee over six figures. There’s no formal definition of what makes an organization an EA/rationalist organization.

  16. ^

    The linked article’s author, Kelsey Piper, is a member of the EA/rationalist communities.

  17. ^

    He wrote an op-ed calling for gain-of-function research in 2011. And he apparently praised the lifting of the U.S. ban on gain-of-function research in 2018. I haven’t watched the video cited for that claim. I think I got the impression that Fauci clearly supports gain-of-function research today because I didn’t notice the date on a screenshot of his 2011 op-ed in this article.

  18. ^

    And Google may be promoting the values of the Fellowship of Friends.

  19. ^

    If I just search “exercise” on Google, I get articles that state general reasons why exercise is good or exercises that are good for everyone. Here’s my first page of results: Mayo Clinic, Wikipedia, Healthline, WebMD, NHS, NHS again, and Harvard Health. The only result I might go back to at some point is the Wikipedia page. It seemed fairly thorough. I didn’t look at videos, podcasts, or articles labeled as news from my results.

  20. ^

    For example, here’s my first page of article results from googling “efficient exercise to maximize longevity and mental health”: AARP, Time, Longevity.Technology, Blue Zones, Mental Health Foundation, Medical News Today, Andrew Merle, Harvard Health, Washington Post, and Amherst College. My overall takeaway was that there’s a lot of conflicting advice, and no source stood out as great. And here’s a link to LessWrong posts on exercise. This post acknowledges some of the questions I have, but doesn’t answer them. And the author’s statement, “you are now as knowledgeable as any personal trainer I've spoken with,” made me feel he was overconfident.

    I also searched site:astralcodexten.substack.com exercise. And I found this comment and this comment. They were similar to this LessWrong comment. So because I cautiously trust rationalists, and because I didn’t think anything Google showed before seemed better, I’d lean towards looking to those sources if I wanted to learn more about fitness. Not that I ever expect to have much confidence that I’m exercising optimally.

  21. ^

    There isn’t much on LessWrong about cold showers or optimal tooth brushing.


Comments (1)

I think a very healthy epistemic habit is to not be completely caught up in the thoughtspace of any one community. Thinking is a garbage-in-garbage-out process. If all your inputs are from EA/rationalism, then you end up regurgitating EA/rationalism without adding new insights.

One of my favorite EA Forum posts has been this cause investigation for violence against women and girls, which the author mentions is inspired by their work as a healthcare worker and observing violence against women on the ground. Part of the reason this is so valuable is because it is extremely unlikely that EAs would have been exposed to this otherwise. The author combined external knowledge with EA reasoning/cost-effectiveness analysis to create something that neither the average EA nor the average healthcare worker could have created.

Deference to EA/rationalist conventional wisdom is useful in the many domains where you are not an expert. But EA did not get to be this way through parochial thinking. It got this way through new ideas continually being introduced so that good ones float to the top.

In summary: you should be able to point to a sizable number of intellectual influences that are not EA or EA-adjacent. EA is a question, not an answer, and our ability to give good answers is contingent on our continually bringing new perspectives and insights to the marketplace of ideas.
