Summary: In the EA movement, we frequently use debate to resolve differing opinions about the truth. However, debate is not always the best way to figure out the truth. In some situations, the technique of collaborative truth-seeking may work better, as this article describes.

Acknowledgments: Thanks to Michael Dickens, Denis Drescher, Claire Zabel, Boris Yakubchik, Pete Michaud, Szun S. Tay, Alfredo Parra, Michael Estes, Alex Weissenfels, Peter Livingstone, Jacob Bryan, Roy Wallace, Aaron Thoma, and other readers who prefer to remain anonymous for providing feedback on this post. The author takes full responsibility for all opinions expressed here and any mistakes or oversights.

The Problem with Debates

All of us in the Effective Altruism (EA) movement aim to accomplish the broad goal of doing the most good per dollar. However, we often disagree on the best methods for doing the most good.

When we focus on these disagreements, it can be easy to forget the goals we share. This focus on disagreements raises the danger of what Freud called the narcissism of small differences: splintering and infighting that create out-groups. Many social movements have splintered over such minor disagreements, and this is a danger to watch out for within the EA movement.

At the same time, it's important to bring our differences of opinion to light and to resolve them effectively. Within the EA movement, the usual method of hashing out such disagreements in order to discover the truth has been debate, in person or online.

Yet more often than not, people on opposing sides of a debate end up seeking to persuade rather than prioritizing truth discovery, as has already been noted on the EA Forum. Indeed, research suggests that debate has a specific evolutionary function: not to discover the truth, but to ensure that our perspective prevails within a tribal social context. No wonder debates are so often compared to wars.

We may hope that members of the EA movement, who after all share the goal of doing the most good, would strive to discover the truth during debates. Yet because we are not always fully rational and strategic in our social engagements, it is easy to slip into debate mode and orient toward winning instead of uncovering the truth. Heck, I know that in the midst of a heated debate I sometimes forget that I may be the one who is wrong – I'd be surprised if this didn't happen to you. So while we should certainly continue to engage in debates, I propose that we also use additional strategies – less natural and intuitive ones – that put us in a better mindset for updating our beliefs and improving our perspective on the truth. One such strategy is a mode of engagement called collaborative truth-seeking.

Collaborative Truth-Seeking

Collaborative truth-seeking is a more intentional approach, drawn from the practice of rationality, in which two or more people with different opinions engage in a process focused on finding out the truth. It is a modality best used among people with shared goals and a shared sense of trust.

Some important features of collaborative truth-seeking that are often absent from debates include: a desire to change one's own mind toward the truth; a curious attitude; sensitivity to others' emotions; an effort to avoid arousing emotions that hinder belief updating and truth discovery; and trust that all other participants are doing the same. These features contribute to increased social sensitivity, which, together with other attributes, correlates with higher group performance on a variety of activities, such as figuring out the truth and making decisions.

The process of collaborative truth-seeking starts with establishing trust, which will help increase social sensitivity, lower barriers to updating beliefs, increase willingness to be vulnerable, and calm emotional arousal. The following techniques are helpful for establishing trust in collaborative truth-seeking:

  • Share weaknesses and uncertainties in your own position

  • Share your biases about your position

  • Share your social context and background as relevant to the discussion

    • For instance, I grew up poor after my family immigrated to the US when I was 10, and this naturally inclines me to care more about poverty than about some other issues

  • Vocalize curiosity and the desire to learn

  • Ask the other person to call you out if they think you're getting emotional or engaging in emotive debate instead of collaborative truth-seeking, and consider using a safe word

Here are additional techniques that can help you stay in collaborative truth-seeking mode after establishing trust:

  • Self-signal: signal to yourself that you want to engage in collaborative truth-seeking, instead of debating

  • Empathize: try to understand the perspective you do not hold by considering where the other person's viewpoint came from and why they think what they do, and by recognizing that they feel their viewpoint is correct

  • Keep calm: be prepared to manage your own emotions, and those of the people you engage with, when the desire to debate arises

    • Watch out for defensiveness and aggressiveness in particular

  • Go slow: take the time to listen fully and think fully

  • Consider pausing: if you can't deal with complex thoughts and emotions in the moment, give yourself an escape route by pausing and picking up the discussion later

    • Say “I will take some time to think about this,” and/or write things down

  • Echo: paraphrase the other person’s position to indicate and check whether you’ve fully understood their thoughts

  • Support your collaborators: orient toward improving the other person’s points to argue against their strongest form

  • Stay the course: be passionate about wanting to update your beliefs, maintain the most truthful perspective, and adopt the best evidence and arguments, no matter if they are yours or those of others

  • Be diplomatic: when you think the other person is wrong, strive to avoid saying "you're wrong because of X"; instead, ask questions, such as "what do you think X implies about your argument?"

  • Be specific and concrete: go down levels of abstraction

  • Be clear: make sure the semantics are clear to all by defining terms

  • Be probabilistic: use probabilistic thinking and probabilistic language to help gauge the extent of disagreement and to be as specific and concrete as possible (see the sketch after this list)

    • For instance, instead of saying that X is absolutely true, say that you think there's an 80% chance it's true

    • Consider adding what evidence and reasoning led you to this belief, so that both you and the other participants can examine that chain of thought

  • When people whose perspective you respect fail to update their beliefs in response to your clear chain of reasoning and evidence, update somewhat toward their position, since their resistance is evidence that your position is not as convincing as you think

  • Confirm your sources: look up information when it's possible to do so (Google is your friend)

  • Charity mode: strive to be more charitable to others and their expertise than seems intuitive to you

  • Use the reversal test to check for status quo bias

    • If you are discussing whether to change some specific numeric parameter – say, increasing the money donated to charity X by 50% – state the reverse of your position, for example decreasing the money donated to charity X by 33%, and see how that affects your perspective (the arithmetic behind these numbers is worked out after this list)

  • Use CFAR’s double crux technique

    • In this technique, two parties who hold different positions on an issue each write down the fundamental reason for their position (the crux of their position). This reason has to be the key one: if it were proven incorrect, the person would change their perspective. Then look for experiments that can test each crux. Repeat as needed; if a person identifies more than one reason as crucial, you can go through each in turn. For instance, two people disagreeing about donating to charity X might each discover that their crux is a belief about X's cost-effectiveness, which they can then investigate together. More details are here.
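
To make “be probabilistic” concrete, here is a minimal sketch in Python (my own illustration, not something from the article or the linked research) of how stating a belief as a probability lets you update it cleanly when evidence comes in. The 80% prior mirrors the example above; the likelihood ratio of 3 is a hypothetical stand-in for a moderately strong piece of evidence.

    def bayes_update(prior: float, likelihood_ratio: float) -> float:
        """Posterior probability after seeing evidence with the given
        likelihood ratio P(evidence | claim) / P(evidence | not claim)."""
        prior_odds = prior / (1.0 - prior)             # 0.80 -> odds of 4:1
        posterior_odds = prior_odds * likelihood_ratio
        return posterior_odds / (1.0 + posterior_odds)

    print(bayes_update(0.80, 3.0))   # ~0.92: supporting evidence raises 80% to ~92%
    print(bayes_update(0.80, 1/3))   # ~0.57: opposing evidence lowers 80% to ~57%

Framing beliefs this way turns “I'm sure” versus “no way” into a concrete question of how far apart two probabilities are and what evidence would move them.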
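
As for the reversal test, here is the arithmetic behind “increase by 50%, reverse by decreasing 33%” (a worked check using a hypothetical $1,000 donation): a reversal must undo the change multiplicatively, not subtract the same percentage.

    1000 × (1 + 50%) = 1500
    1500 × (1 − 1/3) = 1000

A 50% increase multiplies by 1.5, so its reverse divides by 1.5 – a decrease of 1/3, or roughly 33%.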

Of course, not all of these techniques are necessary for high-quality collaborative truth-seeking. Some are easier than others, and different techniques apply better to different kinds of truth-seeking discussions. You can apply some of these techniques during debates as well, such as double crux and the reversal test. Try some out and see how they work for you.

Conclusion

Engaging in collaborative truth-seeking goes against our natural impulses to win in a debate, and is thus more cognitively costly. It also tends to take more time and effort than just debating, and because debate mode is so intuitive, it is easy to slip back into it even while using collaborative truth-seeking.

Moreover, collaborative truth-seeking need not replace debate at all times. This non-intuitive mode of engagement is best chosen for issues that touch on deeply held beliefs and/or risk emotionally triggering the people involved. Because of my own background, for example, I would prefer to discuss poverty in collaborative truth-seeking mode rather than debate mode. On such issues, collaborative truth-seeking can provide a shortcut to resolution, compared to protracted, tiring, and emotionally challenging debates. On the other hand, using collaborative truth-seeking to resolve differing opinions on all issues risks creating a community oriented excessively toward sensitivity to the perspectives of others, in which important issues are not discussed candidly. After all, research shows the importance of disagreement for making wise decisions and figuring out the truth. Of course, collaborative truth-seeking is well suited to expressing disagreements in a sensitive way, so if used appropriately, it might permit even people with triggers around certain topics to express their opinions.

Taking these caveats into consideration, collaborative truth-seeking is a great tool for discovering the truth and updating our beliefs, as it can get past the high emotional barriers to changing our perspectives that evolution has put up. Since we all share the same goal, EA venues are natural places to try out collaborative truth-seeking on one of the most important questions of all – how we can do good most effectively.

Comments


Great article!

One thing I noticed yesterday is that EA discussions are often well suited for leaving oneself a line of retreat. If you know what the horse gait called the pace looks like, then it’s almost the same, only conceptually: All your left feet are your personal qualities and motivations, and all your right feet are your epistemic beliefs.

(When I use words like “admit,” I mean it from the perspective of the actor in the following examples. I don’t mean to imply that it’s right for them to update, just that it’s rational for them to update given the information they have at the time. See also this question and answer for the distinction.)

A rationalist who loves meat too much can either brutalize their worldview to make themselves believe that the probability that animals can suffer is negligible (a standstill) or can admit that they act morally inconsistently but that it would take them much more willpower than others to change it (and maybe they can cut out chicken and eggs, and offset the rest). They’ve put their right feet forward. Now they are less afraid of talking with vegans about veganism, and so get introduced to some great fortified veggie meats, so that, a few months later, they can also put their left feet forward more easily.

Or an animal rights activist who is very invested in the movement learns about AI risks and runs out of arguments for why AR values spreading should be more important than friendly AI research. They can either ridicule AI researchers for their weirdness (the standstill), or admit that the cause area is the more cost-effective one but that they’re personally so specifically skilled and highly motivated for AR that they have a much greater fit for it, so that someone with a comparative advantage for AI research can take that position. They’ve put their right feet forward. Being less afraid of talking with AI researchers, they can now also personally warm up to the cause (putting the left legs forward) and influence the researchers to care more about nonhuman animals, thus increasing the chances that a future superintelligent AI will too.

I like the addition of this more complex way of thinking about the line of retreat. I didn't go into this in the article, but indeed, leaving a line of retreat permits a series of iterated collaborative truth-seeking conversations, so as to update beliefs incrementally.

Here's another potentially helpful frame you might want to add to the list of collaborative truth-seeking techniques:

  • 'Lose' to Win: Aim to change your own mind, not the other's (within the constraints of rationality/logic, of course). You gain more from the process the more you manage to update your beliefs, with the thankworthy support of your truth-seeking collaborator, because, assuming hygienic epistemology in the process, your changed beliefs will be based on more – valuable – data/ideas. (As a bonus, this will make your collaborator happy and improve the bond between the two of you.) You can put this into practice through the technique of steelmanning.

(Steelmanning is already on the technique list:

Be open: orient toward improving the other person’s points to argue against their strongest form

)

I like the Lose to Win notion – it's captured a bit in the article's phrase "adopt the best evidence and arguments, no matter if they are yours or those of others," but "Lose to Win" does so better. Thanks!

(“Um, actually …” ;) – tiny math nitpick: The reversal (test) of increasing by 50 % is decreasing by 33⅓ %:
1000 × (1 + 50 %) = 1000 × 150 % = 1500
1500 × (1 – 33⅓ %) = 1500 × 66⅔ % = 1000
)

You're right, thanks for catching that! Will edit.

A useful technique for when consensus is worth a little mental fatigue. Interested in trying this with friends of differing political modes of thought.

Would be interested in hearing about the outcomes of your discussions.

Here are additional techniques that can help you stay in collaborative truth-seeking mode after establishing trust:

Might be worth adding scout mindset to this list :)

Good call!

A suggestion° for after reading the article: ask yourself:
Where and how can you apply this to your life? With which people, in which situations, on what topics does communication tend to get duel-y?

You can make trigger-action plans for those situations:
[If I'm talking with my mum and the concepts science and/or esoterism come up] → [then become very mindful and careful re her and my emotions, tone etc. – strive for collaborative truth-seeking instead of arguing.]

° Very much in the spirit of Brienne's cognitive trigger-action plan

[If something feels key to advancing your art as a rationalist] → [stop, drop, and trigger-action plan.]

Thanks for posting this. I found this to be a helpful guide to collaborative truth-seeking and I especially appreciate the links for further information.

Glad it's helpful!

Yeah, great article Gleb, very useful topic! Thanks!!

As a counterexample to "Engaging in collaborative truth-seeking goes against our natural impulses to win in a debate, and is thus more cognitively costly": collaborative truth-seeking as described here is more intuitive and natural to me personally than debating.

Cool, glad that it's personally more intuitive and natural to you! You're one of the luckier ones :-)
