Collaborative Truth-Seeking

by Gleb_T, 4th May 2016, 16 comments


Summary: In the EA movement, we frequently use debates to resolve differing opinions about the truth. However, debates are not always the best way to figure out the truth. In some situations, the technique of collaborative truth-seeking may work better, as this article shows.

 

Acknowledgments: Thanks to Michael Dickens, Denis Drescher, Claire Zabel, Boris Yakubchik, Pete Michaud, Szun S. Tay, Alfredo Parra, Michael Estes, Alex Weissenfels, Peter Livingstone, Jacob Bryan, Roy Wallace, Aaron Thoma, and other readers who prefer to remain anonymous for providing feedback on this post. The author takes full responsibility for all opinions expressed here and any mistakes or oversights.

 

The Problem with Debates

 

All of us in the Effective Altruism (EA) movement aim to accomplish the broad goal of doing the most good per dollar. However, we often disagree on the best methods for doing the most good.

 

When we focus on these disagreements, it can be easy to forget the goals we share. This focus raises the danger of what Freud called the narcissism of small differences – splintering and infighting that create out-groups. Many social movements have splintered over such minor disagreements, and this is a danger to watch out for within the EA movement.

 

At the same time, it’s important to bring our differences of opinion to light and to resolve them effectively. Within the EA movement, the usual method of hashing out such disagreements in order to discover the truth has been debate, in person or online.

 

Yet more often than not, people on opposing sides of a debate end up seeking to persuade rather than prioritizing truth discovery - this has already been noted on the EA Forum. Indeed, research suggests that debates have a specific evolutionary function: not discovering the truth, but ensuring that our perspective prevails within a tribal social context. No wonder debates are often compared to wars.

 

We may hope that members of the EA movement, who after all share the same goal of doing the most good, would strive to discover the truth during debates. Yet given that we are not always fully rational and strategic in our social engagements, it is easy to slip into debate mode and orient toward winning instead of uncovering the truth. Heck, I know that I sometimes forget in the midst of a heated debate that I may be the one who is wrong – I’d be surprised if this didn’t happen to you. So while we should certainly continue to engage in debates, I propose we should also use additional strategies – less natural and intuitive ones. These strategies could put us in a better mindset for updating our beliefs and improving our perspective on the truth. One such solution is a mode of engagement called collaborative truth-seeking.

 

Collaborative Truth-Seeking

 

Collaborative truth-seeking is a more intentional approach drawn from the practice of rationality, in which two or more people with different opinions engage in a process focused on finding the truth. Collaborative truth-seeking is a modality best used among people with shared goals and a shared sense of trust.

 

Some important features of collaborative truth-seeking, which are often not present in debates, are: a focus on the desire to change one’s own mind toward the truth; a curious attitude; sensitivity to others’ emotions; striving to avoid arousing emotions that will hinder belief updating and truth discovery; and trust that all other participants are doing the same. These contribute to increased social sensitivity, which, together with other attributes, correlates with higher group performance on a variety of activities, such as figuring out the truth and making decisions.

 

The process of collaborative truth-seeking starts with establishing trust, which will help increase social sensitivity, lower barriers to updating beliefs, increase willingness to be vulnerable, and calm emotional arousal. The following techniques are helpful for establishing trust in collaborative truth-seeking:

  • Share weaknesses and uncertainties in your own position

  • Share your biases about your position

  • Share your social context and background as relevant to the discussion

    • For instance, I grew up poor after my family immigrated to the US when I was 10, and this naturally influences me to care about poverty more than some other issues

  • Vocalize curiosity and the desire to learn

  • Ask the other person to call you out if they think you're getting emotional or engaging in emotive debate instead of collaborative truth-seeking, and consider using a safe word

 

Here are additional techniques that can help you stay in collaborative truth-seeking mode after establishing trust:

  • Self-signal: signal to yourself that you want to engage in collaborative truth-seeking, instead of debating

  • Empathize: try to empathize with the perspective you do not hold by considering where that viewpoint came from, why the other person thinks what they do, and recognizing that they feel their viewpoint is correct

  • Keep calm: be prepared to calm your own emotions, and those of the people you engage with, when a desire for debate arises

    • watch out for defensiveness and aggressiveness in particular

  • Go slow: take the time to listen fully and think fully

  • Consider pausing: if you can’t deal with complex thoughts and emotions in the moment, give yourself an escape route by pausing and picking up the discussion later

    • say “I will take some time to think about this,” and/or write things down

  • Echo: paraphrase the other person’s position to indicate and check whether you’ve fully understood their thoughts

  • Support your collaborators: orient toward improving the other person’s points to argue against their strongest form

  • Stay the course: be passionate about wanting to update your beliefs, maintain the most truthful perspective, and adopt the best evidence and arguments, no matter if they are yours or those of others

  • Be diplomatic: when you think the other person is wrong, strive to avoid saying "you're wrong because of X" but instead to use questions, such as "what do you think X implies about your argument?"

  • Be specific and concrete: go down levels of abstraction

  • Be clear: make sure the semantics are clear to all by defining terms

  • Be probabilistic: use probabilistic thinking and probabilistic language, to help get at the extent of disagreement and be as specific and concrete as possible

    • For instance, avoid saying that X is absolutely true, but say that you think there's an 80% chance it's the true position

    • Consider adding what evidence and reasoning led you to believe so, for both you and the other participants to examine this chain of thought

  • When people whose perspective you respect fail to update their beliefs in response to your clear chain of reasoning and evidence, update somewhat toward their position, since that is evidence that your position is not as convincing as you think

  • Confirm your sources: look up information when it's possible to do so (Google is your friend)

  • Charity mode: strive to be more charitable to others and their expertise than seems intuitive to you

  • Use the reversal test to check for status quo bias

    • If you are discussing whether to change some specific numeric parameter - say, increasing the money donated to charity X by 50% - state the reverse of your position, for example decreasing the money donated to charity X by 33%, and see how that impacts your perspective

  • Use CFAR’s double crux technique

    • In this technique, two parties who hold different positions each write down the fundamental reason for their position (the crux of their position). This reason has to be the key one, such that if it were proven incorrect, they would change their perspective. Then, look for experiments that can test each crux. Repeat as needed. If a person identifies more than one reason as crucial, you can go through each in turn. More details are here.
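
A side note on the arithmetic behind the reversal test: a percentage increase is not undone by the same percentage decrease, which is why a 50% increase pairs with a 33% decrease rather than 50%. A minimal Python sketch (the function name is my own) makes the relationship explicit:

```python
def reversal_of_increase(pct_increase: float) -> float:
    """Return the percentage decrease that exactly undoes a given
    percentage increase -- the 'reversed' framing to state when
    running a reversal test on a numeric parameter."""
    factor = 1 + pct_increase / 100   # e.g. +50% -> multiply by 1.5
    return (1 - 1 / factor) * 100     # 1/1.5 ~ 0.667 -> ~33.3% decrease

print(round(reversal_of_increase(50), 1))  # 33.3
```

So to reverse a proposed 50% increase in donations to charity X, the symmetric position to consider is roughly a one-third decrease, not a 50% one.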

 

Of course, not all of these techniques are necessary for high-quality collaborative truth-seeking. Some are easier than others, and different techniques apply better to different kinds of truth-seeking discussions. You can apply some of these techniques during debates as well, such as double crux and the reversal test. Try some out and see how they work for you.

 

Conclusion

 

Engaging in collaborative truth-seeking goes against our natural impulses to win in a debate, and is thus more cognitively costly. It also tends to take more time and effort than just debating. It is also easy to slip into debate mode even when using collaborative truth-seeking, because of the intuitive nature of debate mode.

 

Moreover, collaborative truth-seeking need not replace debates at all times. This non-intuitive mode of engagement can be chosen when discussing issues that relate to deeply-held beliefs and/or ones that risk emotional triggering for the people involved. Because of my own background, I would prefer to discuss poverty in collaborative truth-seeking mode rather than debate mode, for example. On such issues, collaborative truth-seeking can provide a shortcut to resolution, in comparison to protracted, tiring, and emotionally challenging debates. On the other hand, using collaborative truth-seeking to resolve differing opinions on all issues holds the danger of creating a community oriented excessively toward sensitivity to the perspectives of others, which might result in important issues not being discussed candidly. After all, research shows the importance of having disagreement in order to make wise decisions and to figure out the truth. Of course, collaborative truth-seeking is well suited to expressing disagreements in a sensitive way, so if used appropriately, it might permit even people with triggers around certain topics to express their opinions.

 

Taking these caveats into consideration, collaborative truth-seeking is a great tool to use to discover the truth and to update our beliefs, as it can get past the high emotional barriers to altering our perspectives that have been put up by evolution. Since we all share the same goal, EA venues are natural places to try out collaborative truth-seeking to answer one of the most important questions of all – how we can do good most effectively.

Comments

Great article!

One thing I noticed yesterday is that EA discussions are often well suited for leaving oneself a line of retreat. If you know what the horse gait called the pace looks like, then it’s almost the same, only conceptually: All your left feet are your personal qualities and motivations, and all your right feet are your epistemic beliefs.

(When I use words like “admit,” I mean it from the perspective of the actor in the following examples. I don’t mean to imply that it’s right for them to update, just that it’s rational for them to update given the information they have at the time. See also this question and answer for the distinction.)

A rationalist who loves meat too much can either brutalize their worldview to make themselves believe that the probability that animals can suffer is negligible (a standstill) or can admit that they act morally inconsistently but that it would take them much more willpower than others to change it (and maybe they can cut out chicken and eggs, and offset the rest). They’ve put their right feet forward. Now they are less afraid of talking with vegans about veganism, and so get introduced to some great fortified veggie meats, so that, a few months later, they can also put their left feet forward more easily.

Or an animal rights activist who is very invested in the movement learns about AI risks and runs out of arguments for why AR values spreading should be more important than friendly AI research. They can either ridicule AI researchers for their weirdness (the standstill), or admit that the other cause area is the more cost-effective one but that they’re personally so specifically skilled and highly motivated for AR that they have a much greater fit for AR, so that someone with a comparative advantage for AI research can take that position. They’ve put their right feet forward. Being less afraid of talking with AI researchers, they can now also personally warm up to the cause (putting the left legs forward) and influence the researchers to care more about nonhuman animals, and thus increase the chances that a future superintelligent AI will too.

I like the addition of the more complex way of thinking about the line of retreat. I didn't go into this in the article, but indeed, leaving a line of retreat permits a series of iterating collaborative truth-seeking conversations, so as to update beliefs incrementally.

Here's another potentially helpful frame you might want to add to the list of collaborative truth-seeking techniques:

  • 'Lose' to Win: Aim to change your own mind, not the other's (within the constraints of rationality/ logic of course). You gain more from the process the more you manage to update your beliefs, with the thankworthy support from your truth-seeking collaborator. Because, assuming hygienic epistemology in the process, your changed beliefs will be based on more – valuable – data/ ideas. (As a bonus, this will make your collaborator happy and improve the bond between the two of you.) You can put this into practice through the technique of steelmanning.

(Steelmanning is already on the technique list:

Be open: orient toward improving the other person’s points to argue against their strongest form

)

I like the Lose to Win notion - it's captured a bit in the phrase in the article "adopt the best evidence and arguments, no matter if they are yours or those of others," but "Lose to Win" does so better. Thanks!

(“Um, actually …” ;) – tiny math nitpick: The reversal (test) of increasing by 50 % is decreasing by 33 %:
1000 × (1 + 50 %) = 1000 × 150 % = 1500
1500 × (1 – 33 %) = 1500 × 67 % = 1000
)

You're right, thanks for catching that! Will edit.

A useful technique for when consensus is worth a little mental fatigue. Interested in trying this with friends of differing political modes of thought.

Would be interested in hearing about the outcomes of your discussions.

Here are additional techniques that can help you stay in collaborative truth-seeking mode after establishing trust:

Might be worth adding scout mindset to this list :)

Good call!

A suggestion° for after reading the article: ask yourself:
Where and how can you apply this to your life? With which people, in which situations, on what topics does communication tend to get duel-y?

You can make trigger-action plans for those situations:
[If I'm talking with my mum and the concepts science and/or esoterism come up] → [then become very mindful and careful re her and my emotions, tone etc. – strive for collaborative truth-seeking instead of arguing.]

° Very much in the spirit of Brienne's cognitive trigger-action plan

[If something feels key to advancing your art as a rationalist] → [stop, drop, and trigger-action plan.]

Thanks for posting this. I found this to be a helpful guide to collaborative truth-seeking and I especially appreciate the links for further information.

Glad it's helpful!

Yeah, great article Gleb, very useful topic! Thanks!!

As a counterexample to "Engaging in collaborative truth-seeking goes against our natural impulses to win in a debate, and is thus more cognitively costly": collaborative truth-seeking as described here is more intuitive and natural to me personally than debating.

Cool, glad that it's personally more intuitive and natural to you! You're one of the luckier ones :-)