My Experience with My Debate Policy

How guaranteeing debates saves time.

Summary: I explain the context for why I have a public debate policy – both my philosophical reasons and my experiences. I talk about fallibilism, rationality and error correction. I discuss how, counter-intuitively, my debate policy saves me time and energy. I suggest that others create debate policies. This follows my previous article, Fallibilism, Bias, and the Rule of Law, which argued in favor of using written rationality policies (which have similar merits to the rule of law).

Fallibilism

I’m a strong fallibilist. I think it’s common to make mistakes without realizing you’re mistaken. And there’s no way to get a guarantee that you haven’t made a mistake about something. We can never correctly have 100% certainty about anything (including fallibilism).

I’m aware of certain symmetries in critical discussions or debates. If I disagree with John, and John disagrees with me, that’s symmetric. Nothing so far indicates that I’m right. If I think John is being an idiot about this issue, and John thinks I’m being an idiot about this issue, that’s again symmetric. Neither of us should conclude that we’re right and the other guy is an idiot; me concluding that I’m the idiot would make equal sense. Actually, we should both conclude that the situation so far is inconclusive.

Asymmetry

To conclude that I’m right and John is wrong about some ideas, I need to be able to point out some kind of objective asymmetry (and argue that it’s good). What can I say that John can’t mirror?

For example, I might say “Capitalism is great because it allows true freedom.” But John could reply “Socialism is great because it allows true freedom.” That’s symmetric. Both capitalists and socialists can make an unexplained, unargued assertion about how great their system is regarding freedom. So far, based on what we’ve said, no difference between capitalism and socialism has been established. If I still think capitalism is better for freedom, it might be due to background knowledge I haven’t yet communicated.

Trying again, I might continue, “Capitalism gives CEOs the freedom to do whatever they want.” A symmetric reply from John would be “Socialism gives CEOs the freedom to do whatever they want.” But that’s false. It doesn’t. So do we have an objective asymmetry here? Not yet. Capitalism doesn’t actually let CEOs do whatever they want either – e.g. it doesn’t allow hiring hitmen to assassinate rival CEOs. This time, although John couldn’t mirror my statement to advocate for his side, I still didn’t establish an objective asymmetry because my statement was false.

Trying again, I might say “Capitalism gives company leadership the freedom to price their products however they want.” John can’t correctly mirror that by claiming socialism gives company leadership full pricing freedom because it doesn’t. Socialism involves central planners (or the community as a whole, or some other variant) having some control over prices. Now we have an actual difference between capitalism and socialism. Next, we could consider whether this difference is good or bad. To reach a conclusion favoring something over something else, you have to establish a difference between them and discuss whether it’s a positive difference. (You should also consider other issues like whether there are some more important differences.)

So I try to be aware of symmetry and avoid concluding that I’m right when the claims made are symmetrical. Breaking symmetries is fairly hard and requires thoughtful arguments to do well. I also try to consider if a conversation partner could make a symmetrical claim that would be reasonable (or about as reasonable as my own claim or better), in which case I’ll take that into account even if he doesn’t say it.

I believe that I may be wrong, and my conversation partner may be right, and by an effort we may get closer to the truth (that’s a Karl Popper paraphrase). This fallibilist attitude has led me to be interested in discussion and debate, and to be curious about other people’s reasoning when they believe I’m mistaken.

If someone thinks I’m wrong, I’d like to rationally resolve that disagreement. If they don’t want to, that’s OK, and that’s an asymmetry: I believe X and I’m willing to argue my case; they believe X is wrong but they’re not willing to argue their case. Stopping with that asymmetry seems fine to me. It’s not ideal but the problem isn’t my fault.

Written Criticism

When people claim to know I’m wrong about X, I often ask if they know of a criticism of X that I could read. Has anyone ever written down why X is wrong? If they don’t think a refutation of X has ever been written down, then it’s more disappointing if they are unwilling to share their refutation, since they’re claiming it’s a novel contribution to human knowledge that would help me. But it’s still their choice. And actually, if no one has ever written down their reasoning, and they also don’t want to share it, then I’m doubtful that it’s very good, so I’m not very disappointed unless I have some additional reason to think they have a great, unshared point. (If they’ve been working on it for months, and are in the final stages of research or are already writing it up, but they aren’t ready to publish yet, that would be fine and I wouldn’t suspect their point is bad. But that also means I’ll get to read their argument in the reasonably near future.)

In my experience, reasonable people get the point right away: if there’s nowhere I can go to read why they’re right, and they don’t tell me, then I shouldn’t change my mind, and their side of the debate isn’t persuasive. Meanwhile, if I’m sharing my ideas, then my ideas may be publicly persuasive, unlike the counter-arguments that aren’t public. So they’re conceding the public debate to me, and neutral people looking at the current debate should agree with me over them (since my side has arguments and theirs doesn’t) unless they have some other, better knowledge. There’s a clear asymmetry when I make public arguments, but no one makes public counter-arguments. It’s a case where even a strong fallibilist, who strives for objectivity, can easily take sides (tentatively – if anyone shares new arguments later, then it’s appropriate to reconsider).

Intuition

People sometimes intuitively think I’m wrong but don’t know how to express their case in words and make good arguments. In that case, there’s an asymmetry: I have arguments in explicit language and they don’t. If they’re right, it’s hard for me to learn from them and change my mind because they can’t communicate their knowledge to me in words.

In this scenario, many people would be dismissive and say e.g., “Well, let me know if you ever figure out what your point is in words and I’ll consider it then. Otherwise, I guess this debate is done.” I, instead, have developed some techniques for including and discussing inexplicit intuitions in explicit debates. So, if we wanted to, we could actually continue the discussion using those techniques and still try to reach mutual agreement. If someone has intuitive knowledge which they can’t express in words, I don’t think that’s adequate to conclude that their knowledge is incorrect or inaccessible. We don’t have to give up that easily. Rather than dismissing intuitive ideas, we can use explicit processes to understand them better.

A lot of people interested in rationality are pretty dismissive of intuition, and I think that ends up bullying people who aren’t as good at rhetoric, debate and explicit communication. Most people don’t want to engage with the best explicit debaters because they’ll have a bad time. Instead of being helped to express their ideas, they’ll be dismissed when they struggle. I think that’s a tragedy. Besides being mean, it leads the majority of people to avoid trying to share their knowledge in debates or critical discussions. Some of those people do have important knowledge which they’re being discouraged from sharing.

Participating in Low Quality Discussions

In the past, if I thought someone was making low quality arguments, I tended to give them second, third, fourth, fifth, sixth and further chances. Why? Because I’m super conscientious and didn’t want to dismiss them while they were still claiming that I’m wrong and they know it. I didn’t want to risk being blind or biased and then, due to my bias, refusing to listen to good arguments. And even if their first five arguments are bad, their sixth might be good. As long as they make new arguments, it’s problematic to dismiss them based on a trait of the speaker (his past errors). How can you ignore a new idea just because its author has had incorrect ideas in the past?

So some of my debates went too long and ended when the other person decided that he didn’t want to talk anymore. That gave me a clear asymmetry to legitimize stopping. It let me believe (correctly) that I was open to further debate and they weren’t. It let me believe (correctly) that they were taking a risk of avoidably staying wrong, while I wasn’t choosing that.

But it took too much effort. That was less of a problem at first because I was less experienced at critical discussion and debate. So getting more experience in discussions had value to me, even if the quality of the other person’s arguments was poor. However, over time, the mistakes some people made became more and more repetitive instead of interesting, and it became rarer for them to make an argument that was new to me (even a bad one). So I started getting less value for my time spent.

What could I do to reduce the time I spent on low quality discussions for the sake of being open to debate? I didn’t want to simply refuse to talk to anyone I formed a negative judgment of. I didn’t want to think someone seemed dumb, boring, low quality or not worth my time and then ignore them based on that impression. That kind of policy is widespread and I think it’s one of the world’s larger problems. People commonly form incorrect negative judgments of critics and then are unreasonably dismissive, and it prevents them from receiving valuable corrections. Lots of public intellectuals and thought leaders stay wrong about important issues, and spread errors publicly in books, speeches and articles, while refusing to consider counter-arguments. I really, really, really don’t want to be like that, even a tiny bit.

So what could I do to end some discussions earlier which is compatible with my strong fallibilism? I don’t want to just trust my judgment about who doesn’t have a good point, which arguments aren’t worth considering, etc. I want to plan around the possibility that I’m sometimes biased, irrational, wrong, etc.

My Debate Policy Saves Time and Energy

I created a debate policy. It offers written, conditional guarantees, to the general public, about my openness to debate. If anyone thinks I’m ignoring them when I shouldn’t be, or opting out of a debate I should participate in, they can invoke my debate policy. If I were to violate my policy, I would expect it to harm my reputation.

You might expect that having an open, publicly available debate policy would result in me spending more time debating. It does not. There are two main reasons for this.

First, the debate policy puts certain requirements on the debate. People have to meet some conditions. I tried to design the conditions to be limited, minimalist, and achievable by anyone who has a valuable correction for me. The conditions include agreeing to use my preferred debate structure, which imposes stopping conditions on both of us (certain actions must be taken before leaving the debate).

I’ve found that the people I thought were unserious, who I believed made low quality arguments, do not want to agree to any kind of formal debate rules. They self-select out of that. They opt out. While abuse of my policy is possible, it hasn’t happened, and it has some anti-abuse mechanisms built in.

Second, my debate policy enables me to judge discussions as low value and then stop responding with no explanation or with a link to the debate policy. I’m much more comfortable ending discussions than before because, if my judgment is mistaken, there’s a backup plan.

Suppose I’m 99% accurate when I judge that someone is making bad arguments. In the past, I thought, “I want to learn from the 1% of cases where I’m wrong, so I’d better keep discussing.” Now I think, “If I’m wrong, they can use my debate policy, so there’s still a reasonable way for me to be corrected. So it’s OK to end this discussion.”
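
To make that reasoning concrete, here’s a toy calculation in Python with invented probabilities (the 20% failsafe-failure figure is purely an assumption for illustration, not a figure from my experience): a failsafe doesn’t need to be perfect to sharply cut the chance that a wrong dismissal goes uncorrected.

```python
# Toy calculation with invented probabilities (illustration only):
# how a failsafe changes the chance that a wrong dismissal goes uncorrected.

p_wrong_judgment = 0.01    # I misjudge a good critic as making bad arguments
p_failsafe_misses = 0.20   # hypothetical: the critic never invokes the policy,
                           # or it fails for some other reason

# Without a failsafe, every wrong dismissal stays uncorrected.
uncorrected_without_policy = p_wrong_judgment

# With the debate policy as a backup, a wrong dismissal stays uncorrected only
# when the failsafe also fails.
uncorrected_with_policy = p_wrong_judgment * p_failsafe_misses

print(uncorrected_without_policy)  # 0.01
print(uncorrected_with_policy)     # 0.002
```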

Having a failsafe mechanism lets me be far more aggressive about opting out of discussions. Before I had a failsafe, I was super conservative. But now I’ve moved some of my fallibilist conservatism into the failsafe, and gotten it out of other conversations that don’t use my debate policy.

What if My Debate Policy Has an Error?

What if I incorrectly opt out of a discussion and also my debate policy has a flaw? As a strong fallibilist, this is the kind of issue I take seriously. I want to have a plan for that too. What is the backup plan for my debate policy?

I don’t think I need an infinite chain of failsafe mechanisms, but I think having several failsafes is better than one. Variety helps here because I don’t want every failsafe to contain the same flaw.

The primary backup plan for my debate policy is my Paths Forward policy, which I actually developed first. When I added the debate policy, instead of making the Paths Forward policy obsolete, I specified that it can still be used, but only if the debate policy is failing in some way. The Paths Forward policy is more of a broad, generic opportunity for error correction. It’s less safe against abuse or demands on my time, but it’s more conservative and safer against me making a mistake and not listening to good ideas. So it’s good as a secondary failsafe.

I also have a separate backup plan, which is discussing my debate policy, debate methodology or Paths Forward policy. I’m extra open to discussions on those specific topics, and I’m willing to attempt to discuss them using standard conversational practices in our culture rather than my own preferred discussion methods. I stick much closer to my old conservatism for just those topics. I don’t mind this because I consider them particularly interesting and important topics which I’d actually like to have more discussions about. If you have suggestions for good debate policies or methodologies, or criticism of mine, I especially want to hear it. Overall, being really conservative about avoiding ending discussions on just three topics, instead of all topics, is a big time saver. Plus, it’s uncommon that anyone wants to talk about discussion methodology. I try to bring that topic up sometimes but I find that most people decline to discuss it.

I’m also open to considering and trying out other discussion or debate methodologies if anyone claims to know of a good, rational methodology that is written down. Although my policy doesn’t guarantee trying any methodology regardless of what it says, this is something I’m especially interested in and flexible about. If someone won’t discuss using my methodology and also won’t suggest a methodology they claim is better, and I’m unimpressed by what they say, then I think it’s definitely reasonable and fallibilism-compatible to stop speaking with them.

Prioritizing

In the past, I didn’t prioritize much in discussions. Karl Popper and David Deutsch (fallibilists and advocates of evolutionary epistemology) taught me that we learn and make progress by correcting errors. They underemphasized that some errors matter more than others. The general impression they give is basically that you should correct all the errors that you can find. Finding errors is considered a high value activity and one of the two keys to progress (the second key is correcting the errors you find).

Common sense says to prioritize, but I didn’t find that convincing. It doesn’t adequately explain rational prioritization or why not to correct all the errors (the main reason given is that correcting all the errors would take too long, but I was willing to put a large amount of effort into epistemology in order to try to have better knowledge about it). Doesn’t every error matter?

One answer is that some people are making too many errors to deal with right now. They’re making an overwhelming number of errors and they need to focus their limited time, energy and attention on only some errors. They can’t do everything at once. This fits with the standard view pretty well, but no one said it to me. I figured it out eventually. I guess people didn’t want to admit to making so many errors. By pretending they were making a manageable number of errors, they fooled me into thinking every criticism would be useful to them. Once I started seeing most people as making an unmanageable number of errors, I started prioritizing a lot more for the criticism I shared. I also tried telling them what I think their situation is, but I got a lot of denials in response. Some people explicitly keep asking me to share every error I see with them, but if I do that they will (predictably to me) be overwhelmed and have a bad time. Oh well. I’m going to follow my best judgment about what to do (which is to prioritize criticism I share), and if they think I’m wrong and genuinely want more thorough criticism from me, they can invoke my debate policy. If they don’t invoke my debate policy, that signals to me that they aren’t really that serious about wanting to hear all the criticism I can come up with.

I also learned, with the help of Eliyahu Goldratt, a better perspective on prioritizing. This changed my mind more than practical considerations about people being busy or overwhelmed. In brief summary, optimizing non-bottlenecks doesn’t increase throughput. It’s important to identify constraints in a system – limiting factors – and then improve those. Most parts of the system have excess capacity already so improving them isn’t useful. In more philosophical terms, most errors won’t cause failure at current, active goals we care about. Instead, most errors mean we have a little less excess capacity at something but we can succeed without fixing the error. (A different way to frame it is that most “errors” aren’t really errors because they won’t cause failure at a goal we’re pursuing; less excess capacity isn’t actually an error.)

As a programmer, I already had experience with this. When you want to speed up slow software, you look for bottlenecks. The slowness isn’t divided equally or randomly around the code. Almost all the time is spent in a small number of code paths. So you speed those up and then either you’re done or there are some new slowest places that you work on next. If you speed up code without prioritizing, it’s very unlikely to be effective. Most code paths are already fast enough (they have excess capacity) so optimizing them isn’t useful.
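
Here’s what that looks like in practice. The sketch below is hypothetical (the function names and the deliberately slow loop are made up for illustration, not taken from any real project): profile first with Python’s built-in cProfile, see that nearly all the time lands in one code path, and optimize only that.

```python
# Hypothetical sketch: use profiling to find the bottleneck before optimizing.
import cProfile
import pstats


def parse_records(lines):
    # Cheap per-line work: already fast enough (excess capacity).
    return [line.split(",") for line in lines]


def score_record(fields):
    # Deliberately wasteful inner loop: this is the bottleneck.
    return sum(len(field) ** 2 for field in fields for _ in range(1000))


def run(lines):
    records = parse_records(lines)
    return [score_record(record) for record in records]


if __name__ == "__main__":
    data = ["a,bb,ccc"] * 500
    profiler = cProfile.Profile()
    profiler.enable()
    run(data)
    profiler.disable()
    # Sorting by cumulative time shows almost everything inside score_record,
    # so that's the only code path worth speeding up.
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)
```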

A Debate Policy Might Cost Time and Energy for You

For some people, having a debate policy would increase rather than decrease the time they spend debating. Why? Because their current policies differ from mine. Instead of being very strongly conservative about ending discussions, due to fallibilism, they currently trust their judgment more and end (or don’t begin) discussions more liberally. I think they would benefit from a debate policy because they’re probably mistaken about some of the times they don’t discuss or end a discussion. Also (as long as you aren’t famous) you might find that people don’t use your debate policy very often. When people do use your debate policy, you may find you like the debates because they meet conditions of your choice that you specified in the policy. Debates that meet your written conditions may be higher quality, on average, than the debates you have now. (By the way, you can also make a discussion policy for non-debate discussions.)

If you are famous and would get too many debate requests, or actually just want to have fewer debates for another reason, there are ways to put harder-to-meet conditions on your debate policy (that are still rational) instead of just giving up on having a policy. There are potential solutions (there’s more information in the resources I link at the end).

In general, it can be hard to tell the difference between positive and negative outliers. Both look badly wrong and don’t make sense from your current perspective. Outliers can be people who are very different than you or ideas which are very different than your ideas. Positive outliers are the most valuable people to talk with or ideas to engage with. Dismissing people (or ideas) who seem wrong or counter-intuitive, while having no debate policy, means you’re likely to be dismissive of some positive outliers. Even if you try not to do that, what exactly is going to stop it from happening, besides a debate policy? Trying to recognize when something might be a positive outlier is not a reliable method; that’s a form of trusting yourself instead of planning around you sometimes being biased or failing at rationality. (I explain and criticize trying and trusting more in Fallibilism, Bias, and the Rule of Law. That article advocates pre-commitment to written rationality policies, which it compares with the rule of law.)

Debate policies are an example of pre-committing to something, in writing. People tend to avoid this because “What if I commit to it and then, when the time comes, I don’t want to do it?” But that’s kind of the point: it’s good to do anyway even if you have biases, intuitions or irrationalities that are resisting it. You can’t trust your judgment when the time comes and you think “I want to get out of this debate for rational, good reasons, not due to bias.” It might actually be bias. In the case where your subconscious is resisting engaging in rational debate, despite your debate policy conditions being met, you should suspect you might be wrong and that your debate policy might be your savior. The odds are at least high enough to make an effort.

For example, if there’s a 10% chance you’re dealing with a positive outlier, that’s extremely worthwhile to engage with – it’s highly cost effective overall even though it has poor returns 9 out of 10 times. People’s intuitions often have trouble with the idea that the cost/benefit ratio is favorable overall even if the large majority of instances are negative. Also, people often have trouble intuitively accepting that talking with someone who is probably an idiot (90% chance) but might be a genius (10% chance) can actually be more cost-effective than talking with someone who is guaranteed to be well above average. Talking with one genius (large, positive outlier) can provide more benefit than talking with ten smart people.
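
To make the arithmetic concrete, here’s a toy expected-value comparison. The payoff numbers are invented assumptions purely for illustration; the point is just that a small chance of a very large win can outweigh a guaranteed moderate one.

```python
# Toy expected-value comparison with made-up payoffs (assumptions for
# illustration, not figures from the article).

P_GENIUS = 0.10        # chance the "probable idiot" is actually a genius
VALUE_GENIUS = 100.0   # payoff of one conversation with a genius (outlier)
VALUE_IDIOT = -1.0     # small cost of a wasted conversation
VALUE_SMART = 5.0      # payoff of talking with a reliably above-average person

ev_outlier_hunt = P_GENIUS * VALUE_GENIUS + (1 - P_GENIUS) * VALUE_IDIOT
ev_safe_bet = VALUE_SMART

print(f"possible genius:  {ev_outlier_hunt:.2f}")    # 9.10
print(f"guaranteed smart: {ev_safe_bet:.2f}")        # 5.00

# 9 out of 10 outlier-hunting conversations lose a little, but the expected
# value per conversation is still higher under these assumptions.
```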

The value of positive outliers is highest for experts who already know all the standard stuff and have a hard time making additional progress. When you’re a beginner who doesn’t know much, then finding some above-average knowledge is great. Also, bad ideas can confuse and mislead beginners, but they’re much less dangerous for experts (who already understand why they’re wrong).

Hunting outliers in conversations (or books) is similar to the venture capitalist investment strategy of looking for outlier companies (“unicorns”) and seeking really big wins. Many of those investors expect and plan to lose money on the majority of their investments.

If you do have a discussion that part of you doesn’t want to have, due to a pre-commitment or because you think it’s rational to discuss, then you need to try extra hard to be reasonable, unbiased, objective, fair, civil, etc. You need to put more work than normal into thinking about things from the other person’s point of view, being curious, avoiding tribalism, etc. It helps to write down what to do in advance, e.g. as a flowchart. When your intuitions don’t like something that you’re doing, you have to either resolve that problem or else, at least, understand that you’re at much higher risk than normal of being unreasonable, being a jerk, being dismissive, being biased, etc.

Also, if part of you thinks a conversation is valuable but part of you doesn’t want to have it, then you have conflicting ideas. There’s a problem to solve there.

Conclusion

I prioritize considering, “If I’m wrong, and you’re right, and you’re willing to share your knowledge, then how will I find out and improve?” I want there to be good answers to that. They need to be robust and resilient. They need to work even if I think you have dumb ideas, you seem unreasonable to me, you seem obviously wrong to me, we get along poorly, you have poor charisma, you are very unpopular, you rub people (including me) the wrong way, we have poor conversational rapport, we’re not friends, you are a member of a rival political tribe or philosophical school of thought, my intuition hates you, my friends hate you, I’m biased against you, you trigger my irrationalities, you seem really weird, and/or your ideas are extremely counter-intuitive to me.

Debate policies and methodologies are one of the tools that can help you be a good fallibilist who is more seriously open to error correction than most people. Consider creating a debate policy for yourself and also encouraging public intellectuals to make debate policies.

Further Reading and Watching