
Summary: I explain the context for why I have a public debate policy – both philosophical reasons and experiences. I talk about fallibilism, rationality and error correction. I discuss how, counter-intuitively, my debate policy saves time and energy for me. I suggest that others create debate policies. This follows my previous article, Fallibilism, Bias, and the Rule of Law, which argued in favor of using written rationality policies (which has similar merits to the rule of law).

Fallibilism

I’m a strong fallibilist. I think it’s common to make mistakes without realizing you’re mistaken. And there’s no way to get a guarantee that you haven’t made a mistake about something. We can never correctly have 100% certainty about anything (including fallibilism).

I’m aware of certain symmetries in critical discussions or debates. If I disagree with John, and John disagrees with me, that’s symmetric. Nothing so far indicates that I’m right. If I think John is being an idiot about this issue, and John thinks I’m being an idiot about this issue, that’s again symmetric. Neither of us should conclude that we’re right and the other guy is an idiot; me concluding that I’m the idiot would make equal sense; actually we should both conclude that the situation so far is inconclusive.

Asymmetry

To conclude that I’m right and John is wrong about some ideas, I need to be able to point out some kind of objective asymmetry (and argue that it’s good). What can I say that John can’t mirror?

For example, I might say “Capitalism is great because it allows true freedom.” But John could reply “Socialism is great because it allows true freedom.” That’s symmetric. Both capitalists and socialists can make an unexplained, unargued assertion about how great their system is regarding freedom. So far, based on what we’ve said, no difference between capitalism and socialism has been established. If I still think capitalism is better for freedom, it might be due to background knowledge I haven’t yet communicated.

Trying again, I might continue, “Capitalism gives CEOs the freedom to do whatever they want.” A symmetric reply from John would be “Socialism gives CEOs the freedom to do whatever they want.” But that’s false. It doesn’t. So do we have an objective asymmetry here? Not yet. Capitalism doesn’t actually let CEOs do whatever they want either – e.g. it doesn’t allow hiring hitmen to assassinate rival CEOs. This time, although John couldn’t mirror my statement to advocate for his side, I still didn’t establish an objective asymmetry because my statement was false.

Trying again, I might say “Capitalism gives company leadership the freedom to price their products however they want.” John can’t correctly mirror that by claiming socialism gives company leadership full pricing freedom because it doesn’t. Socialism involves central planners (or the community as a whole, or some other variant) having some control over prices. Now we have an actual difference between capitalism and socialism. Next, we could consider whether this difference is good or bad. To reach a conclusion favoring something over something else, you have to establish a difference between them and discuss whether it’s a positive difference. (You should also consider other issues like whether there are some more important differences.)

So I try to be aware of symmetry and avoid concluding that I’m right when the claims made are symmetrical. Breaking symmetries is fairly hard and requires thoughtful arguments to do well. I also try to consider if a conversation partner could make a symmetrical claim that would be reasonable (or about as reasonable as my own claim or better), in which case I’ll take that into account even if he doesn’t say it.

I believe that I may be wrong, and my conversation partner may be right, and by an effort we may get closer to the truth (that’s a Karl Popper paraphrase). This fallibilist attitude has led me to be interested in discussion and debate, and to be curious about other people’s reasoning when they believe I’m mistaken.

If someone thinks I’m wrong, I’d like to rationally resolve that disagreement. If they don’t want to, that’s OK, and that’s an asymmetry: I believe X and I’m willing to argue my case; they believe X is wrong but they’re not willing to argue their case. Stopping with that asymmetry seems fine to me. It’s not ideal but the problem isn’t my fault.

Written Criticism

When people claim to know I’m wrong about X, I often ask if they know of a criticism of X that I could read. Has anyone ever written down why X is wrong? If they don’t think a refutation of X has ever been written down, then it’s more disappointing if they are unwilling to share their refutation, since they’re claiming it’s a novel contribution to human knowledge that would help me. But it’s still their choice. And actually, if no one has ever written down their reasoning, and they also don’t want to share it, then I’m doubtful that it’s very good, so I’m not very disappointed unless I have some additional reason to think they have a great, unshared point. (If they’ve been working on it for months, and are in the final stages of research or are already writing it up, but they aren’t ready to publish yet, that would be fine and I wouldn’t suspect their point is bad. But that also means I’ll get to read their argument in the reasonably near future.)

In my experience, reasonable people get the point right away that if there’s nowhere I can go read why they’re right, and they don’t tell me, then I shouldn’t change my mind, and their side of the debate isn’t persuasive. Meanwhile, if I’m sharing my ideas, then my ideas may be publicly persuasive, unlike the counter-arguments that aren’t public. So they’re conceding the public debate to me, and neutral people looking at the current debate should agree with me over them (since my side has arguments and theirs doesn’t) unless they have some other, better knowledge. There’s a clear asymmetry when I make public arguments, but no one makes public counter-arguments. It’s a case where even a strong fallibilist, who strives for objectivity, can easily take sides (tentatively – if anyone shares new arguments later then it’s appropriate to reconsider).

Intuition

People sometimes intuitively think I’m wrong but don’t know how to express their case in words and make good arguments. In that case, there’s an asymmetry because I have arguments in a spoken language and they don’t. If they’re right, it’s hard for me to learn from them and change my mind because they can’t communicate their knowledge to me in words.

In this scenario, many people would be dismissive and say e.g., “Well, let me know if you ever figure out what your point is in words and I’ll consider it then. Otherwise, I guess this debate is done.” I, instead, have developed some techniques for including and discussing inexplicit intuitions in explicit debates. So, if we wanted to, we could actually continue the discussion using those techniques and still try to reach mutual agreement. If someone has intuitive knowledge which they can’t express in words, I don’t think that’s adequate to conclude that their knowledge is incorrect or inaccessible. We don’t have to give up that easily. Rather than dismissing intuitive ideas, we can use explicit processes to understand them better.

A lot of people interested in rationality are pretty dismissive of intuition, and I think that ends up bullying people who aren’t as good at rhetoric, debate and explicit communication. Most people don’t want to engage with the best explicit debaters because they’ll have a bad time. Instead of being helped to express their ideas, they’ll be dismissed when they struggle. I think that’s a tragedy. Besides being mean, it means the majority of people tend to avoid trying to share their knowledge in debates or critical discussions. Some of those people do have important knowledge which they’re being discouraged from sharing.

Participating in Low Quality Discussions

In the past, if I thought someone was making low quality arguments, I tended to give them second, third, fourth, fifth, sixth and further chances. Why? Because I’m super conscientious and didn’t want to dismiss them if they were still claiming that I’m wrong and they know it. I didn’t want to risk that I’m blind, biased or whatever and then, due to my bias, I refuse to listen to good arguments. And even if their first five arguments are bad, their sixth might be good. As long as they make new arguments, it’s problematic to dismiss those arguments based on a trait of the speaker (his past errors). How can you ignore a new idea just because its author has had incorrect ideas in the past?

So some of my debates went too long and ended when the other person decided that he didn’t want to talk anymore. That gave me a clear asymmetry to legitimize stopping. It let me believe (correctly) that I was open to further debate and they weren’t. It let me believe (correctly) that they were taking a risk of avoidably staying wrong, while I wasn’t choosing that.

But it took too much effort. That was less of a problem at first because I was less experienced at critical discussion and debate. So getting more experience in discussions had value to me, even if the quality of the other person’s arguments was poor. However, over time, the mistakes some people made became more and more repetitive instead of interesting, and it became rarer for them to make an argument that was new to me (even a bad one). So I started getting less value for my time spent.

What could I do to reduce the time I spent on low quality discussions for the sake of being open to debate? I didn’t want to simply refuse to talk to anyone I formed a negative judgment of. I didn’t want to think someone seemed dumb, boring, low quality or not worth my time and then ignore them based on that impression. That kind of policy is widespread and I think it’s one of the world’s larger problems. People commonly form incorrect negative judgments of critics and then are unreasonably dismissive, and it prevents them from receiving valuable corrections. Lots of public intellectuals and thought leaders stay wrong about important issues, and spread errors publicly in books, speeches and articles, while refusing to consider counter-arguments. I really, really, really don’t want to be like that, even a tiny bit.

So what could I do to end some discussions earlier which is compatible with my strong fallibilism? I don’t want to just trust my judgment about who doesn’t have a good point, which arguments aren’t worth considering, etc. I want to plan around the possibility that I’m sometimes biased, irrational, wrong, etc.

My Debate Policy Saves Time and Energy

I created a debate policy. It offers written, conditional guarantees, to the general public, about my openness to debate. If anyone thinks I’m ignoring them when I shouldn’t be, or opting out of a debate I should participate in, they can invoke my debate policy. If I were to violate my policy, I would expect it to harm my reputation.

You might expect that having an open, publicly-available debate policy would result in me spending more time debating. It does not. There are two main reasons for this.

First, the debate policy puts certain requirements on the debate. People have to meet some conditions. I tried to design the conditions to be limited, minimalist, and achievable by anyone who has a valuable correction for me. The conditions include agreeing to use my preferred debate structure, which imposes stopping conditions on both of us (certain actions must be taken before leaving the debate).

I’ve found that the people I thought were unserious, who I believed made low quality arguments, do not want to agree to any kind of formal debate rules. They self-select out of that. They opt out. While abuse of my policy is possible, it hasn’t happened, and it has some anti-abuse mechanisms built in.

Second, my debate policy enables me to judge discussions as low value and then stop responding with no explanation or with a link to the debate policy. I’m much more comfortable ending discussions than before because, if my judgment is mistaken, there’s a backup plan.

Suppose I have a 99% accuracy when I judge that someone is making bad arguments. In the past, I thought “I want to learn from the 1% of cases where I’m wrong, so I better keep discussing.” Now I think, “If I’m wrong, they can use my debate policy, so there’s still a reasonable way for me to be corrected. So it’s OK to end this discussion.”

Having a failsafe mechanism lets me be far more aggressive about opting out of discussions. Before I had a failsafe, I was super conservative. But now I’ve moved some of my fallibilist conservatism into the failsafe, and gotten it out of other conversations that don’t use my debate policy.

What if My Debate Policy Has an Error?

What if I incorrectly opt out of a discussion and also my debate policy has a flaw? As a strong fallibilist, this is the kind of issue I take seriously. I want to have a plan for that too. What is the backup plan for my debate policy?

I don’t think I need an infinite chain of failsafe mechanisms, but I think having several failsafes is better than one. Variety helps here because I don’t want every failsafe to contain the same flaw.

The primary backup plan for my debate policy is my Paths Forward policy, which I actually developed first. When I added the debate policy, instead of making the Paths Forward policy obsolete, I instead specified that it can still be used but only if the debate policy is failing in some way. The Paths Forward policy is more of a broad, generic opportunity for error correction. It’s less safe against abuse or demands on my time, but it’s more conservative and safe against me potentially making a mistake and not listening to good ideas. So it’s good as a secondary failsafe.

I also have a separate backup plan, which is discussing my debate policy, debate methodology or Paths Forward policy. I’m extra open to discussions on those specific topics, and I’m willing to attempt to discuss them using standard conversational practices in our culture rather than my own preferred discussion methods. I stick much closer to my old conservatism for just those topics. I don’t mind this because I consider them particularly interesting and important topics which I’d actually like to have more discussions about. If you have suggestions for good debate policies or methodologies, or criticism of mine, I especially want to hear it. Overall, being really conservative about avoiding ending discussions on just three topics, instead of all topics, is a big time saver. Plus, it’s uncommon that anyone wants to talk about discussion methodology. I try to bring that topic up sometimes but I find that most people decline to discuss it.

I’m also open to considering and trying out other discussion or debate methodologies if anyone claims to know of a good, rational methodology that is written down. Although my policy doesn’t guarantee trying any methodology regardless of what it says, this is something I’m especially interested in and flexible about. If someone won’t discuss using my methodology and also won’t suggest a methodology they claim is better, and I’m unimpressed by what they say, then I think it’s definitely reasonable and fallibilism-compatible to stop speaking with them.

Prioritizing

In the past, I didn’t prioritize much in discussions. Karl Popper and David Deutsch (fallibilists and advocates of evolutionary epistemology) taught me that we learn and make progress by correcting errors. They underemphasized that some errors matter more than others. The general impression they give is basically that you should correct all the errors that you can find. Finding errors is considered a high value activity and one of the two keys to progress (the second key is correcting the errors you find).

Common sense says to prioritize, but I didn’t find that convincing. It doesn’t adequately explain rational prioritization or why not to correct all the errors (the main reason given is that correcting all the errors would take too long, but I was willing to put a large amount of effort into epistemology in order to try to have better knowledge about it). Doesn’t every error matter?

One answer is that some people are making too many errors to deal with right now. They’re making an overwhelming number of errors and they need to focus their limited time, energy and attention on only some errors. They can’t do everything at once. This fits with the standard view pretty well, but no one said it to me. I figured it out eventually. I guess people didn’t want to admit to making so many errors. By pretending they were making a manageable number of errors, they fooled me into thinking every criticism would be useful to them.

Once I started seeing most people as making an unmanageable number of errors, I started prioritizing a lot more when choosing which criticism to share. I also tried telling them about what I think their situation is, but I got a lot of denials in response. Some people explicitly keep asking me to share every error I see with them, but if I do that they will (predictably to me) be overwhelmed and have a bad time. Oh well. I’m going to follow my best judgment about what to do (which is to prioritize criticism I share), and if they think I’m wrong and genuinely want more thorough criticism from me, they can invoke my debate policy. If they don’t invoke my debate policy, that signals to me that they aren’t really that serious about wanting to hear all the criticism I can come up with.

I also learned, with the help of Eliyahu Goldratt, a better perspective on prioritizing. This changed my mind more than practical considerations about people being busy or overwhelmed. In brief summary, optimizing non-bottlenecks doesn’t increase throughput. It’s important to identify constraints in a system – limiting factors – and then improve those. Most parts of the system have excess capacity already so improving them isn’t useful. In more philosophical terms, most errors won’t cause failure at current, active goals we care about. Instead, most errors mean we have a little less excess capacity at something but we can succeed without fixing the error. (A different way to frame it is that most “errors” aren’t really errors because they won’t cause failure at a goal we’re pursuing; less excess capacity isn’t actually an error.)

As a programmer, I already had experience with this. When you want to speed up slow software, you look for bottlenecks. The slowness isn’t divided equally or randomly around the code. Almost all the time use is in a small number of code paths. So you speed those up and then either you’re done or there are some new slowest places that you work on next. If you speed up code without prioritizing, it’s very unlikely to be effective. Most code paths are already fast enough (they have excess capacity) so optimizing them isn’t useful.
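
To make that concrete, here's a minimal sketch of that kind of profiling workflow using Python's built-in cProfile module. The functions and the workload are hypothetical; the point is just that the profiler shows where the time actually goes before you optimize anything.

```python
import cProfile
import pstats


def parse_records(lines):
    # Hypothetical hot path: in this made-up workload, most of the runtime is here.
    return [line.strip().split(",") for line in lines]


def summarize(records):
    # Hypothetical cold path: already fast enough, so optimizing it won't raise throughput.
    return len(records)


def main():
    lines = ["a,b,c\n"] * 100_000
    return summarize(parse_records(lines))


if __name__ == "__main__":
    profiler = cProfile.Profile()
    profiler.runcall(main)
    # Show only the few functions that dominate the runtime (the bottlenecks);
    # everything else has excess capacity and isn't worth optimizing.
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)
```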

A Debate Policy Might Cost Time and Energy for You

For some people, having a debate policy would increase rather than decrease the time they spend debating. Why? Because their current policies differ from mine. Instead of being very strongly conservative about ending discussions, due to fallibilism, they currently trust their judgment more and end (or don’t begin) discussions more liberally. I think they would benefit from a debate policy because they’re probably mistaken about some of the times they don’t discuss or end a discussion. Also (as long as you aren’t famous) you might find that people don’t use your debate policy very often. When people do use your debate policy, you may find you like the debates because they meet conditions of your choice that you specified in the policy. Debates that meet your written conditions may be higher quality, on average, than the debates you have now. (By the way, you can also make a discussion policy for non-debate discussions.)

If you are famous and would get too many debate requests, or actually just want to have fewer debates for another reason, there are ways to put harder-to-meet conditions on your debate policy (that are still rational) instead of just giving up on having a policy. There are potential solutions (there’s more information in the resources I link at the end).

In general, it can be hard to tell the difference between positive and negative outliers. Both look badly wrong and don’t make sense from your current perspective. Outliers can be people who are very different than you or ideas which are very different than your ideas. Positive outliers are the most valuable people to talk with or ideas to engage with. Dismissing people (or ideas) who seem wrong or counter-intuitive, while having no debate policy, means you’re likely to be dismissive of some positive outliers. Even if you try not to do that, what exactly is going to stop it from happening, besides a debate policy? Trying to recognize when something might be a positive outlier is not a reliable method; that’s a form of trusting yourself instead of planning around you sometimes being biased or failing at rationality. (I explain and criticize trying and trusting more in Fallibilism, Bias, and the Rule of Law. That article advocates pre-commitment to written rationality policies, which it compares with the rule of law.)

Debate policies are an example of pre-committing to something, in writing. People tend to avoid this because “What if I commit to it and then, when the time comes, I don’t want to do it?” But that’s kind of the point: it’s good to do anyway even if you have biases, intuitions or irrationalities that are resisting it. You can’t trust your judgment when the time comes and you think “I want to get out of this debate for rational, good reasons, not due to bias.” It might actually be bias. In the case where your subconscious is resisting engaging in rational debate, despite your debate policy conditions being met, you should suspect you might be wrong and your debate policy might be your savior. The odds are at least high enough to be worth making the effort.

Like if there’s a 10% chance you’re dealing with a positive outlier, that’s extremely worthwhile to engage with – that’s highly cost effective overall even though it has poor returns 9 out of 10 times. People’s intuitions often have trouble understanding that the cost/benefit ratio is favorable overall even if the large majority of instances are negative. Also, people often intuitively have trouble understanding that talking with someone who is probably an idiot (90% chance) but might be a genius (10% chance) can actually be more cost-effective than talking with someone who is guaranteed to be well above average. Talking with one genius (large, positive outlier) can provide more benefit than talking with ten smart people.
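
As a rough illustration with made-up numbers (the payoffs and cost below are assumptions for the sketch, not estimates of anything real):

```python
# Hypothetical payoffs and costs, in arbitrary "value units".
p_genius = 0.10         # chance this person is a positive outlier
payoff_genius = 50.0    # assumed payoff of a conversation with a genius
payoff_average = 2.0    # assumed payoff of a conversation with a reliably smart person
cost = 1.0              # assumed time cost of having any conversation

ev_maybe_genius = p_genius * payoff_genius - cost   # 0.1 * 50 - 1 = 4.0
ev_reliably_smart = payoff_average - cost           # 2 - 1 = 1.0

# 90% of the "maybe genius" conversations lose time, yet the expected value
# is still higher than the guaranteed above-average conversation.
print(ev_maybe_genius, ev_reliably_smart)
```

With these assumed numbers, the "maybe a genius" conversation has four times the expected value, even though nine out of ten such conversations go badly.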

The value of positive outliers is highest for experts who already know all the standard stuff and have a hard time making additional progress. When you’re a beginner who doesn’t know much, then finding some above-average knowledge is great. Also, bad ideas can confuse and mislead beginners, but they’re much less dangerous for experts (who already understand why they’re wrong).

Hunting outliers in conversations (or books) is similar to the venture capitalist investment strategy of looking for outlier companies (“unicorns”) and seeking really big wins. Many of those investors expect and plan to lose money on the majority of their investments.

If you do have a discussion that part of you doesn’t want to have, due to a pre-commitment or because you think it’s rational to discuss, then you need to try extra hard to be reasonable, unbiased, objective, fair, civil, etc. You need to put more work than normal into thinking about things from the other person’s point of view, being curious, avoiding tribalism, etc. It helps to write down what to do in advance, e.g. as a flowchart. When your intuitions don’t like something that you’re doing, you have to either resolve that problem or else, at least, understand that you’re at much higher risk than normal of being unreasonable, being a jerk, being dismissive, being biased, etc.

Also, if part of you thinks a conversation is valuable but part of you doesn’t want to have it, then you have conflicting ideas. There’s a problem to solve there.

Conclusion

I prioritize considering, “If I’m wrong, and you’re right, and you’re willing to share your knowledge, then how will I find out and improve?” I want there to be good answers to that. They need to be robust and resilient. They need to work even if I think you have dumb ideas, you seem unreasonable to me, you seem obviously wrong to me, we get along poorly, you have poor charisma, you are very unpopular, you rub people (including me) the wrong way, we have poor conversational rapport, we’re not friends, you are a member of a rival political tribe or philosophical school of thought, my intuition hates you, my friends hate you, I’m biased against you, you trigger my irrationalities, you seem really weird, and/or your ideas are extremely counter-intuitive to me.

Debate policies and methodologies are one of the tools that can help you be a good fallibilist who is more seriously open to error correction than most people. Consider creating a debate policy for yourself and also encouraging public intellectuals to make debate policies.

Further Reading and Watching

Comments

Browsing this post, here are some questions. Answer any you like, or just ignore them. I thought your write-up was really interesting.

  • Do you have a list of asymmetry statements applicable to common disagreement types, or even just a list of disagreement types or asymmetry statements? 
  • The bottleneck concept analogizes well. I'm curious about your criteria for a produced debate. "A debate is produced when..." when what?
  • When do you decide to write in paragraphs versus lists? I struggle with this, "Should I use a paragraph here, or a list header and list?" I'm rarely happy with whatever I choose.
  • I dislike reading any write-up that's nothing but an outline. However, they could offer efficiency benefits for writing. What do you think of outlines? 
  • Have you adopted graphical tools or notation systems to simplify or streamline your write-up of various arguments?
  • When you do offer criticisms about errors, what are they (fallacies, truth tables of statements and their logical forms, entailment/definition errors, factual corrections, all of the above, other)?
  • What is the most common type of criticism correction you offer? 
  • What part of an argument (validity of logical connections between assertions, truth of premises,  truth of conclusion, clarifying assertions) gets most of your attention in practice?
  • If you could prescribe a controlled English vocabulary and grammar to use during arguments, what would you strike from your controlled English?

and I can think of a few more things, but I'll hold off.

The bottleneck concept analogizes well. I'm curious about your criteria for a produced debate. "A debate is produced when..." when what?

Is "produced" a typo for "productive"?

Have you adopted graphical tools or notation systems to simplify or streamline your write-up of various arguments?

Yes, tree diagrams of ideas, debates, paragraphs and/or sentence grammar.

Idea trees info (the first link there has an actual essay).

Video: Philosophical analysis of Steven Pinker passage | Everything explained from grammar to arguments. I think this video, along with the 8-video gigahurt discussion series, would give you a better concrete sense of how I approach discussion and critical analysis. Warning: that's like 20 hours of total video. You might want to just skim around a bit to get a general sense of some of it.

What part of an argument (validity of logical connections between assertions, truth of premises, truth of conclusion, clarifying assertions) gets most of your attention in practice?

I try to prioritize only issues that would lead to failure at active, relevant goals such as reaching agreement, rather than bringing up "pedantic" errors that could be ignored. (People sometimes assume I purposefully brought up something unimportant. If you don't see why something is important, but I brought it up, please ask me why I think it matters and perhaps mention what you think is higher priority. Note: me explaining that preemptively every time would have substantial downsides.)

In practice in the last 5 years, I frequently talk about issues like ambiguity, misquoting, logic, bias, factual errors, social dynamics, or not answering questions. Also preliminary or meta issues like whether people want to have a conversation, what kind of conversation they want to have, what conversation methods they think are good, whether they think they have something important and original to say (and if not, is the knowledge already written down somewhere, if so where, if not why not?). Some of those topics can be very brief, e.g. a yes/no answer to whether someone wants to have a serious conversation can be adequate. I used to bring those topics up less but I started focusing more attention on them while trying to figure out why talking about higher level explanations often wasn't working well.

It's hard to successfully talk about complex knowledge when people are making lots of more basic mistakes. It's also hard to talk while having unstated, contradictory discussion expectations, norms or goals. In general, I think people in conversations communicate less than they think, understand each other less than they think, and gloss over lots of problems habitually. And this gets a lot worse when the people conversing are pretty different than each other instead of similar – default assumptions and projection will be wrong more – so as a pretty different person, who is trying to explain non-standard ideas, it comes up more for me.

More comments later probably.

Thank you for the reply, Elliot. 

Is "produced" a typo for "productive"?

No. You drew an analogy between debating and factory production with your reference to Goldratt's The Goal. You mentioned bottlenecks when addressing debates. My question was intended to extend your analogy appropriately, that is, to ask:

If a debate is a product, and you intend to raise throughput and remove bottlenecks, how do you decide when a debate is produced as opposed to still in production?

Oh. One way to determine the end of a debate is "mutual agreement or length 5 impasse chain". Many other stopping conditions could be tried.

If you want to improve the debating throughput, I think you'll want to measure the value of a debate, not just the total number of debates completed. A simple, bad model would be counting the number of nodes in the debate tree. A better model would be having each person in the debate say which nodes in the debate tree involved new value for them – they found it surprising, learned something new, changed their mind in some way, were inspired to think of a new counter-argument, etc. Then count the nodes each person values and add the counts for a total value. It's also possible to use some of the concepts without having measurements.
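
Here's a minimal sketch of that counting model; the node structure and names are assumptions for illustration, not an existing tool:

```python
from dataclasses import dataclass, field


@dataclass
class DebateNode:
    text: str
    valued_by: set = field(default_factory=set)   # participants who found this node valuable
    children: list = field(default_factory=list)


def total_value(node: DebateNode) -> int:
    # Sum, over the whole tree, how many participants marked each node as valuable.
    return len(node.valued_by) + sum(total_value(child) for child in node.children)


# Usage: Alice values both nodes, Bob values only the reply, so the total is 3.
reply = DebateNode("Counter-argument", valued_by={"Alice", "Bob"})
root = DebateNode("Initial claim", valued_by={"Alice"}, children=[reply])
print(total_value(root))  # 3
```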

So I was reviewing your material on trees but I got a bit lost. Do you and your debate opponent each create a tree of assertions that you modify as you progress through a learning/debate process? If so, what defines the links between nodes?

You wrote something about a child node contradicting a parent that I didn't get at all. I can track down the quote, if that's helpful. 

EDIT: found the quote! 

Decisive (also called conclusive or essential) arguments argue that the parent is incorrect. That implies that at least one of the parent or argument must be incorrect.

A picture would make this easier to understand. 

You introduce what I take to be several types of node structures:

  • conclusive/essential arguments
  • positive arguments
  • inconclusive negative arguments
  • explanatory comments
  • claims
  • facts
  • explanations of claims

I'm not sure if all those are nodes or some refer to node groups; I couldn't find visual examples to make that clear.

I have not tried using a tree to model a decisive argument where the rejected claim is listed in the tree. To do something similar, I would create a root node for the conclusion "It is not the case that <<claim>>" with premises as child nodes.

But your node system, did you develop these node choices from experience because you find them more helpful than some alternatives, or are they part of a formal system that you studied, or is their origin something else? 

With a tree built on premises and conclusions, the root node is the final conclusion. I learned to structure all arguments in textual outline form, and mostly stick with that; it's what I was taught.

There are plenty of examples on the forum of folks who write entire posts as an argument, or outline their arguments or points, so we are in good company. 

However, I would love to learn an algorithmic process for how two debaters work from separate trees to a single combined tree, whether it uses textual outlines or tree graphics. Are you aware of something like that or does your current system allow that? It would be new to me.

I'm not sure if all those are nodes or some refer to node groups

In general, any node could be replaced by a node group that shows more internal detail or structure. Any one idea could be written as a single big node or a group of nodes. Node groups can be colored or circled to indicate that they partly function as one thing.

what defines the links between nodes?

For conversation related trees, child nodes typically mean either a reply or additional detail. Additional detail is the same issue as node groups.

For replies, a strict debate tree would have decisive refutation as the only type of reply. You could also allow comments, indecisive arguments, positive arguments and other replies in a tree – I'd just recommend clear labelling for what is intended as a decisive refutation and what isn't.
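
As a rough sketch of how that labelling could be represented (the type and field names here are assumptions for illustration, not a fixed notation of mine):

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional


class LinkKind(Enum):
    DETAIL = "additional detail"       # more internal structure, like a node group
    DECISIVE_REFUTATION = "decisive"   # argues that the parent node is incorrect
    COMMENT = "comment"                # allowed, but labelled as not decisive
    INDECISIVE = "indecisive"          # other non-decisive arguments


@dataclass
class TreeNode:
    text: str
    link_to_parent: Optional[LinkKind] = None   # None for the root node
    children: list = field(default_factory=list)


# Usage: a claim with one decisive refutation and one clarifying detail under it.
claim = TreeNode("Capitalism lets CEOs do whatever they want.")
claim.children.append(TreeNode("No: e.g. it forbids hiring hitmen.", LinkKind.DECISIVE_REFUTATION))
claim.children.append(TreeNode("(Here 'whatever they want' means business decisions.)", LinkKind.DETAIL))
```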

But your node system, did you develop these node choices from experience because you find them more helpful than some alternatives, or are they part of a formal system that you studied, or is their origin something else?

Karl Popper developed a fallibilist, evolutionary epistemology focused on criticism and error correction. He criticized using positive (supporting or justifying) arguments and recommended instead using only negative, refuting arguments. But he said basically you can look at the critical arguments and evaluate how well (to what degree) each idea stands up to criticism and pick the best one. While trying to understand and improve his ideas, I discovered that indecisive arguments are flawed too, and that ideas should be evaluated in a binary way instead of by degree of goodness or badness.

Trees and other diagrams have a lot of value pretty regardless of one's views on epistemology. But my particular type of debate tree, which focuses on decisive refutations, is more specifically related to my epistemology.

However, I would love to learn an algorithmic process for how two debaters work from separate trees to a single combined tree, whether it uses textual outlines or tree graphics. Are you aware of something like that or does your current system allow that? It would be new to me.

It's useful to independently make trees and compare them (differences can help you find disagreements or ambiguities) or to make a tree collaboratively. I also have a specific method where both people would always create identical trees – it creates a tree everyone will agree on. I've written this method down several times but I wasn't able to quickly find it. It's short so I'll just write it again:

Have a conversation/debate. Say whatever you want. Keep a debate tree with only short, clear, precise statements of important arguments (big nodes or node groups should be avoided, though aren't strictly prohibited – I recommend keeping the tree compact but you don't necessarily have to. You can make a second tree with more detail if you want to). This tree functions as an organizational tool and debate summary, and shows what has an (alleged) refutation or not. Nodes are added to the tree only when someone decides he's ready to put an argument in the tree – he then decides on the wording and also specifies the parent node. Since each person has full control over the nodes he adds to the tree, and can add nodes unilaterally, there shouldn't be any disagreements about what's in the tree. Before putting a node in the tree, he can optionally discuss it informally and ask clarifying questions, share a draft for feedback, etc. The basic idea is to talk about a point enough that you can add it to the tree in a way where it won't need to be changed later – get your ideas stable before putting them in the tree. Removing a node from the tree, or editing it, is only allowed by unanimous agreement.
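
And here's a minimal sketch of those rules in code. The class and method names are hypothetical; the point is only that adds are unilateral while edits and removals require unanimous agreement.

```python
class SharedDebateTree:
    """Debate tree both people agree on: unilateral adds, unanimous edits/removals."""

    def __init__(self, participants):
        self.participants = set(participants)
        self.nodes = {}       # node_id -> {"text": str, "parent": node_id or None}
        self._next_id = 0

    def add_node(self, author, text, parent_id=None):
        # Anyone may add unilaterally; the author picks the wording and the parent node.
        assert author in self.participants
        assert parent_id is None or parent_id in self.nodes
        node_id = self._next_id
        self._next_id += 1
        self.nodes[node_id] = {"text": text, "parent": parent_id}
        return node_id

    def edit_node(self, node_id, new_text, approvals):
        # Editing (or removing) an existing node requires unanimous agreement.
        if set(approvals) != self.participants:
            raise PermissionError("Editing a node requires unanimous agreement.")
        self.nodes[node_id]["text"] = new_text


# Usage: each person adds their own short, stable statements to the shared tree.
tree = SharedDebateTree(["Alice", "Bob"])
claim = tree.add_node("Alice", "Capitalism gives company leadership pricing freedom.")
tree.add_node("Bob", "Central planning can price essential goods more fairly.", parent_id=claim)
```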

Really interesting, Elliot.

I need time to review what you wrote and try some things out. If you have any more writing on these methods to point me to, or graphical examples, I would like to see them.

Thanks!!

How's it going?

Good, Elliot, it's going good. I replied on your shortform. Read there to find out more about my work on my debate policies and my study of your tree-based debate process.

There's much more writing. See e.g. Multi-Factor Decision Making Math and some articles in the Classics and Research sections at https://criticalfallibilism.com. And the Idea Trees Links article has many examples, as do my videos.

This is interesting; I can understand your reasoning behind this. I'm also sure this helps you.

However, I would have liked a few more things:

  1. While your debate policy works for you, it sets a very high bar. Debating only with people who have written 20 articles... well, this greatly restricts the pool of people to talk to, so I don't see this being applied on the EA Forum.
    1. It could be interesting, for instance, to propose a debate policy that could work for people on the EA Forum.
    2. Also, proposing how such a policy can be implemented at a "forum-level scale", not just at an individual level
  2. Your post is interesting, but I feel like it could improve by having examples (same in your debate policy I think). There are one or two at the beginning but only at the beginning. I like seeing examples to anchor in my mind what these rules would look like in a real conversation.

I've never actually rejected a debate request because a person had fewer than 20 articles. However, that is what I'm comfortable offering as a guarantee to the general public, which is binding on me even if I don't like someone, think their argument is dumb, think their debate request is in bad faith, and would prefer to ignore them. 20 articles is a bar anyone can meet if they care enough. I don't want to promise in writing to debate with any person who asks, who doesn't meet that bar, no matter what.

Here's some more concrete writing about rationality policies: Using Intellectual Processes to Combat Bias

Would you (or anyone else reading this) be interested in debating something about EA with me? (I don't think finding a topic we disagree about will be difficult if we try.)

This article is interesting - and the case you make, that bias is hard to counter without due process, is convincing.

The policy you put forward (for you or Jordan Peterson) sounds good for public intellectuals who are really busy.

I have more trouble seeing how that would play out in the EA forum, however.


As for me personally, I'm not sure that I will use it now - as I feel like I agree with the points you make, I'm less busy than you so I answer to everything even if I don't agree with it, and I already try to do all you said in my mind.

(which might sound from the outside exactly like bias - but I feel like I have a track record of changing my viewpoint on complicated topics as I got better information, even for some core questions like "Is industrial civilization good?" or "Is capitalism good?").

I might use such a policy in the future if I feel this would be useful as I debate more people, however.


For debating an EA-related thing, I'm not sure I have a lot to debate on besides the basics. Maybe the following claims that I have in mind?

  • Not being vegetarian causes a lot of suffering (unless you somehow manage to get food from scarce places with non-factory farming) 
    • Conversely, donating to effective animal charities is one of the very top ways to reduce suffering in the world
  • Meditation, if done right, is one of the best ways to improve one's mind
  • Limits on energy depletion will have very serious effects on the world in the next 10-20 years, including but not limited to a probable long-term economic degrowth (this is one of the topics of my post)

Would you (or anyone else reading this) be interested in debating something about EA with me?

This is a yes or no question, but you didn't give a direct or yes/no answer to it. I choose to communicate more directly and literally than you do. The value of literalness in communication is actually one of the topics I'd have an interest in debating. You don't have to answer, but I wanted to repeat the question because not answering looked most likely accidental.

Oh, you're right, I wasn't clear enough. This feedback is appreciated.

Then my answer is "Yes, I agree", depending on the topic of course.

OK cool. I'm most interested in debating topics related to methodology and epistemology. They have larger potential impact than more specific topics, and they're basically prerequisites anyway. I don't think we'd be able to discuss e.g. animal welfare, and agree on a conclusion, without some methodology disagreements coming up mid-discussion and having to be resolved first.

The specific issue I'd propose to debate first is:

As for me personally, I'm not sure that I will use it now - as I feel like I agree with the points you make, I'm less busy than you so I answer to everything even if I don't agree with it, and I already try to do all you said in my mind.

(which might sound from the outside exactly like bias - but I feel like I have a track record of changing my viewpoint on complicated topics as I got better information, even for some core questions like "Is industrial civilization good?" or "Is capitalism good?").

It does sound like bias to me, as you predicted. And I don't think trying to do rationality things in your mind is adequate without things like written policies and transparency. So we have a disagreement here.

Ok, very well. 

I must admit that I'm usually not that interested in things like methodology and epistemology, as I associate that with bureaucracy in my head. But I agree that they are important - I just want to avoid the pitfall where this gets too abstract.

Maybe I'll start with the methodology I use to gather information (I use it implicitly in my head, but I don't know if writing it down somewhere would change anything).

  1. Let's say I get a new piece of information (say, that a serious drop in energy means a serious drop in economic growth). This is an idea that:
    1. It's new - I haven't heard it anywhere else
    2. It is supported by data, like say this graph:
    3. I have a rough idea of how the conclusion was obtained (I can trace back to a study or a book) - the source is OK
    4. It makes logical sense (the economy produces goods and services, and you need energy for that) - I see no internal flaw in this reasoning
    5. I don't have a serious counter point for that
    6. It's better than the previous explanation I had (economic growth is caused only by labor, capital and human ingenuity - which misses out on the fact that you need resources to produce goods and services)
  2. In such a case, what I do is: I accept the conclusion, as "best temporary explanation", and I live with it
  3. If I later find a better explanation, I accept it (if it is more complete, has more precise or more recent data, provides good counterpoints to the previous line of thinking I had, or has a more reliable source)


Now, the weakest part of this is number 1.5: there may be good counterpoints but I may not be aware of them (for instance, one could say that we can do decoupling and still grow the economy with less energy). There are 2 different cases:

  1. If this is not on an important topic, or it's information I can't really act on, then I don't do more research - maybe as I read more general stuff I'll stumble over something better?
  2. If it's important information (like if we will have less energy in the next decade, which would mean a very large recession), then I try to dig more into it
    1. By reading books and articles. I try mostly to read experts that aggregated a lot of interesting data in a big picture view, and for whom I've found little criticism. They often provide useful links. 
      1. Note that while I read scientific papers, that's rarely where I learn the best, since few of them provide a big picture, and their writing style is poorly suited to human psychology.
    2. For stuff that I write (like a book or article), I need to step up my game. Then I try to find reviewers who know their stuff - the quality of what I write depends on the quality of my reviewers. If I find one that I disagree with, great! It happened with the energy descent post: I exchanged a lot with Dave Denkerberger, who was very knowledgeable, so I had to find good counterarguments, or accept his conclusion (which I did on several occasions).
      1. For the energy/GDP stuff, for instance, we had only 2 or 3 graphs each - which was not enough. So I had some doubts about the validity of my data, dug deeper, read about a dozen papers on ecological economics... and found that, surprisingly, the energy/GDP relationship was even more supported by data than I initially envisioned.
  3. If I have two competing explanations that contradict each other, or if the data is poor on both sides, I flag the data point as "contested" in my head and I try not to use it in my reasoning, until I've done more research (this is the case for the causality of energy/GDP, whether "GDP causes energy" or "energy causes GDP" or both. There is no consensus)
    1. However, even if they disagree on causality, studies still indicate that a high GDP needs a lot of energy. Good enough, I use that instead. 


Now, this is very rough, I agree, but I feel like I learned a lot, and changed my views on a wide range of topics, so I feel like this kinda works so far.

There may be room for improvement, of course. What do you think about it?

I created a debate topic at https://forum.effectivealtruism.org/posts/gL7y22tFLKaTKaZt5/debate-about-biased-methodology-or-corentin-biteau-and

I will reply to your message later.

Please let me know if you have any objections to my summary of what the debate is about.
