I recently published an article in the Georgetown Security Studies Review (GSSR) on the use of "argument management systems" (e.g., Kialo) for the complex debates that arise in fields where it is often impractical to resolve disagreements through standard empirical methods. I've long been puzzled that this method of discussion is not more widely supported in EA and Rationalist circles, and I am considering adapting my GSSR article to AI policy/safety research (or Longtermism more generally, which various people have criticized as too speculative/theoretical rather than grounded in empirical tests) and posting it here. Before doing that, however, I would love to get a sense of people's reasons for skepticism or apathy towards such methods, so that I can address them in the post.

For what it's worth, I have seen Leverage Research's report on the topic, and I am aware of the criticism that "argument mapping" (in some formats) is overly formal and too complicated. (I plan to respond to these points.)

In short, I expect my argument to be fairly similar to what I laid out in my GSSR article: the way we currently present arguments (i.e., predominantly through prose) is rife with points of failure and inefficiencies, especially given that debates are often not linear but branch and contain cross-cutting points. In fields like international relations and peace/conflict studies, I have repeatedly encountered instances where people fail to seriously address existing counterarguments, and more generally it is hard for audiences to determine who has or hasn't addressed a counterargument. In contrast, I think that making one's arguments more explicit, and keeping track of them in a format that is more searchable and more permanent than memory or prose, would help mitigate some of these problems.
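As a concrete illustration (my own sketch, not from the GSSR article), the branching, non-linear structure described above can be modeled as a simple claim tree, where each node is a claim and its children are pro/con responses. One immediate benefit of such a structure is that "who hasn't addressed which counterargument" becomes a mechanical query rather than a feat of memory. The `Claim` type and `unanswered_objections` helper below are hypothetical names for this sketch:

```python
from dataclasses import dataclass, field

# Minimal model of an argument map: each claim has a stance
# ("pro" or "con" relative to its parent) and child claims responding to it.
@dataclass
class Claim:
    text: str
    stance: str = "pro"          # "pro" supports the parent; "con" attacks it
    children: list = field(default_factory=list)

def unanswered_objections(claim):
    """Return the texts of 'con' claims that have no responses (leaf objections)."""
    found = []
    for child in claim.children:
        if child.stance == "con" and not child.children:
            found.append(child.text)
        found.extend(unanswered_objections(child))
    return found

# Example: a thesis with one rebutted objection and one unrebutted objection.
thesis = Claim("Argument maps improve debate quality", children=[
    Claim("They make counterarguments searchable", "pro"),
    Claim("They are too formal for most readers", "con", [
        Claim("Simplified formats reduce this cost", "pro"),
    ]),
    Claim("Voting gives little signal about importance", "con"),
])

print(unanswered_objections(thesis))
# -> ['Voting gives little signal about importance']
```

This is deliberately simplistic (real platforms like Kialo also track authorship, votes, and cross-links between branches), but it shows how an explicit structure makes unaddressed counterarguments searchable rather than dependent on readers' memory.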

To me, better methods of argumentation seem like a natural extension of the norms that promote statistics and experimental methods in science, but thus far I've found the EA/Rationalist communities fairly lukewarm towards the idea (even if they are more receptive on average than the general public).




I've toyed around with Kialo. Here are some thoughts about why it doesn't catch on:

  • Argument mapping has disadvantages for politics and career-building. It doesn't allow the rhetorician to slant the discussion, and it sharply limits the ability to gain credit by highlighting a point that's already been made, or restating it in a different way.
  • The platforms themselves do a poor job of attracting eyeballs, so even if somebody did a great job of building up the Kialo form of some argument, it would go unappreciated.
  • For those who take argument mapping seriously, we already have existing traditional forms that achieve much of the benefit of Kialo. The main benefit of Kialo is for helping amateurs and beginners navigate the complexities of an argument they're new to, but beginners and amateurs are those least likely to care about getting all the nuances of an argument.
  • Kialo's short response format and lack of citations make it hard to connect a point with a broader body of literature, and mean that misunderstandings are likely.
  • For those trying to use Kialo to understand an argument, it's a terribly unreliable resource.
  • Kialo specifically makes it hard to figure out which arguments are important. I think there's not much signal in its voting mechanism for impact.
  • It's less pleasant to read argumentation on Kialo than in a more traditional format.
  • The number of objections is overwhelming in some cases, while other arguments are missing entirely.

Looking ahead, I expect additional objections to come to the fore, such as ChatGPT and other LLMs being able to produce personalized sets of arguments and counterarguments from natural-language prompts (which they can already do to some extent).

It seems very plausible that the straightforward explanation is the true one: people simply aren't aware of the method and haven't had its usefulness explained to them.

Rationalists do a lot of argument mapping under the label "double crux" (and similar derivative names, like crux-mapping). I would even argue that the double-crux approach to argument mapping is better than the standard one, and that rationalists integrate explicit argument mapping into their lives more than likely any other identifiable group.

Also: more argument mapping / double-cruxing / ... is currently unlikely to create more clarity around AI safety, because we are constrained by Limits to Legibility, not by the ability to map arguments.

I think what Nuno is saying is true to an extent: more people would do argument mapping if they knew about it. I think another reason is that many people are uncomfortable, from a technical standpoint, engaging with math/logic/proofs, so there is inherently more demand for prose. Pretty much everyone who could engage with logic could also engage with prose, but not the reverse.

It's sorta like research papers vs. the articles summarizing them. Usually an article that summarizes a paper in a low-fidelity way has more demand (even ignoring the fact that it's published in a more widely read venue). Of course, lots of research papers still get written, but professors aren't really writing papers in response to the demand curve of the crowd. They might care, but at the end of the day they are following the citation and job-incentive gradients. Meanwhile, the only incentive I am offered is internet points.

For instance, I posted a mathematical formalization of when to focus on trying to increase the quality of the future vs. reducing x-risk. My intuition is that the post would have gotten (a bit) more engagement if I had written it in prose, even though I think the value of the post is an order of magnitude (or more) higher in the form I wrote it.

Either way, I very strongly agree: writing and reading prose is not an effective way to do research at scale (or perhaps at all, though to a lesser degree).