Cross-posted from the Charity Entrepreneurship blog.

Acknowledgements. I’d like to thank Spencer Greenberg for both inspiring the original idea with Clearer Thinking’s Belief Challenger tool and for coming up with a much better name for the concept than my original “steelmanning back and forth”.

I have a tool for thinking I call “steelman solitaire” that I have found leads to much better conclusions than “free-style” thinking, so I thought I should share it with more people. In summary, it consists of arguing with yourself in the program Workflowy, alternating between writing a steelman of an argument, a steelman of a counter-argument, a steelman of a counter-counter-argument, and so on. (I will explain steelmanning later in the post; in brief, it is the opposite of a strawman argument. Steelmanning means trying to present the strongest possible version of an opposing view.) In this blog post I’ll first list the benefits, then explain the broad idea, and finally go into more depth on how to do it.

BENEFITS

  1. Structure forces you to do the thing you know you should do anyway. Most people reading this already know that it’s important to consider the best arguments on all sides instead of just the weakest arguments on the other. Many also know that you can’t just consider a single counter-argument and call it a day. However, it’s easy to forget to do this in practice. The structure of this method makes you much more likely to follow through on your existing rational aspirations.
  2. Clarifies thinking. I’m sure everybody has experienced a discussion that’s gone all over the place, and by the end you’re more confused than when you started. Points get lost and forgotten while others dominate. This approach helps to organize and clarify your thinking, revealing holes and strengths in different lines of thought. 
  3. More likely to change your mind. As much as we aspire otherwise, most people, even the most competent rationalists, will often become entrenched in a position due to the social nature of conversations. In steelman solitaire, there’s no other person to lose face to or to hurt your feelings. This makes you more likely to change your mind than a lot of other methods do.
  4. Makes you think much more deeply than usual. A common feature of people I would describe as “deep thinkers” is that they’ve often already thought of my counter-argument, and the counter-counter-counter-etc-argument. This method will make you really dig deeply into an issue. 
  5. Generates steelmen that are compelling to you. A problem with a lot of debates is that what is convincing to the other person isn’t convincing to you, even though there are actually good arguments out there. This method allows you to think of those reasons instead of getting caught up in what another person thinks should convince you.
  6. You can look back at why you came to the belief you have. Like most intellectually-oriented people, I have a lot of opinions. Sometimes so many that I forget why I came to hold them in the first place (but I vaguely remember that it was a good reason, I’m sure). Writing things down can help you refer back to them later and re-evaluate. 
  7. Better at coming to the truth than most methods. For the above reasons, I think that this method makes you more likely to come to accurate beliefs.

THE BROAD IDEA

Strawmanning means presenting the opposing view in the least charitable light, often so uncharitably that it does not resemble the view the other side actually holds. The term “steelmanning” was coined as a counter to this; it means taking the opposing view and trying to present it in its strongest form. Steelmanning has sometimes been criticized because the alternative belief proposed by a steelman often isn’t what the other side actually believes either. For example, there’s a steelman argument that the reason organic food is good is that monopolies are generally bad, and Monsanto having a monopoly on food could lead to disastrous consequences. This might indeed be a belief held by some people who are pro-organic, but a huge percentage of people are just falling prey to the naturalistic fallacy.

Nonetheless, while steelmanning may not be perfect for understanding people’s true reasons for believing propositions, it is very good for coming to more accurate beliefs yourself. If you dismiss buying organic because you believe people only buy it out of the naturalistic fallacy, you might be missing the fact that there is a good reason for you to buy organic: you think monopolies on food are dangerous.

However, and this is where steelmanning back and forth comes in, what if buying organic doesn’t actually help break the monopoly? Maybe, upon further investigation, Monsanto doesn’t have a monopoly? Or maybe multiple organizations have patented different gene edits, so there’s no true monopoly?

The idea behind steelman solitaire is to not stop at steelmanning the opposing view. It’s to steelman the counter-counter-argument as well. As has been said by people more eloquent than myself, you can’t just consider an argument and a counter-argument and call yourself a virtuous rationalist. There are very long chains of counter^x arguments, and you want to consider the steelman of each of them. Don’t pick any side in advance. Just commit to trying to find the true answer.

This is all well and good in principle, but it can be challenging to keep organized. This is where Workflowy comes in. Workflowy lets you nest counter-arguments under arguments, counter-counter-arguments under counter-arguments, and so forth. That way you can zoom in and focus on one particular line of reasoning, realize you’ve gone so deep you’ve lost the forest for the trees, zoom out, and see what triggered the consideration in the first place. It also allows you to quickly look at the main arguments for and against. Here’s a worked example for a question.
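A minimal, purely illustrative sketch of what the nesting might look like for the organic food question (a real session would go much deeper):

  • Should I buy organic food?
    • Monopoly argument: buying organic weakens Monsanto’s monopoly on food, and monopolies can lead to abuses of power.
      • No-true-monopoly counter: multiple organizations have patented different gene edits, so no single company controls the market.
        • Concentration counter-counter: even without a strict monopoly, a handful of companies could still hold a dangerous amount of market power.
    • Naturalistic fallacy argument: many people buy organic only because “natural” feels better, which is not by itself a reason for me to buy it.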

TIPS AND TRICKS

That’s the broad-strokes explanation of the method. Below, I’ll list a few pointers that I follow, though please do experiment and tweak. This is by no means a final product. 

  • Name your arguments. Instead of just saying “we should buy organic because Monsanto is forming a monopoly and monopolies can lead to abuses of power”, call it the “monopoly argument” in bold at the front of the bullet point, then write the full argument in normal font. Naming arguments condenses them, which gives you more cognitive workspace to play around with and allows you to see your arguments from a bird’s-eye view.
  • Insult yourself sometimes. I usually (always) make fun of myself or my arguments while using this technique, just because it’s funny. Making your deep thinking more enjoyable makes you more likely to do it instead of putting it off forever, much like including a jelly bean in your vitamin regimen to incentivize you to take that giant gross pill you know you should take. 
  • Mark arguments as resolved once they’re settled. If you dive deep into an argument and come to the conclusion that it’s not compelling, mark it clearly as done. I write “rsv” at the beginning of the entry, but you can use anything that will remind you that you’re no longer concerned with that argument. Follow up with a little note at the beginning of the thread giving either a short explanation of why it was ruled out or, ideally, just the named argument that beat it.
  • Prioritize ruling out arguments. This is a good general approach to life and one we use in our research at Charity Entrepreneurship. Try to find out as soon as possible whether something isn’t going to work. Take a moment when you’re thinking of arguments to think of the angles that are most likely to destroy something quickly, then prioritize investigating those. That will allow you to get through more arguments faster, and thus, come to more correct conclusions over your lifetime. 
  • Start with the trigger. Start with a section where you describe what triggered the thought. This can often help you get to the true question you’re trying to answer. A huge trick to coming to correct conclusions is asking the right questions in the first place. 
  • Use in spreadsheet decision-making. If you’re using the spreadsheet decision-making system, then you can play steelman solitaire to help you fill in the cells comparing different options. 
  • Use for decisions and problem-solving generally. This method can be used for claims about how the universe is, but it can also be applied to decision-making and problem-solving generally. Just start with a problem statement or decision you’re contemplating, make a list of possible solutions, then play steelman solitaire on those options. 
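Putting a few of these tips together, a played-out thread might look something like this (again, purely illustrative):

  • Trigger: a documentary claimed Monsanto controls the food supply; the real question is whether I should start buying organic.
    • Monopoly argument: buying organic weakens a dangerous monopoly on food.
      • rsv No-true-monopoly counter (beaten by the concentration argument): even with patents spread across several companies, the market is still highly concentrated.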

CONCLUSION

In summary:

  • Steelman solitaire means steelmanning arguments back and forth repeatedly
  • It helps with:
    • Coming to more correct beliefs
    • Getting out of unproductive conversations
    • Making sure you do epistemically virtuous things that you already know you should do
  • The method is to make a claim, write a steelman of a counter-argument, then a steelman of a counter-counter-argument, and so on until you can’t go any further or are convinced one way or the other
