Big thanks to Aaron Gertler for providing advice and reviewing/commenting on a draft of this article.
Edit note: I have come to think that “trajectory/uniqueness” should just be renamed “counterfactuality”, which would change the framework acronym; “COILS” might then be a decent acronym/name for the framework. However, I have not yet been motivated to update this post to reflect this change.
Update: I have now written a shorter introduction to this concept.
Summary
- In this article I describe a framework that, in my experience, can make one’s pro-con analysis for some decisions more accurate, faster, or clearer. This includes many decisions relevant to EA activities, such as specific career choices and selecting research topics.
- The framework, which I am temporarily calling the TUILS (“tools”) framework[1], essentially breaks down pros and cons into smaller conceptual components that are collectively exhaustive and arguably mutually exclusive. (I particularly think these characteristics help distinguish it from many decision-making gimmicks I've seen.)
- Specifically, the framework breaks down advantages {disadvantages} as follows: Summary claim: This plan would cause X, which is good {bad}.
- Trajectory/Uniqueness: X would not happen without this plan;
- Implementation: This plan involves a set of actions which can/would be implemented;
- Linkage: Implementation of this plan will result in X;
- Significance: X is a good {bad} thing.
- Although it resembles the classic “importance, neglectedness, tractability” (INT) framework for cause prioritization, it is meaningfully different: it focuses on evaluating the advantages and disadvantages of specific decisions rather than evaluating/comparing cause areas, which enables it to avoid some of the conceptual flaws associated with the INT framework.
- Ultimately, I do not want to portray this as some radically new and innovative tool; I recognize that it may seem fairly intuitive, and some analyses already account for things like counterfactual outcomes. However, I have rarely encountered a generalized framework of this sort, whereas references to the INT framework are ubiquitous. One could say that the INT framework is also somewhat intuitive, yet I would argue that it (like the TUILS framework) is a relatively simple way to mitigate one’s susceptibility to faulty assumptions and common biases (e.g., confirmation bias), and it can help normalize and semi-standardize terms in discussions about decision analysis.
Roadmap/Outline
- Disclaimers, epistemic status, etc.
- Explaining the framework
- Basic overview, including the four components
- Further theoretical contentions/characteristics
- Comparison with the INT framework
- Example usage
- Justifications for learning/using the framework
- Important characteristics
- General/indirect justifications
- Specific justifications
- Potential downsides
- Conclusion
Disclaimers, epistemic status, and related meta-notes
- TUILS vs. the “Stock Issues” framework: TUILS is based on but also moderately different from the “stock issues” framework that is common in competitive policy debate. I decided to present this framework as distinct in part because I have found that there are many different views on the stock issues (e.g., which issues are included, how each one is defined) and more generally because I just don’t want to deal with the conceptual baggage of policy debate theory. Also, the name “stock issues” sometimes gives people the initial impression that it is primarily about stocks, which it is not.
- This is essentially a rework of one of my previous forum posts: Introducing the Stock Issues Framework: The INT Framework's Cousin and an "Advanced" Cost-Benefit Analysis Framework - EA Forum. The main reason for this rework is to improve the way I present the material based on some feedback I received as well as some minor updates to the framework (see epistemic status).
- A communication disclaimer: Introducing potentially new/unfamiliar frameworks like this can involve miscommunication—especially when trying to balance communicative precision and accuracy with communicative efficiency; in this post, I will probably err a bit on the side of the latter. I am open to addressing any points of confusion as they arise.
- Epistemic status:
- In summary, there are a few related but distinct main claims in this post:
- “TUILS is logically valid”: I think it’s highly likely (>85%, moderate confidence) that the framework I present is “valid” in the sense that its components are collectively exhaustive and mutually exclusive (with the possible exception of implementation and linkage), such that, if used correctly, it produces the correct answer.
- “Something like TUILS would be generally useful/worthwhile for EA-aspirants to at least casually learn (such as to the extent that EA-aspirants typically become familiar with the INT framework)”: I’d say this is likely (>75%) but only with low confidence since 1) I have not been able to really debate the issue with any non-policy debaters at length, and 2) I recognize my own experience/familiarity with this framework heavily colors my view of its usefulness and its ease of learning.
- Further background for my epistemic status: I spent a few years learning and using the “stock issues” framework in high school policy debate, mulled over and used the concept for years after I graduated, lightly discussed it with people, and wrote some blog posts where I reformulated my ideas, but I’ve never really debated it at length with anyone who was not a policy debater. I have refined my views a couple of times, which suggests I may update my formulation/conception of the framework again, but over time my revisions have tended to become less and less substantive, to the extent that I would say I’ve maintained the same core idea for at least the past ~2 years. For example, in comparison with my previous EA Forum post on this last year, the main differences are just the renaming of some concepts, minor-to-moderate rewording of trajectory/uniqueness (formerly inherency), abandoning the 2x2 matrix, etc. (as opposed to substantive rework, removal, or addition of any components).
Describing the TUILS Framework
As with things like Bayesian reasoning and the INT framework, there is a spectrum of informal and formal ways to define/use this framework. The following explanation gives a sort of middle-ground version, since it reflects what I tend to actually use in practice and because it is easier to both explain the concepts and illustrate the framework’s overall usefulness this way.
Basic overview, including the four components:
The framework consists of four collectively-exhaustive and (arguably) mutually-exclusive conceptual components: trajectory/uniqueness, implementation, linkage, and significance. Each component encapsulates a logical part of a claimed advantage or disadvantage, as will hopefully be made clearer as I go through each one (and bring them all together):
- Trajectory/Uniqueness. This component essentially asks “What will happen in the world without the proposed action/change (as it relates to the claimed advantage/disadvantage)?” As with the other components, answering this often involves asking common sub-questions, such as “what kinds of policies/projects are currently (or will soon/eventually be) in place that deal with this”; “what is the current trend with the problem in question”; “will this problem eventually resolve itself/what’s the expected timeline for this issue”; etc. For disadvantages in particular, a broad way of phrasing this question is “(to what extent) will this problem occur even if the plan is not put into place?” Especially when doing retrospective analysis on the pros and cons of an action, this component essentially refers to counterfactual analysis (i.e., “what would have happened if we did not take those actions?”).
- Implementation. When formulating and presenting a course of action, it’s common to use phrases like “we will do X” (perhaps with details like “using Y resources”, “following Z timeline”, and so on). This component, however, calls into question the underlying assumptions baked into the plan regarding implementation, essentially asking “how will the proposed action/change actually be implemented? Can we actually perform the specified task (within the given deadline, with the given budget, etc.)?” For disadvantages in particular, the questions tend to be more along the lines of “does the plan actually do what the disadvantage implies?” (e.g., “is there actually no grandfather clause?”). Ultimately, it’s important to understand implementation in concert with its close counterpart: linkage.
- Linkage. This essentially asks “what happens in the world where (a given implementation of) the plan takes place: does the suggested problem actually diminish (or, in the case of disadvantages, increase)?” Technically, this does not directly ask “what are the effects of the plan’s implementation” or “to what extent does the plan fix/cause this problem,” but I find it easier/more natural to word my questions that way so long as I remember to take trajectory/uniqueness into account. Regardless of how you phrase the questions, the analysis here ultimately evaluates the “world with plan” so that one can compare it with the “world without plan” (assessed in the trajectory component): in theory (i.e., supposing you’ve controlled for other variables), any differences between these two worlds represent a causal effect of the plan. If the worlds are the same and the outcome supposed by the advantage/disadvantage does not materialize in either world, the advantage/disadvantage lacks linkage to the plan. In contrast, if the worlds are the same and the supposed outcome does materialize in both worlds, the advantage/disadvantage has problems in the trajectory segment of the analysis—or, to put it more naturally, it “lacks uniqueness” to the plan (hence my occasional use of the term “uniqueness” instead of “trajectory”).
- Significance. This is where normative/moral analysis finally enters. Significance essentially asks “So what? What is the moral difference between ‘world without the action’ and ‘world with (a given implementation of) the action’?” Technically, this is partly where debates over different moral frameworks come into play (e.g., deontology vs. utilitarianism, average utilitarianism vs. total utilitarianism), although, as with other questions, there is no “requirement” that you relitigate that specific issue every time. To be clear, though, I do think this can be applied under a non-consequentialist moral framework like deontology or virtue ethics: a deontological disadvantage, for example, could take the form “inaction means we do not do X; this plan would be implemented in a way which involves doing X; doing X violates deontology, which is bad.”
Putting all of these pieces together and summarizing, the framework breaks down advantages {disadvantages} as follows: Summary claim: This plan would cause X, which is good {bad}.
- Trajectory/uniqueness: X would not happen without this plan;
- Implementation: This plan involves a set of actions which can/would be implemented;
- Linkage: Implementation of this plan will result in X;
- Significance: X is a good {bad} thing.
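To make the structure concrete, here is a minimal sketch (in Python) of how one could treat the four components as rough numeric estimates and combine them. The function name, parameters, and figures are all my own illustrative assumptions rather than part of the framework itself, and (as discussed in the next section) real analysis is rarely this cleanly multiplicative:

```python
# A minimal illustrative sketch, not part of the framework itself: treat each
# component as a rough numeric estimate and combine them into one number.
# All names and figures here are invented assumptions for illustration.

def advantage_value(
    p_x_without_plan: float,  # trajectory/uniqueness: chance X happens anyway
    p_implemented: float,     # implementation: chance the plan is actually carried out
    p_x_with_plan: float,     # linkage: chance X happens given implementation
    value_of_x: float,        # significance: how good (or bad) X is
) -> float:
    # The plan only "gets credit" for the difference between the two worlds.
    counterfactual_gain = p_x_with_plan - p_x_without_plan
    return p_implemented * counterfactual_gain * value_of_x

# Weak uniqueness (X mostly happens anyway) shrinks the advantage even when
# implementation, linkage, and significance all look strong:
print(advantage_value(0.8, 0.9, 0.95, 100))  # ~13.5
print(advantage_value(0.1, 0.9, 0.95, 100))  # ~76.5
```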
Theoretical implications/contentions of the TUILS framework
In asserting that the four factors above are collectively exhaustive, this framework posits that every claimed advantage/disadvantage implicitly or explicitly relies on all four of these components, and any kind of challenge to a claimed advantage/disadvantage exclusively relates to one or more of these four concepts.[2] As a result, this analytical framework theoretically could be applied to any decision (i.e., not just government policy analysis), although it obviously is not always the best way to analyze choices (such as when trying to make split-second decisions based on instinct).
Further extending the points mentioned above: an important contention of this framework is that every component is necessary; similar to the idea of a zero in a multiplication equation, if any of the components is completely lacking (e.g., the implementation will completely fail), then every advantage/disadvantage that relies on that assumption will fall regardless of how accurate the other components are. In reality, the formal analysis is more complex than simplistic linear multiplication since, for example, achieving only 50% of the assumed degree of change (e.g., reduction of a pollutant) might not linearly translate to a 50% improvement (e.g., it might fall above or below a critical threshold for health effects). However, at the heuristic/informal level one can often make quick rough estimates as to the impact of, for example, a plan only being half as effective at reducing a problem as was originally assumed.
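A toy sketch of these two points (the “zero multiplier” and the nonlinearity caveat), where a purely hypothetical 60% health threshold stands in for the nonlinearity:

```python
# Toy sketch of the "zero in a multiplication equation" point and the
# nonlinearity caveat; the 60% threshold and all figures are invented.

def health_benefit(pollutant_reduction: float) -> float:
    # Suppose (hypothetically) health outcomes only improve once the
    # reduction crosses a critical 60% threshold.
    return 100.0 if pollutant_reduction >= 0.6 else 0.0

# If implementation completely fails, the whole advantage falls, no matter
# how strong the other components are:
p_implemented = 0.0
print(p_implemented * health_benefit(0.9))  # 0.0

# A plan "half as effective as assumed" (45% vs. 90% reduction) does not
# deliver half the benefit here; it misses the threshold and delivers none:
print(1.0 * health_benefit(0.45))  # 0.0, not 50.0
```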
The TUILS framework vs. the INT framework
I think it is helpful to briefly compare and contrast TUILS with the INT framework:
- Both frameworks try to break larger, more-complex questions into smaller, more-manageable pieces.
- The two frameworks both have varying levels of formality in usage: the informal approaches can serve as heuristics that efficiently spark questions; the formal approaches try to accurately identify the logic/steps for the analysis.
- The frameworks’ components are loosely similar, such as importance/significance and tractability/linkage.
- Crucially, however, TUILS and the INT framework have different “units of analysis”: the INT framework evaluates problem areas whereas TUILS breaks down decisions’ advantages and disadvantages.
- Partially as a result of this, I think TUILS avoids some of the inaccuracies/problems of the INT framework. To be clear, I still consider the INT framework helpful as a simple/easy heuristic, but it typically involves some imperfect assumptions, such as that “a focus area is more promising the more [...] neglected [...] it is,” when in fact a cause area may be so neglected that small initial investments are unable to surpass impact thresholds (e.g., “startup costs”). Many of these issues seem to stem from the fact that the INT framework tries to inform a variety of possible decisions by analyzing problems rather than by directly analyzing the decisions themselves (unlike TUILS).
Examples of the TUILS framework being applied
The following are some simplified examples of the framework’s usage—including the question/objection generation process—mainly for further illustration/clarification of how the framework works (rather than trying to primarily illustrate its value).
- Consider lobbying for some policy change in a developing country—for example, on tobacco policy. Suppose that the proposal is to fund an advocacy campaign that would push for tighter controls on cigarettes, with the primary claimed advantage being “it will (increase the likelihood of passing legislation that will) reduce the mortality caused by smoking.” To evaluate this advantage, you would likely face questions such as the following (a rough numeric sketch of this example appears after the full list of examples):
- Trajectory/Uniqueness: What would happen without this intervention? (Imagine for example that someone claims the campaign is likely to work because there is a “growing wave of support” for the reform: this might mean that the reform—or a slightly less strong version of the reform—already has a decent chance of passing. As part of this, it may be the case that the advocacy campaign will already receive sufficient funding.)
- Implementation: Do we actually have the necessary funding and can we actually meet the timeline outlined by the plan? (For example, are there any restrictions on foreign funding that have not been accounted for?)
- Linkage: Supposing that the plan is implemented (or, for a given implementation of the plan), what is the resulting likelihood that the desired reform will be signed into law—and subsequently, how effective will the desired reform be in reducing mortality caused by smoking? (The latter question introduces a recursion of this framework.)
- Significance (assuming a utilitarian moral framework): How does “reducing mortality caused by smoking” translate to changes in wellbeing? If one considers the goal to simply be reducing mortality caused by smoking, that might be achieved, but it’s not guaranteed that achieving that goal will lead to an increase in wellbeing, such as is more-directly measured by a metric like QALYs. (For example, it’s possible that there are other widespread environmental problems that significantly reduce the effect of smoking mortality reduction on QALYs.)
- When choosing a research topic, one of the most prominent justifications is discovering and/or proliferating useful knowledge about the issue.[3] When evaluating this justification for a variety of options, some of the major questions under each component would be as follows:
- Trajectory/Uniqueness: What aspect of this issue is currently unknown or misunderstood? If I don’t explain/discover this, will someone else do it eventually? When? How accurate do I expect their research to be?
- Implementation: Will I actually be able to devote sufficient time to this? Do I have the resources I need to do this research in the envisioned way?
- Linkage: (Under a given implementation) what is the likelihood that my research will be successful and/or to what extent?
- Significance: How valuable would it be to discover/explain the issue? Will the research still be relevant/actionable by the time I’m finished?
- When trying to evaluate the direct-impact benefits from working in a specific position (e.g., when deciding between multiple preliminary/contingent offers), some of the questions to consider would include:
- Trajectory/Uniqueness: If I don’t work in this field/position, what will happen? Who else will be working on this issue/in this position?[4]
- Implementation: What are the actual details of the position in question, and can I actually get this position, or will I only end up in something lower than or adjacent to the position I have in mind?
- Linkage: What will happen if I receive this position? What will I be able to do within this position? Will I be an effective worker in this field?
- Significance: So what if I improve the work in this field by a given amount? Is it an important line of work?
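Returning to the first (tobacco) example above: the following sketch shows how those questions might cash out numerically under a utilitarian framing. Every figure is invented purely to show how the components slot together, not to estimate real tobacco-policy advocacy:

```python
# Invented numbers for the tobacco-lobbying example; purely illustrative.

p_reform_without_campaign = 0.30  # trajectory/uniqueness: reform may pass anyway
p_campaign_runs = 0.80            # implementation: funding/legal restrictions may bite
p_reform_with_campaign = 0.50     # linkage: chance of passage given the campaign runs
qalys_if_reform_passes = 10_000   # significance: wellbeing gain if the reform passes

# The campaign only gets credit for the rise in passage probability it causes:
counterfactual_lift = p_campaign_runs * (p_reform_with_campaign - p_reform_without_campaign)
expected_qalys = counterfactual_lift * qalys_if_reform_passes
print(expected_qalys)  # ~1600 expected QALYs
```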
Reasons to learn/remember/use the framework
To be honest, I’ve come across plenty of frameworks and aids for decision-making and analysis (I even have a mini-book covering dozens of them): some are good, some are sketchy at best, and more broadly there are just so many that I think it’s natural to be skeptical of “yet another framework/model.” Despite this, I think at least two claimed characteristics of TUILS make it worth deeper scrutiny/attention:
- The fundamental logic is sound and relatively straightforward: the analysis breaks down pros and cons into smaller pieces without leaving any conceptual gaps (i.e., it is collectively exhaustive), and thus if used correctly it still does what it is meant to do (analyze pros and cons). This obviously still leaves room for user/input error, but it stands in contrast to simplistic frameworks like the Eisenhower Matrix, which may offer an occasionally-handy heuristic for when to do vs. delegate vs. delay things but can also produce incorrect conclusions with correct inputs: for example, just because something is “urgent” but “unimportant” doesn’t necessarily mean the best way to handle it is to delegate it, since those two factors don’t address questions like “how long does this task take” and “do I have anyone to delegate this to?” As another comparison, I like Spencer Greenberg’s FIRE model for when to use intuition in decision-making, but it still seems that there may be exceptions, such as where an “evolutionary” choice is better made by analytical thinking—unless you were to define things in circular terms (i.e., a choice is “evolutionary” iff it is related to evolution and should be made by intuition), which undermines the purpose of the framework. I also want to reemphasize my contention that the components do not overlap/are mutually exclusive as opposed to there being one component that is just an awkward catch-all (although I do recognize there may be some room for debate regarding implementation and linkage).
- The practical applicability is relatively wide: because the TUILS framework is not built around a specific area or context (e.g., business, philanthropy), in theory it could be applied wherever you could do normative/prescriptive analysis, which technically could be said to apply to every decision. Of course, that doesn’t mean that it’s a good idea to try to apply this everywhere: some situations demand instinctive decisions, for example. However, in a decent number of situations where it makes sense to do pro-con analysis it also makes sense to do this analysis, and that covers a lot of non-trivial decisions—including a lot of decisions regarding philanthropic interventions, career paths, research/writing topics, community organizing, government policy analysis, etc.
For what it’s worth, I also personally think that the framework’s basic idea (e.g., the four overarching questions) is relatively simple/easy to learn, but I admit I am probably not the best judge of its complexity since I am already familiar with it. Still, I think it’s near the complexity level of the INT framework, which I consider to be fairly simple.
General/Indirect Justifications (e.g., reference class justifications)
Partially building on the observations above, the following points are some inferential or otherwise indirect arguments that I think are worth mentioning before I get into the more-focused/specific justifications. Generally speaking:
- It’s relatively common advice to break down complex questions into smaller, more-solvable questions (provided you aren’t leaving out important parts of the larger question), and that is what this framework does.
- Many people in the EA community (myself included) think that it is worthwhile to learn/understand the INT framework on a basic level, and it shares some noteworthy similarities with TUILS, so there is some indirect reason to think this framework would also be helpful to at least casually learn/understand.
- This framework helps to add rigor and structure to an important step of the decision-making process (cost/benefit estimation), and greater rigor and structure are often good when making consequential decisions that do not require speed/efficiency.
Specific/Narrower Justifications
Moving on from these general arguments and looking more narrowly at specific arguments (most of which are examples/instances of the above points), I would contend that this framework can help with[5]:
- Challenging common biases and flagging some oversights. Especially when someone already likes {dislikes} some idea (e.g., a policy or project proposal), it is easy to just uncritically accept advantages {disadvantages} that fit with their overall beliefs. Walking through this framework doesn’t eliminate bias, but it does prompt people to at least ask basic questions challenging their assumptions, such as “is this advantage {disadvantage} actually unique to the plan, or will it mostly occur regardless of whether or not the plan is implemented”, “does this plan actually have the characteristics I assume it does, and will/can it be implemented the way I’m assuming”, “will this plan actually solve {cause} the problem to the extent that I’m assuming it will”, and “assuming that this plan does have this effect, to what extent does that actually matter?” Expanding on this further: even though the overarching question of each component may be intuitive, within each component there are many sub-questions that sometimes show up and may not be as immediately obvious in the moment (e.g., under trajectory/uniqueness: “are there any future changes that will occur that will make this obsolete”).
- Developing clearer and more-consistent terminology, both
- Interpersonally: When trying to have constructive discussions/debates, participants can understand each other more accurately and/or efficiently if they have common terminology (consider, for example, how much easier it is to discuss cause areas when people are familiar with the INT framework) and if they make their assumptions/points more explicit (e.g., “I think it will only be half as beneficial as you presume, because I assess that roughly 1/3rd of the problem will already have been resolved by other decisions/trends, and the plan will only fix 3/4ths of the remaining problem”—i.e., 3/4 × 2/3 = 1/2 of the original problem); and
- Intrapersonally: Especially in my time doing competitive debate, I’ve found that this framework helps me categorize and compare arguments—to essentially keep a mental filing cabinet of questions, examples, and claims—which makes it easier to use my past experiences with similar arguments to think of how to respond to arguments (both at the conceptual “what to say” and rhetorical “how to say it” levels).
- Generating ideas for how to compare options: When comparing between options (e.g., project topics), I have sometimes heard people give advice that may have sounded natural/good but really just amounted to focusing on a single factor, like “figure out what is the most important problem, then focus on that.” The issue is that such advice sometimes ignores the possibility that these “most important problems” are not very tractable or are already being addressed. Additionally, sometimes these single-factor approaches still leave you with a few options that are all loosely similar in that one factor. TUILS, much like the INT framework, is helpful for sparking ideas on what other factors/questions to consider when comparing options.
Potential (generic[6]) downsides
The following is a non-exhaustive list of potential downsides to learning/using this framework:
- Learning and becoming comfortable with the framework likely takes some time, which involves some opportunity cost.
- It may slow down your decision-making process when intuition/assumptions would have given you the same answer.
- As with many other frameworks, especially when used incorrectly it may give a sense of unfounded legitimacy to an evaluation.
- Although it does not inherently preclude you from using other frameworks, you may (whether because of time constraints or some other reason) end up not taking a different analytical approach that would have been better suited to the situation.
Conclusion
Ultimately, the TUILS framework only addresses part of the decision-making process and isn’t always optimal to use (especially when a basic pro-con analysis is already superfluous), but unlike many related frameworks (including the INT framework) it doesn’t have inbuilt imperfect assumptions or other oversimplifications: it simply breaks down the logical concept of pros and cons into their conceptual components.
I’ve analyzed and used the TUILS framework for years, and based on my personal experiences I would contend that even learning the basic skeleton (i.e., the main overarching question/idea for each component) can help with catching oversights and mitigating confirmation bias—even if only by helping you catch the mistakes faster than you would have otherwise. More generally though, I think that just as people in the EA community have found that the INT framework helps to normalize and semi-standardize language around the concepts of importance, neglectedness, and tractability, it seems that shared familiarity with the TUILS framework could similarly help decision-making discussions/debates.
That being said, I am still definitely open to suggestions for different component names as well as other suggestions or criticisms more generally (in fact, getting feedback is a major reason for this post).
Notes
The name is just an acronym of the component names. One may notice that I could have chosen the acronym “UTILS” which may seem to fit better, but my two main concerns with this were: 1) I did not want this to be so specifically associated with utilitarianism since it (arguably) does not require using a utilitarian framework with it; 2) I thought it might come across as a bit corny or even “too convenient.” Ultimately, I am still very open to taking suggestions/feedback on both the individual components’ names as well as names for the overall framework. (If only there were some EA-aligned organization that helped people name things) ↩︎
It is possible to make responses such as “this disadvantage is wholly true, but we still outweigh with our two advantages”, but this is just an argument at the “impact calculus” level; it is not challenging the argument itself. Additionally, I believe that technically speaking, one could model this situation as the component of significance being expanded to weigh multiple arguments at the same time. ↩︎
There may be other reasons for choosing certain research topics, such as signaling, generating interest in a topic, fulfilling the wishes of a benefactor, etc. Each of these justifications would be valid subjects of their own TUILS evaluation—although there likely will be important questions that cut across justifications (e.g., many justifications may rely on the likelihood that some information is actually discovered/proliferated). ↩︎
Technically, the concept of replacement (i.e., “I would be taking the position that someone else would have taken” as opposed to “I would be an additional person working on this team”) is a separate disadvantage if the implication is that if the decision is implemented, the other person will not get the job. However, it might be easier/faster to just fold such a disadvantage into your analysis of the advantage. ↩︎
In the past, when explaining this framework I have informally applied the framework to learning/using the framework—i.e., I loosely broke down the following justifications along the framework’s four components. In this post however, I decided not to do that in the main text since I figured it may be better to just explain the points in a standard/familiar way and note here that if someone wants I can go through those steps in a comment/followup. Still, I’ll preemptively say that a rough outline for the first justification for learning/using this framework (to mitigate biases/oversights) is “when and how often do I make these kinds of mistakes”, “would I be able to remember to apply this framework (and apply it correctly/not forget parts of it) at those times”, “would walking through the framework actually prompt me to recognize and correct my mistakes”, and “is the degree of mistake mitigation actually significant (e.g., how significant are the decisions in question)?” ↩︎
One could say that these downsides are generically applicable to basically any decision-making framework, but I still felt I should note them if only to make it clear that I acknowledge them. ↩︎