
I've been thinking about tacit linked premises on and off for the last few years in the context of arguments about AI and longtermism. They have seemed difficult to reason about without going over individual arguments with a fine-toothed comb, because I hadn't come up with a good search strategy. Since I wanted a test case for using ChatGPT for research, I decided to try it on this particular search problem. I was able to develop a list of related key terms, get a list of textbooks rich in thought experiments, and generate a list of some key examples.

Prompt: related terms for tacit linked premises
tacit linked premises
dependent premises
background belief
implicit premises
hidden assumption

Prompt: textbooks that cover [keyterms]

This was a long list, which I then libgen'd and searched for all the keyterms.
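The search step above can be sketched in a few lines. This is a minimal illustration, assuming plain-text copies of the textbooks sit in one directory; the function name and file layout are hypothetical, not part of the original workflow.

```python
from pathlib import Path

# Key terms from the ChatGPT prompt above
KEYTERMS = [
    "tacit linked premises",
    "dependent premises",
    "background belief",
    "implicit premises",
    "hidden assumption",
]

def search_textbooks(directory):
    """Scan every .txt file in `directory` and report which keyterms appear in each."""
    hits = {}
    for path in Path(directory).glob("*.txt"):
        text = path.read_text(errors="ignore").lower()
        found = [term for term in KEYTERMS if term in text]
        if found:
            hits[path.name] = found
    return hits
```

Anything more serious would want stemming and near-synonym matching, since textbooks rarely use these phrases verbatim.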

Following are my notes on some of the interesting patterns that surfaced or seem common to me.

Marginal vs. Universal Moral Arguments

The 'if everyone followed this rule' framing, when what's actually on offer is you, on the margin, following the rule. The universal and marginal versions can have very different costs and consequences.

Rule Enforcement

Many arguments involve a motte and bailey between merely holding to the moral rule yourself and being obligated to enforce it against those who do not hold to it.

Comparison of Moral Goods

Many problems hand-wave away the fact that comparing moral goods runs into the same difficulties as inter-agent utility comparison in general, relying instead on tacit moral symmetry arguments.

Underspecified Costs

The cost of inference goes unacknowledged, implying that people who don't spend time working out the implications of their own beliefs are acting immorally. The emotional and opportunity costs of living by unusual rules are elided, as are the costs of reducing uncertainty about key parameters.

Emotional Pain as Currency

The implicit unit of account is how much distress various imaginary scenarios cause. This ignores the costs and second-order effects of treating that as a valid form of moral inference.

Symmetry Arguments

Symmetries are often assumed, or left underspecified along many dimensions, through appeal to simple symmetry. Related to the above via the assumption of equivalent costs, or the assumption that a moral duty will fall equally on everyone.

Invariance Assumptions with Far Inferential Distance

Relatedly, things far away in space or time are also far away in inferential cost and uncertainty. Transplanting arguments to distant places, times, or extreme conditions and assuming the relevant relations still hold can beg the question by assuming what the argument was originally trying to prove. Related to the static-world fallacy, arguments considered in isolation, and hasty generalization.

Naturalist Assumption

The assumption that the things being compared in a moral quandary can in principle be brought under the same magisterium of analysis, when this is either unclear or exactly what the thought experiment is trying to prove in the first place.

What You See is All There Is Fallacy

A fallacy that works in conjunction with proof by exhaustion: by dealing with all apparent objections, we are 'forced' to agree with the 'only remaining' conclusion, even though the ontology of the examples hasn't been shown to logically exhaust the hypothesis space.

A concrete example is what seemed to happen with EA and the relation between the drowning pond argument and longtermist arguments. Suppose a person encounters the drowning pond argument and accepts it as generally or directionally correct. They might then reflect as follows: "Ah, I was made aware of a good I would like to purchase (lives) that is greater in utility than my current marginal use of funds! But if I condition on having encountered such a thing, it stands to reason that there might be more such arguments. I should preserve my limited capital against discovering greater uses of funds!" (To address the obvious: this recursive argument needs a base case to avoid looping infinitely, so they would need to think about some sort of explore/exploit tradeoff.)
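The recursive reflection above can be made concrete as a toy model. The names and the fixed exploration horizon are my assumptions for illustration; a fixed number of exploration rounds serves as the base case that stops the recursion.

```python
def allocate(budget, best_known_utility, explore_rounds, discover):
    """Toy model of the reflection above: keep looking for better uses of
    funds, with a finite exploration horizon as the recursion's base case.

    `discover` is a callable returning the utility of the next argument
    encountered (discovery is treated as cost-free in this toy model).
    """
    if explore_rounds == 0:
        # Base case: stop searching and exploit the best option found so far.
        return ("exploit", best_known_utility, budget)
    candidate = discover()
    return allocate(budget,
                    max(best_known_utility, candidate),
                    explore_rounds - 1,
                    discover)
```

A real version would charge a cost per round of search and weigh it against the expected improvement, which is the explore/exploit tradeoff the parenthetical gestures at.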


Equivocation

Related to motte and bailey, this is about changing the definition or connotation of a key term over the course of an argument in order to create a surprising juxtaposition in your intuition, which the argument relies on for its rhetorical force. It exploits our tacit intuition that words refer to the same things when used close together in time, and that the transformations applied to them don't break the key ways they causally interacted with other things.

People Should Follow Rules They Couldn't Have Generated Themselves

The issue with this is that being able to understand an argument isn't the same as understanding its structure well enough to rework it when appropriate, or to recognize when it leaves its domain of validity.

Assumption of Non-Adversarial Dynamics

Assumes that the data you see is drawn from the natural distribution and not specifically selected against your decision theory. Related to systemic bias: ignoring the conditions that gave rise to a situation.


On their own, tacit linked premises do not show that an argument is invalid, but they do suggest two considerations:

  1. That the argument hasn't established itself to the degree of rigor that would support the strength or remit of the proposed moral responsibility.
  2. That we would roughly expect arguments with more tacit linked premises to be invalid on further consideration more often.

Further work: GPT can also be used to track down specific examples of the various patterns above.

