This is a crosspost for Controlling for a thinker’s big idea by Magnus Vinding.

This post is an attempt to write up what I consider a useful lesson about intellectual discourse. The lesson, in short, is that it is often helpful to control for a thinker’s big idea. That is, a proponent of a big idea may often overstate the plausibility or significance of their big idea, especially if this thinker’s intellectual persona has become strongly tied to that idea.

This is in some sense a trivial lesson, but it is also a lesson that seems to emerge quite consistently when one does research and tries to form a view on virtually any topic. Since I have not seen anyone write about this basic yet important point, I thought it might be worth doing so here (though others have probably written about it somewhere, and awareness of the phenomenon is no doubt widespread among professional researchers).

Typical patterns of overstatement, overconfidence, and overemphasis

The tendency for a thinker to overstate their big idea often takes the following form: when many different factors contribute to a given effect, a thinker with a big idea may be inclined to highlight one particular factor and then confidently present it as though it were the only relevant one, in effect downplaying other plausible factors.

Another pattern arises when a thinker narrowly advocates their own approach to a particular problem while quietly neglecting other approaches that may be similarly, or even more, helpful.

In many cases, the overstatement mostly takes the form of skewed emphasis and framing rather than explicit claims about the relative importance of different factors or approaches.

Analogy to sports fans

An illustrative analogy might be sports fans who are deeply invested in their favorite team. For example, if a group of football fans argue that their favorite team is objectively the best one ever, we would rightly be skeptical of this assessment. Likewise, if such fans complain that referee calls against their team tend to be deeply unfair, we should hardly be eager to trust them. The sports fans are not impartial judges on these matters.

While we might prefer to think that intellectuals are fundamentally different from dedicated sports fans, it seems that there are nevertheless some significant similarities. For instance, in both cases, identity and reputation tend to be on the line, and unconscious biases often push beliefs in self-serving directions.

Indeed, across many domains of life, we humans frequently act more like sports fans than we would like to admit. Hence, the point here is not that intellectuals are uniquely similar to sports fans, but simply that intellectuals are also — like everyone else — quite like sports fans in some significant respects, such as when they cheer for their own ideas. (An important corollary of this observation is that we usually need to consult the work of many different thinkers if we are to acquire a balanced picture of a given issue — an insight that is, of course, also widely appreciated among professional researchers.)

I should likewise clarify that my point isn’t that scholars with a big idea cannot be right about their big idea; sometimes they are. My point is merely that if a thinker is promoting some big idea that has become tied to their identity and reputation, then we have good reason to be a priori skeptical of this thinker’s own assessment of the idea. (And, of course, this point about a priori skepticism also applies to me, to the extent that I am advancing any particular idea, big or small.)

Controlling for the distorting influence of overconfidence and skewed emphases

Why do people, both scholars and laypeople, often state their views with excessive confidence? Studies suggest that a big part of the reason is simply that overconfidence works: it persuades others.

Specifically, in studies where individuals can earn money if they convince others that they did well in an intelligence test, participants tend to display overconfidence in order to be more convincing, and this overconfidence in turn makes them significantly more persuasive to their audience. In other words, overconfidence can be an effective tool for influencing and even outright distorting the beliefs of receivers.

These findings suggest that we actively need to control for overconfidence, lest our minds fall for its seductive powers. Similar points apply to communication that emphasizes some ideas while unduly neglecting others. That is, it is not just overconfidence that can distort the beliefs of receivers, but also the undue neglect of alternative views, interpretations, approaches, and so on (cf. the availability heuristic and other salience-related biases).

Examples of thinkers with big ideas

Below, I will briefly list some examples of thinkers who appear, in my view, to overstate or overemphasize one or more big ideas. I should note that I think each of the thinkers mentioned below has made important contributions that are worth studying closely, even if they may at times overstate their big ideas.

Kristin Neff and self-compassion

Kristin Neff places a strong emphasis on self-compassion. In her own words: “I guess you could say that I am a self-compassion evangelist”. And there is indeed a large literature that supports its wide-ranging benefits, from increased self-control to greater wellbeing. Even so, it seems to me that Neff overemphasizes self-compassion relative to other important traits and constructs, such as compassion for others, which is also associated with various benefits. (In contrast to Neff, many psychologists working in the tradition of compassion-focused therapy display a more balanced focus on compassion for both self and others, see e.g. Gilbert et al., 2011; Kirby et al., 2019.)

One might object that Neff specializes in self-compassion and that she cannot be expected to compare self-compassion to other important traits and constructs. That might be a fair objection, but it is also an objection that in some sense grants the core point of this post, namely that we should not expect scholars to provide a balanced assessment of their own big ideas (relative to other ideas and approaches).

Jonathan Haidt and the social intuitionist model of moral judgment

Jonathan Haidt has prominently defended a social intuitionist approach to moral judgment. Simply put, this model holds that our moral judgments are almost always dictated by immediate intuitions and only later rationalized with reasons.

Haidt’s model no doubt has a lot of truth to it, as virtually all of his critics seem to concede: our intuitions do play a large role in forming our moral judgments, and the reasons we give to justify our moral judgments are often just post-hoc rationalizations. The problem, however, is that Haidt appears to greatly understate the role that reasons and reasoning can play in moral judgments. That is, there is a lot of evidence suggesting that moral reasoning often does play an important role in people’s moral judgments, and that it frequently plays a larger role than Haidt’s model seems to allow (see e.g. Narvaez, 2008; Paxton & Greene, 2010; Feinberg et al., 2012).

David Pinsof and hidden status motives

David Pinsof emphasizes the hidden status motives underlying human behavior. In a world where people systematically underestimate the influence of status motives, Pinsof’s work seems like a valuable contribution. Yet it also seems like he often goes too far and overstates the role of status motives at the expense of other motives (which admittedly makes for an interesting story about human behavior). Likewise, it appears that Pinsof makes overly strong claims about the need to hide status motives.

In particular, Pinsof argues that drives for status cannot be openly acknowledged, as that would be self-defeating and undermine our status. Why? Because acknowledging our status drives makes us look like mere status-seekers, and mere status-seekers seem selfish, dishonest, and low in status. But this seems inaccurate to me: it appears to assume that humans are entirely driven by status motives while simultaneously needing to seem altogether uninfluenced by them. An alternative view is that status motives exert a significant, though not all-powerful, pull on our behavior, and acknowledging this pull need not make us appear selfish, dishonest, or low-status. On the contrary, admitting that we have status drives (as everyone does) may signal a high level of self-awareness and honesty, and it hardly needs to paint us as selfish or low-status (since, again, we are simply acknowledging some basic drives that are shared by everyone).

It is also worth noting that Pinsof seems to contradict himself in this regard, since he himself openly acknowledges his own status drives, and he does not appear to believe that this open acknowledgment is self-defeating or greatly detrimental to his social status, perhaps quite the contrary. Indeed, by openly discussing both his own and others’ hidden status motives, it seems that Pinsof has greatly boosted his social status rather than undermined it.

Robin Hanson and grabby aliens

Robin Hanson has many big ideas, and he seems overconfident about many of them, from futarchy to grabby aliens. To keep this section short, I will focus on his ideas related to grabby aliens, which basically entail that loud and clearly visible aliens explain why we find ourselves at such an early time in the history of the universe, since such expanding aliens would fill the universe and thereby preclude later origin dates.
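To make this selection effect concrete, here is a minimal toy simulation of my own (a one-dimensional sketch, not Hanson et al.'s actual model; the hard-steps power law for origin times follows their general framework, but the parameter values and the simplified geometry are made up for illustration). The idea is that candidate civilizations can only arise at locations that no loud civilization's expanding frontier has already reached, which skews the origin dates of surviving civilizations earlier:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D universe (all units and values are made up for illustration).
L, T_MAX = 100.0, 10.0   # spatial extent and latest possible origin date
N_GRABBY = 20            # number of loud, expanding civilizations
SPEED = 2.0              # expansion speed of each loud civ's frontier
N_HARD_STEPS = 3         # hard-steps power law: origin density ~ t**(n-1)

def origin_times(size):
    # Inverse-CDF sampling: P(T <= t) = (t / T_MAX)**n, so t = T_MAX * U**(1/n).
    return T_MAX * rng.random(size) ** (1.0 / N_HARD_STEPS)

# Loud civs get random birthplaces and power-law birth times. (Simplification:
# they arise independently, ignoring that they would also exclude one another.)
grabby_pos = rng.uniform(0, L, N_GRABBY)
grabby_t = origin_times(N_GRABBY)

# Candidate civilizations like us: they can only arise if no loud civ's
# frontier has reached their location before their origin date.
cand_pos = rng.uniform(0, L, 100_000)
cand_t = origin_times(100_000)

# Frontier of loud civ j reaches position x at grabby_t[j] + |x - pos[j]| / SPEED.
arrival = grabby_t[None, :] + np.abs(cand_pos[:, None] - grabby_pos[None, :]) / SPEED
survives = (cand_t[:, None] < arrival).all(axis=1)

print(f"mean origin date, ignoring loud aliens: {cand_t.mean():.2f}")
print(f"mean origin date of surviving civs:     {cand_t[survives].mean():.2f}")
```

Running this, the mean origin date of the surviving candidates should come out noticeably earlier than the unconditional mean, which is the basic sense in which loud aliens can "explain" early origin dates.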

To be clear, I think Hanson et al.’s grabby aliens model is an important contribution. The model makes some simplifying assumptions, such as dividing aliens into quiet aliens that “don’t expand or change much” and loud aliens that “visibly change the volumes they control”, and Hanson et al. then proceed to explore the implications of these simplifying assumptions, which makes sense. Where things get problematic, however, is when Hanson goes on to make strong statements based on his model, without adding the qualification that his conclusions rely on some strong and highly simplifying assumptions. An example of a strong statement is the claim that loud aliens are “our most robust explanation for why humans have appeared so early in the history of the universe.”

Yet there are many ways in which the simplifying assumptions of the model might be wrong, and which Hanson seems to either ignore or overconfidently dismiss. To mention just two: First, it is conceivable that much later origin dates are impossible, or at least prohibitively improbable, due to certain stellar and planetary conditions becoming highly unfavorable to complex life in the future (cf. thegreatatuin, 2016; 2017). Since we do not have a good understanding of the conditions necessary for the evolution of complex life, it seems that we ought to place a significant probability on this possibility (while also placing a significant probability on the assumption that the evolution of complex life will remain possible for at least a trillion years).

Second, Hanson et al.’s basic model might be wrong in that expansionist alien civilizations could generally converge to be quiet, in the sense of not being clearly visible; or at least some fraction of expansionist civilizations could be quiet (both possibilities are excluded by Hanson et al.’s model). This is not a minor detail, since if we admit the possibility of such aliens, then our observations do not necessarily give us much evidence about expansionist aliens, and such aliens could even be here already. Likewise, quiet expansionist aliens could be the explanation for early origin dates rather than loud expansionist ones.

When considering such alternative explanations, it becomes clear that the claim that loud aliens explain our seemingly early position in time is just one among many hypotheses, and it is quite debatable whether it is the most plausible or robust one.

David Pearce and the abolitionist project

David Pearce is another thinker who has many big and profound ideas. By far the biggest of these ideas is that we should use biotechnology to abolish suffering throughout the living world, what he calls the abolitionist project. This is an idea that I strongly support in principle. Yet where I would disagree with Pearce, and where it seems to me that he is overconfident, is when it comes to the question of whether pushing for the abolitionist project is the best use of marginal resources for those seeking to reduce suffering.

Specifically, when we consider the risk of worst-case outcomes due to bad values and political dynamics, it seems likely that other aims are more pressing, such as increasing the priority that humanity devotes to the reduction of suffering, as well as improving our institutions such that they are less prone to worst-case outcomes (see also Tomasik, 2016; Vinding, 2020, ch. 13; 2021; 2022). At the very least, it seems that there is considerable uncertainty as to which specific priorities are most helpful for reducing suffering.

Other examples

Some other examples of thinkers who appear to overstate their big ideas include Bryan Caplan and Jason Brennan with their strong claims against democracy (see e.g. Farrell et al., 2022), as well as Paul Bloom when he makes strong claims against the utility of emotional empathy (see e.g. Christov-Moore & Iacoboni, 2014; Ashar et al., 2017; Barish, 2023).

Indeed, Bloom’s widely publicized case against empathy is a good example of how the problem is not confined to just a single individual who overstates their own big idea, as there is also often a tendency among publishers and the media to push strong and dramatic claims that grab people’s attention. This can serve as yet another force that pushes us toward hearing strong claims and simple narratives, and away from getting sober and accurate perspectives, which are often more complex and nuanced. (For example, contrast Bloom’s case against empathy with the more complex perspective that emerges in Ashar et al., 2017.)

Concluding note: The deeper point applies to all of us

Both for promoters and consumers of ideas, it is worth being wary of the tendency to become unduly attached to any single idea or perspective (i.e. attached based on insufficient reasons or evidence). Such attachment can skew our interpretations and ultimately get in the way of a commitment to form more complete and informed perspectives on important issues.


Comments (11)

This is a good point. The two other examples which seem salient to me:

  1. Deutsch's brand of techno-optimism (which comes through particularly clearly when he tries to reason about the future of AI by saying things like "AIs will be people, therefore...").
  2. Yudkowsky on misalignment.

I'm surprised that you didn't put down signaling as Robin Hanson's "big idea".

Yeah, it would make sense to include it. :) As I wrote, "Robin Hanson has many big ideas", and since the previous section was already about signaling and status, I just mentioned some other examples here instead. Prediction markets could have been another one (though it's included in futarchy).

Hi Chris,

For reference, I think you are referring to the ideas expressed in the book The Elephant in the Brain. Maybe they do not qualify as "big ideas" because they had already been widely discussed before the book.

I took "big idea" to mean an idea that is a) key to someone's worldview and b) somewhat particular to them, in that they make use of it to an unusual degree, even if it isn't entirely original to them.

One aspect is that we might expect people who believe unusually strongly in an idea to be more likely to publish on it (winner's curse/unilateralist's curse).

"Hanson et al.’s basic model might be wrong in that expansionist alien civilizations could generally converge to be quiet, in the sense of not being clearly visible; or at least some fraction of expansionist civilizations could be quiet (both possibilities are excluded by Hanson et al.’s model)."

A large part of our main paper considers the ratio of quiet to loud aliens, and allows that ratio to be very large. Thus it is not at all true that we ignore the possibility of many quiet civs. We also explicitly consider the possibility that we haven't yet noticed large expansive civs, and we calculate the angle lengths of borders we might see in the sky in that case.

A similar critique has been made in Friederich & Wenmackers' article "The future of intelligence in the Universe: A call for humility", specifically in the section "Why FAST and UNDYING civilizations may not be LOUD".

"Thus it is not at all true that we ignore the possibility of many quiet civs."

But that's not the claim of the quoted text, which is explicitly about quiet expansionist aliens (e.g. expanding as far and wide as loud expansionist ones). The model does seem to ignore those (and such quiet expansionists might have no borders detectable by us).
