Epistemic status: uncertain. I removed most weasel words for clarity, but that doesn't mean I'm very confident. Don't take this too literally. I'd love to see a cluster analysis to check whether there's actually a trend; this is a rough guess.[1] I'm interested in feedback: whether this matches the intuitions of others reading it, and whether there are important aspects I've missed or really messed up.
I separate a lot of interesting intellectuals into disagreeables and assessors.[2]
Disagreeables are highly disagreeable. They're quick to call out bullshit and are excellent at coming up with innovative ideas. Unfortunately, they produce a whole lot of false positives; they're pretty overconfident and wrong a great deal of the time. They're "idea detectors" with the sensitivity turned up to 11.
Disagreeables often either work alone or on top of a structure that believes (almost) everything they say.
Assessors are well-calibrated. If/when they participate in forecasting tournaments, they do really well. Assessors don't try to come up with many brilliant new ideas, and they don't spend much effort questioning the group's most deeply held assumptions, but they're awfully good at not being wrong. When they say X is true, X is indeed very likely to be true.
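(To make "well-calibrated" concrete: forecasting tournaments typically score people with something like the Brier score, the mean squared error between stated probabilities and actual outcomes. Here's a minimal sketch in Python; the forecasts and outcomes are made-up numbers, purely for illustration.)

```python
def brier_score(probs, outcomes):
    """Mean squared error between forecasts (in [0, 1]) and binary outcomes.

    Lower is better; always guessing 50% scores 0.25.
    """
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(outcomes)

# A calibrated assessor: confident mostly when right.
print(brier_score([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]))     # 0.025
# An overconfident disagreeable: bold calls, often wrong.
print(brier_score([0.99, 0.95, 0.9, 0.05], [1, 0, 0, 1]))  # ~0.65
```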
Both disagreeables and assessors are frustrated by mainstream epistemics, but for different reasons.
Disagreeables tend to make dramatic statements like:
- “There’s a big conspiracy that everyone is in on”
- “Everyone is just signaling all the time, they’re not even trying to be right”
- “This organization is a huge Ponzi scheme”
Assessors make calmer clarifications like:
- “Yes, disagreeable person said X is a huge Ponzi scheme. There’s some truth there, but it’s a big oversimplification”
- “I’m quite sure that Y’s paper is unlikely to replicate, after closely looking at the related papers”
If the emperor really has no clothes, disagreeables would be the first to (loudly) call that out. If the emperor has a large set of broadly reasonable policies, but a few that are subtly mistaken, assessors would be the better fit to diligently identify and explain them.
Some disagreeables include Socrates, Nietzsche, Wittgenstein, Nassim Taleb, Robin Hanson, early Steve Jobs, other tech founders, angry public figures, and professional gurus of all kinds.
Assessors include David Hume[3], Bertrand Russell, Robert Caro, Scott Alexander, Superforecasters, some of the later Steve Jobs, good CEOs, and many not-particularly-angry politicians (Eisenhower/Clinton/Obama come to mind).
To the disagreeables, assessors seem like boring blankfaces and bureaucrats who maintain the suffocating status quo. To assessors, disagreeables often seem like reckless cult leaders who go around creating epistemic disarray.
Disagreeables value boldness and novelty: being the most interesting person in the room, making a big statement. Assessors value nuance, clarity, and discernment: getting things really right, even if the truth is tedious or boring.
I feel like Rationalists lean disagreeable, and Effective Altruists lean assessor.
The ideal is a combination of both: have disagreeables come up with ideas, and have assessors assess them. But this is really hard to do!
Disagreeables normally don't announce that they are disagreeable; they often have compelling-sounding arguments for why everyone else is wrong (including all the assessors). Disagreeables often really, truly, and absolutely believe their inaccuracies. Meanwhile, assessors can be very soft-spoken and boring.[4] Great assessors who are unsure about X, even after a lot of research, can seem a whole lot like regular people who are unsure about X.
I wish we lived in a world where the people with great ideas were also all well-calibrated. But we don't. As such, I can rarely recommend interesting books without reservation; I need to condition my reviews:
"These books are very interesting, but I'd only recommend them if you're already familiar with the topic and surrounding debate, otherwise they might cause you more harm than good."
Or, with people:
"These people have the cool ideas, but you can't take them too seriously. You instead have to wait for these other people to review the ideas, but honestly, you’ll likely be waiting a while."
Summary Table
|  | Disagreeables | Assessors |
| --- | --- | --- |
| Goal | Innovation | Not being wrong |
| Traits | Disagreeable, innovative, interesting, individualistic, unreasonable, (occasionally) angry | Calibrated, nuanced, clear, strong discernment, reasonable, calm |
| Examples | Socrates, Nietzsche, Wittgenstein, Nassim Taleb, Robin Hanson, early Steve Jobs, other tech founders, angry public figures, professional gurus of all kinds | David Hume[3], Bertrand Russell, Robert Caro, Scott Alexander, Superforecasters, some of the later Steve Jobs, good CEOs, many not-particularly-angry politicians (Eisenhower/Clinton/Obama) |
| Example Quotes | “There’s a big conspiracy that everyone is in on” <br> “Everyone is just signaling all the time, they’re not even trying to be right” <br> “This organization is a huge Ponzi scheme” | “Yes, disagreeable person said X is a huge Ponzi scheme. There’s some truth there, but it’s a big oversimplification” <br> “I’m quite sure that Y’s paper is unlikely to replicate, after closely looking at the related papers” |
| Failure modes | Wild overconfidence, convincing the public that personal “pet theories” are either widely accepted or self-evident | Too quiet to draw attention, focusing on being accurate about things that don’t even matter much |
| Great for | Idea generation, calling out huge mistakes, big simplifications (when justified), tackling hard problems in areas with stigmas | Prioritization, filtering the ideas of disagreeables, catching many moderate-sized mistakes |
[1] This work has flavors of personality profiling tools like the Enneagram and Myers-Briggs. If you hate those things, you should probably be suspicious of this.
[2] These aren't all the types, but they're the main ones I think of. Another interesting type is "bulldogs", who dogmatically champion one or two ideas over several decades. Arguably "philosopher king/queen/ruler" is a good archetype, though it overlaps heavily with disagreeables and assessors.
[3] I'm not really sure about Hume; this is just my impression of him from several summaries.
[4] See this set of interviews of superforecasters for an idea. I think these people are interesting, but I could easily imagine overlooking them if I just heard them speak for a short period.
Disclaimer: I have disagreeable tendencies; I'm working on it, but I'm biased. I think you're getting at something useful, even if most people are somewhere in the middle. I think we should care most about the outliers on both sides, because they could be extremely powerful when working together.
I want to add some **speculations** on these roles in the context of the level at which we're trying to achieve something: individual or collective.
When no single agent can understand reality well enough to be a good principal, it seems most beneficial for the collective to consist of modestly polarized agents (this seems true from most of the literature on group decision-making and policy processes, e.g. “Adaptive Rationality, Garbage Cans, and the Policy Process” from Emerald Insight).
This means that the EA network should want people who are confident enough in their own worldviews to explore them properly, who are happy to generate new ideas through epistemic trespassing, and who will explore outside the Overton window. Unless your social environment productively reframes what is currently perceived as "failure", overconfidence seems basically required to keep going as a disagreeable.
By nature, overconfidence gets punished in communities that value calibration and clear metrics of success. Disagreeables turn poisonous as they feel misunderstood, and good assessors become increasingly conservative. The successful members of each type then build up separate communities in which they are high status, and those communities extremize one another.
To succeed altogether, we need to walk the very fine line between productive epistemic trespassing and conserving what we have.
Disagreeables can quickly lose status with assessors because they seem insufficiently epistemically humble or outright nuts. Making your case against a local consensus costs you points. Not being well calibrated on what reality looks like costs you points.
If we are in a sub-optimal reality, however, effort needs to be put into defying the odds and changing reality. To have the chutzpah to change a system, it helps to ignore parts of reality at times. It helps to believe that you can have sufficient power to change it. If you're convinced enough of those beliefs, they often confer power on you in and of themselves.
Incrementally assessing the baseline and then betting on the most plausible outcomes also deepens the tracks we find ourselves on. It is the safe thing to do, and it stabilizes society. Stability is needed if you want to make sure coordination happens. Thus, assessors rightly gain status for predicting correctly. Yet they also reinforce existing narratives and create consensus about what the future could be like.
Consensus about the median outcome can make it harder to break out of existing dynamics, because the barrier to coordinating such a break-out is even higher when everyone knows the expected outcome (e.g. that the odds of major change succeeding are low).
In a world where ground truth doesn't matter much, the power of disagreeables is to create a mob that isn't anchored in reality but that achieves the coordination to break out of local realities.
Unfortunately, for those of us who lack the individual capability to achieve these aims - to change not just our local social reality but the human condition - creating a cult just isn't helpful. None of us has sufficient data or compute to do it alone.
To achieve our mission, we will need constant error correction. Plus, the universe is so large that information won't always travel fast enough, even if there were a sufficiently swift processor. So we need to compute decentrally and somehow still coordinate.
It seems hard for single brains to be both explorers and stabilizers simultaneously, however. So as a collective, we need to appropriately value both and insure one another. Maybe we can help each other switch roles to make it easier to understand both. Instead of drawing conclusions for action at our individual levels, we need to aggregate our insights and decide on action as a collective.
As of right now, only very high-status or privileged people really say what they think, and most others defer to the authorities to ensure their social survival. At an individual level, that's the right thing to do. But as a collective, we would all benefit if we enabled more value-aligned people to explore, fail, and yet survive comfortably enough to feed their learnings back into the collective.
This is of course not just a question of norms, but also a question of infrastructure and psychology.
Thanks for the comment (this could be its own post). This is a lot to get through, so I'll comment on some aspects.
I have some too! I think there are times when I'm fairly sure my intuitions lean overconfident in a research project (due to selection effects, at least), but it doesn't seem worth debiasing, because I'm going to be doing it for a while no matter what, and not writing about its prioritization. I feel like I'm not a great example of a disagreeable or an assessor, but I sometimes can lean ...