
Epistemic status: speaking for myself and hoping it generalises

I don't like everyone that I'm supposed to like:

  • I've long thought that [redacted] was focused on all the wrong framings of the issues they discuss,
  • [redacted] is on the wrong side of their disagreement with [redacted] and often seems to have kind of sloppy thinking about things like this,
  • [redacted] says many sensible things but has a writing style that I find intensely irritating and struggle to get through; [redacted] is similar, but not as sensible,
  • [redacted] is working on an important problem, but doing a kind of mediocre job of it, which might be crowding out better efforts.

Why did I redact all those names? Well, my criticisms are often some mixture of:

  • half-baked; I don't have time to evaluate everyone fairly and deeply, and don't need to in order to make choices about what to focus on,
  • based on justifications that are not very legible or easy to communicate,
  • not always totally central to their point or fatal to their work,
  • kind of upsetting or discouraging to hear,
  • often not that actionable.

I want to highlight that criticisms like this will usually not surface, and while in individual instances this is sensible, in aggregate it may paint a misleading picture of how we regard our celebrities and leaders. We end up seeming more deferential and hero-worshipping than we really are. This is bad for two reasons:

  • it harms our credibility in the eyes of outsiders (or insiders, even) who have negative views of those people,
  • it projects the wrong expectation to newcomers who trust us and want to learn or adopt our attitudes.

What to do about it?

I think "just criticise people more" in isolation is not a good solution. People, even respected people in positions of leadership, often seem to already find posting on the Forum a stressful experience, and I think tipping that balance in the more brutal direction seems likely to cost more than it gains.

I think you could imagine major cultural changes around how people give and receive feedback that could make this better, mitigate catastrophising about negative feedback, and ensure people feel safe to risk making mistakes or exposing their oversights. But those seem to me like heavy, ambitious pieces of cultural engineering that require a lot of buy-in to get going, and even if successful may incur ongoing frictional costs. Here are smaller, simpler things that could help:

  • Write a forum post about it (this one's taken, sorry),
  • Make disagreements more visible and more legible, especially among leaders or experts. I really enjoyed the debate between Will MacAskill and Toby Ord in the comments of Are we living at the most influential time in history? – you can't come away from that discussion thinking "oh, whatever the smart, respected people in EA think must be right", because either way at least one of them will disagree with you!
    • There's a lot of disagreement on the Forum all the time, of course, but I have a (somewhat unfair) vibe that the famous people deposit their work into the forum and leave for higher pursuits, and then we in the peanut gallery argue over it.
    • I'd love it if there were (say) a document out there that Redwood Research and Anthropic both endorsed, that described how their agendas differ and what underlying disagreements lead to those differences.
  • Make sure people incoming to the community, or at the periphery of the community, are inoculated against this bias, if you spot it. Point out that people usually have a mix of good and bad ideas. Have some go-to examples of respected people's blind spots or mistakes, at least as they appear to you. (Even if you never end up explaining them to anyone, it's probably good to have these for your own sake.)

As is often the case, though, I feel more convinced of my description of the problem than my proposals to address it. Interested to hear others' thoughts.

Comments (12)

We end up seeming more deferential and hero-worshipping than we really are.

I feel like this post is missing something. I would expect one of the strongest predictors of the aforementioned behaviors to be age. Are there any people in their thirties you know who are prone to hero-worshipping?

I don’t consider hero-worshipping an EA problem as such, but a young people problem. Of course EA is full of young people!

Make sure people incoming to the community, or at the periphery of the community, are inoculated against this bias, if you spot it. Point out that people usually have a mix of good and bad ideas. Have some go-to examples of respected people's blind spots or mistakes, at least as they appear to you.

This seems like good advice to me, but I expect it to work better if you're aware that the reason you need to talk about these things with someone is that they're young.

This is a great point. I also think there's a further effect, which is that older EAs were around when the current "heroes" were much less impressive university students or similar, which I think leads to a much less idealising frame towards them.

But I can definitely see that if you yourself are young and you enter a movement with all these older, established, impressive people... hero-worshipping is much more tempting.

Michael – interesting point. EA is a very unusual movement in that the founders (Will MacAskill, Toby Ord, etc.) were very young when they launched the movement, and are still only in their mid-30s to early 40s. They got some guidance & inspiration from older philosophers (e.g. Derek Parfit, Peter Singer), but mostly they recruited people even younger than them into the movement ... and then eventually some older folks like me joined as well.

So, EA's demographics are quite youth-heavy, but there's also much less correlation between age and prestige in EA than in most moral/activist movements.

Hmm I find the correlation plausible but I'm not sure I'm moved to act differently by it. I wouldn't guess it's a strong enough effect that all young people need this conversation or all older people don't, so I'm still going to focus on what people say to judge whether they are making this mistake or not.

Also, to the extent that we're worried that the illusion of consensus harms our credibility, that's going to be more of a problem with older people, I expect.

I have a (somewhat unfair) vibe that the famous people deposit their work into the forum and leave for higher pursuits

I do think there's a big difference in how much various high-status people engage on the forum. And I think that the people who do engage feel like they're more "part of" the community... or at least that small part of it that actually uses the forum! It also gives them more opportunity to say stupid things and get downvoted, very humanising!

(An example that comes to mind is Oliver Habryka, who comments a lot on posts, not just his own, so feels quite present.)

But I definitely don't think there should be a norm that you should engage on the forum. It seems pretty unlikely to be the best use of your time, and can be a big time sink. Maybe you've got some actually useful object-level work to do! Then posting and leaving is probably the right choice. So I don't think that's a good solution, although I think it could be somewhat effective if people did it.

Yeah, I agree that for many people not engaging is the right choice. I don't intend to suggest that all or even most technical debates or philosophical discussions happen here, just that keeping a sprinkling of them here helps give a more accurate impression of how these things evolve.

I agree with you that people should be much more willing to disagree, and we need to foster a culture that encourages this. No disagreement is a sign of insufficient debate, not a well-mapped landscape. That said, I think EAs in general should think way less about who said what and focus much more on whether the arguments themselves hold water.

I find it striking that all the examples in the post are about some redacted entity, when all of them could just as well have been rephrased to be about object-level reality itself. For example:

[redacted] is on the wrong side of their disagreement with [redacted] and often seems to have kind of sloppy thinking about things like this,

could, to me, be rephrased as:

Why I believe <stance on topic> is incorrect.

To me it seems that just having the debate on <topic> is more interesting than the meta debate of <is org's thinking on topic sloppy>. Thinking a lot about the views of specific persons or organizations has its time and place, but the right split of thinking about reality versus social reality is probably closer to 90/10 than 10/90.

I think it's not primarily a question of how much to disagree – as I said, we see plenty of disagreement every day on the forum. The issue I'm trying to address is:

  • with whom we disagree,
  • how visible those disagreements are,

and particularly I'm trying to highlight that many internal disagreements will not be made public. The main epistemic benefit of disagreement is there even in private, but there's a secondary benefit which needs the disagreement to be public, and that's the one I'm trying to address.

To me it seems that just having the debate on <topic> is more interesting than the meta debate of <is org's thinking on topic sloppy>.

The necessity of thinking about the second question is clearest when deciding who to fund, who to work for, who to hire, etc.

Makes sense, agree completely.

I appreciate this point, but personally I am probably more like 70/30 for general thinking, with variance depending on the topic. So much of thinking about the world is trust-based. My views on historical explanations virtually never depend on my reading of primary documents – they depend on my assessment of what the proportional consensus of expert historians thinks. Same with economics, or physics, or lots of things.

When I'm dealing directly with an issue, like biosecurity, it makes sense to have a higher split – 80/20 or 90/10 – but it's still much easier to navigate if you know the landscape of views. For something like AI, I just don't trust my own take on many arguments – I really rely a lot on the different communities of AI experts (such as they are).

I think most people most of the time don't know enough about an issue to justify a 90/10 split in issue vs. view thinking. However, I should note that all this is about the right split of personal attention; for public debate, I can understand wanting a greater focus on the object level (because the view level should hopefully be served well by good object-level work anyway).

I don't like everyone that I'm supposed to like:

There is nobody you, or any other EA, are 'supposed' to like. Apologies if I'm over-interpreting what's meant to be a turn of phrase, but I really want to push back on the idea that to be an EA you have to like specific people in the movement. Our movement is about the ideas it generates, not the people who generate them.[1] This is not to say that you can't admire or like certain people, I definitely do! But liking them is not a requirement in any sense to be a part of the movement, or at least it shouldn't be.

Make disagreements more visible and more legible, especially among leaders or experts.

I definitely agree with this. It's prima facie obvious that senior EAs won't align 100% with each other on every philosophical issue, and that's ok. I think the Redwood/Anthropic idea is another good one. In general I think adversarial collaborations might be a good route to pursue in some cases – I know not everyone in the community is a fan of them but I feel like at the margin the Forum may benefit from a few more of them.

I also co-sign Mathias's post, that many of the [redacted] claims could probably be re-framed as object-level concerns. But I also don't think you should be shy of saying you disagree with a point of view, and disagree with a high-level EA who holds that view, as long as you do so in good faith. A case in point: one of my favourite 80k podcast episodes is the blockbuster one with David Chalmers, but listening to the 'Vulcan Trolley Problem' section I came away with the impression that David was spot on, and Rob's point of view (both Wiblin in this one, and Long in the recent one) wasn't that tenable in comparison. But my disagreement there is really an object-level one, and it doesn't make me appreciate the podcast any less.

  1. ^

    Clarification: obviously people matter! I mean this in the sense that anyone should be able to come up with good ideas in EA, regardless of background, hierarchy, seniority etc.

Nobody says you should like everyone. No one says you should agree with everyone either, even those who are high-profile in the community. 

It sounds to me like this boils down to 'beware of logical fallacies', especially ad hominem. Don't criticize people; criticize ideas. Here are a couple of tactical things that have helped me:

  • Pretend that a different person (e.g. one for whom you have positive regard) was making the same point.
  • Distill their main points down to neutral-tone language (I haven't tried it, but ChatGPT or other LLMs might be a good tool for this; see the sketch after this list).
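For what it's worth, here's a minimal, untested sketch of what that second tactic could look like, assuming the OpenAI Python SDK with an API key in the environment (the model name, prompt wording, and the neutralise helper are all illustrative placeholders, not part of the original suggestion):

```python
# Hypothetical sketch: ask an LLM to restate an argument in neutral,
# depersonalised language so the claims can be judged on their own merits.
# Assumes the official OpenAI Python SDK and an OPENAI_API_KEY env var.
from openai import OpenAI

client = OpenAI()

def neutralise(argument: str) -> str:
    """Strip tone and attribution from an argument, keeping only the claims."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any capable chat model would do
        messages=[
            {
                "role": "system",
                "content": (
                    "Rewrite the following argument as a neutral, numbered "
                    "list of claims. Remove rhetoric, emotional tone, and "
                    "any reference to who is making the argument."
                ),
            },
            {"role": "user", "content": argument},
        ],
    )
    return response.choices[0].message.content
```

You'd then evaluate the neutral restatement on its merits, rather than reacting to the original author's tone or identity.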

Regarding the complaints you listed (wrong focus, sloppy thinking, irritating tone, mediocre performance): I'm a big fan of leadership by filling the vacuum. If you see something that could be improved and no one else is stepping up, maybe it's time for you to step up and take a stab at it. It might not be perfect, but it will be better than doing nothing about it.
