Wait, what? Why a post now about the EA community's initial response to covid-19, based on an 80,000 Hours Podcast interview from April 2020? That will become clearer as you read on. There is a bigger lesson here.
Context: Gregory Lewis is a biorisk researcher, formerly at the Future of Humanity Institute at Oxford, with a background in medicine and public health. He describes himself as "heavily involved in Effective Altruism". (He's not a stranger here: his EA Forum account was created in 2014 and he has 21 posts and 6000 karma.)
The interview
Lewis was interviewed on the 80,000 Hours Podcast in an episode released on April 17, 2020. Lewis has some harsh words for how the EA community initially responded to the pandemic.
But first, he starts off with a compliment:
If we were to give a fair accounting of all EA has done in and around this pandemic, I think this would overall end up reasonably strongly to its credit. For a few reasons. The first is that a lot of EAs I know were, excuse the term, comfortably ahead of the curve compared to most other people, especially most non-experts in recognizing this at the time: that emerging infectious disease could be a major threat to people’s health worldwide. And insofar as their responses to this were typically either going above and beyond in terms of being good citizens or trying to raise the alarm, these seem like all prosocial, good citizen things which reflect well on the community as a whole.
He also pays a compliment to a few people in the EA community who have brainstormed interesting ideas about how to respond to the pandemic and who (as of April 2020) were working on some interesting projects. But he continues (my emphasis added):
But unfortunately I’ve got more to say.
So, putting things politely, a lot of the EA discussion, activity, whatever you want to call it, has been shrouded in this miasma of obnoxious stupidity, and it’s been sufficiently aggravating for someone like me. I sort of want to consider whether I can start calling myself EA adjacent rather than EA, or find some way of distancing myself from the community as a whole. Now the thing I want to stress before I go on to explain why I feel this way is that unfortunately I’m not alone in having these sorts of reactions.
... But at least I have a few people who talk to me now, who, similar to me, have relevant knowledge, background and skills. And also, similar to me, have found this community so infuriating they need to take a break from their social media or want to rage quit the community as a whole. ... So I think there’s just a pattern whereby discussion around this has been very repulsive to people who know a lot about the subject, which is, I think, a cause for grave concern.
That EA's approval rating seems to fall dramatically with increasing knowledge is not the pattern you typically take as a good sign from the outside view.
Lewis elaborates (my emphasis added again):
And this general sense of just playing very fast and loose is pretty frustrating. I have experienced a few times of someone recommending X, then I go into the literature, find it’s not a very good idea, then I briefly comment going, “Hey, this thing here, that seems to be mostly ignored”, then I get some pretty facile reply and I give up and go home. And that’s happened to other people as well. So I guess given all these things, it seems like bits of the EA response were somewhat less than optimal.
And I think the ways it could have been improved were mostly in the modesty direction. So, for example, I think several EAs have independently discovered for themselves things like right censoring or imperfect ascertainment or other bits of epidemiology which inform how you, for example, assess the case fatality ratio. And that’s great, but all of that was in most textbooks and maybe it’d have saved time had those been consulted first rather than doing something else instead.
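As an aside (mine, not Lewis'), a minimal simulation makes the right-censoring point concrete: during a growing epidemic, the naive case fatality ratio (deaths so far divided by confirmed cases so far) is biased low, because many recent cases haven't yet had time to resolve either way. The sketch below is purely illustrative; the true CFR, the delay distribution, and the growth rate are invented numbers, not estimates for covid-19.

```python
# A minimal illustration (not from the interview) of right-censoring bias
# in naive CFR estimates during exponential growth. All numbers are made up.
import numpy as np

rng = np.random.default_rng(0)

true_cfr = 0.02          # assumed true probability a confirmed case dies
mean_delay = 14          # assumed mean days from confirmation to death
growth_rate = 0.15       # assumed exponential growth rate of daily cases
days = 60                # observation window

# Daily confirmed cases growing exponentially.
daily_cases = np.round(10 * np.exp(growth_rate * np.arange(days))).astype(int)

deaths_by_day = np.zeros(days)
for day, n_cases in enumerate(daily_cases):
    n_deaths = rng.binomial(n_cases, true_cfr)
    # Each death occurs after a random confirmation-to-death delay;
    # deaths falling outside the window are never observed.
    for d in rng.poisson(mean_delay, n_deaths):
        if day + d < days:
            deaths_by_day[day + d] += 1

naive_cfr = deaths_by_day.sum() / daily_cases.sum()
print(f"true CFR:  {true_cfr:.3f}")
print(f"naive CFR: {naive_cfr:.3f}  # biased low: recent cases unresolved")
```

In this toy setup the naive estimate comes out several times lower than the true value, which is exactly the kind of textbook correction Lewis says people were rediscovering from scratch.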
More on consulting textbooks:
But typically for most fields of human endeavor, we have a reasonably good way which is probably reasonably efficient in terms of picking up the relevant level of knowledge and expertise. Now, it’s less efficient if you just target it, if you know in advance what you want to know ahead. But unfortunately, this area tends to be one where it’s a background tacit knowledge thing. It’s hard to, as it were, rapier-like just stab all the things, in particular, facts you need. And if you miss some then it can be a bit tricky in terms of having good ideas thereafter.
What's worse than inefficiency:
The other problems are people often just having some fairly bad takes on lots of things. And it’s not always bad in terms of getting the wrong answer. I think some of the interventions do seem pretty ill-advised and could be known to be ill-advised if one had maybe done one’s homework slightly better. These are complicated topics generally: something you thought about for 30 minutes and wrote a Medium post about may not actually be really hitting the cutting edge.
An example of a bad take:
So I think President Trump at the moment is suggesting that, as it were, the cure is worse than the disease with respect to suppression. ... But suppose we’re clairvoyant and we see in two years’ time, we actually see that was right. ... I think very few people would be willing to, well, maybe a few people listening to this podcast can give Trump a lot of credit for calling it well. Because they would probably say, “Well yeah, maybe that was the right decision but he chose it for the wrong reasons or the wrong epistemic qualities”. And I sort of feel like a similar thing sort of often applies here.
So, for example, a lot of EAs are very happy to castigate the UK government when it was more going for mitigation rather than suppression, but for reasons why, just didn’t seem to indicate they really attended to any of the relevant issues which you want to be wrestling with. And see that they got it right, but they got it right in the way that stopped clocks are right if you look at them at the right time of day. I think it’s more like an adverse rather than a positive indicator. So that’s the second thing.
On bad epistemic norms:
And the third thing is when you don’t have much knowledge of your, perhaps, limitations and you’re willing to confidently pronounce on various things. This is, I think, somewhat annoying for people like me who maybe know slightly more as I’m probably expressing from the last five minutes of ranting at you. But moreover, it doesn’t necessarily set a good model for the rest of the EA community either. Because things I thought we were about were things like, it’s really important to think things through very carefully before doing things. A lot of your actions can have unforeseen consequences. You should really carefully weigh things up and try and make sure you understand all the relevant information before making a recommendation or making a decision.
And it still feels we’re not really doing that as much as we should be. And I was sort of hoping that EA, in an environment where there’s a lot of misinformation, lots of outrage on various social media outlets, there’s also castigation of various figures, I was hoping EA could strike a different tone from all of this and be more measured, more careful and just more better I guess, roughly speaking.
More on EA criticism of the UK government:
Well, I think this is twofold. So one is, if you look at SAGE, which is the Scientific Advisory Group for Emergencies, who released what they had two weeks ago in terms of advice that they were giving the government, which is well worth a read. And my reading of it was essentially they were essentially weeks ahead of EA discourse in terms of all the considerations they should be weighing up. So obviously being worse than the expert group tasked to manage this is not a huge rap in terms of, “Well you’re doing worse than the leading experts in the country.” That’s fair enough. But they’re still overconfident in like, “Oh, don’t you guys realize that people might die if hospital services get overwhelmed, therefore your policy is wrong.” It seems like just a very facile way of looking at it.
But maybe the thing is first like, not having a very good view. The second would be being way too overconfident that you actually knew the right answer and they didn’t. So much that you’re willing to offer a diagnosis, for example, “Maybe the Chief Medical Officer doesn’t understand how case ascertainment works or something”. And it’s like this guy was a professor of public health in a past life. I think he probably has got that memo by now. And so on and so forth.
On cloth masks:
I think also the sort of ideas which I’ve seen thrown around are at least pretty dicey. So one, in particular, is the use of cloth masks; we should all be making cloth masks and wearing them.
And I’m not sure that’s false. I know the received view in EA land is that medical masks are pretty good for the general population which I’ll just about lean in favor of, although all of these things are uncertain. But cloth masks seem particularly risky insofar as if people aren’t sterilizing them regularly which you expect they won’t: a common thing about the public that you care about is actual use rather than perfect use. And you have this moist cloth pad which you repeatedly contaminate and apply to your face which may in fact increase your risk and may in fact even increase the risk of transmission. It’s mostly based on contact rather than based on direct droplet spreads. And now it’s not like lots of people were touting this. But lots on Twitter were saying this. They cite all the things. They seem not to highlight the RCT which cluster-randomized healthcare workers to medical masks, control, and cloth masks, and found cloth masks did worse than the control. Then you would point out, per protocol, that most people in the control arm were using medical masks anyway or many of them were, so it’s hard to tell whether cloth masks were bad or medical masks were good. But it’s enough to cause concern. People who write the reviews on this are also similarly circumspect and I think they’ve actually read the literature where I think most of the EAs confidently pronouncing it’s a good idea generally haven’t. So there’s this general risk of having risky policy proposals which you could derisk, in expectation, by a lot, by carefully, as it were, checking the tape.
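To see why the contaminated control arm matters, here is a toy numeric model (mine, not Lewis'): if a large fraction of the control arm wears medical masks anyway, cloth masks can "do worse than control" even if cloth is exactly as good as wearing nothing. Every rate and fraction below is invented for illustration.

```python
# A toy model of the confound Lewis describes in the cloth-mask RCT.
# All parameter values are invented, not estimates from the trial.

baseline_risk = 0.10          # assumed infection risk with no mask
medical_mask_effect = 0.5     # assumed relative risk with a medical mask
cloth_mask_effect = 1.0       # suppose cloth is exactly as good as nothing

# Suppose 60% of the "control" arm wore medical masks anyway.
control_medical_uptake = 0.6

cloth_arm_risk = baseline_risk * cloth_mask_effect
control_arm_risk = (control_medical_uptake * baseline_risk * medical_mask_effect
                    + (1 - control_medical_uptake) * baseline_risk)

print(f"cloth arm risk:   {cloth_arm_risk:.3f}")    # 0.100
print(f"control arm risk: {control_arm_risk:.3f}")  # 0.070
# Cloth "does worse than control" here without cloth masks causing any harm:
# the comparison is contaminated by medical-mask use in the control arm.
```

The point is not that cloth masks are harmless; it is that the headline comparison alone cannot distinguish "cloth is bad" from "medical is good", which is precisely the circumspection Lewis says the reviews show and the confident Twitter takes lacked.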
More on cloth masks:
And I still think if you’re going to do this, or you’re going to make your recommendations based on expectation, you should be checking very carefully to make sure your expectation is as accurate as it could be, especially if there’s like a credible risk of causing harm and that’s hard to do for anyone, for anything. I mean cf. the history of GiveWell, for example, amongst all its careful evaluation. And we’re sort of at the other end of the scale here. And I think that could be improved. If it was someone like, “Oh, I did my assessment review of mask use and here’s my interpretation. I talked to these authors about these things or whatever else”, then I’d be more inclined to be happy. But where there’s dozens of ideas being pinged around… Many of them are at least dubious, if not downright worrying, then I’m not sure I’m seeing really EA live out its values and be a beacon of light in the darkness of irrationality.
Lewis' concrete recommendations for EA:
The direction I would be keen for EAs to go in is essentially paying closer attention to available evidence such as it is. And there are some things out there which can often be looked at or looked up, or existing knowledge one can get better acquainted with to help inform what you think might be good or bad ideas. And I think, also, maybe there’s a possibility that places like 80K could have a comparative advantage in terms of elicitation or distillation of this in a fast moving environment, but maybe it’s better done by, as it were, relaying what people who do this all day long, and who have a relevant background, are saying about this.
So yeah, maybe Marc Lipsitch wants to come on the 80K podcast, maybe someone like Adam Kucharski would like to come on. Or like Rosalind Eggo or other people like this. Maybe they’d welcome a chance of being able to set the record straight given like two hours to talk about their thing rather than like a 15 minute media segment. And it seems like that might be a better way of generally improving the epistemic waterline of EA discussions, rather than lots of people pandemic blogging, roughly speaking, and a very rapid, high turnaround. By necessity, there’s like limited time to gather relevant facts and information.
More on EA setting a bad example:
...one of the things I’m worried about, it’s like a lot of people are going to look at COVID-19, start wanting to get involved in GCBRs. And sort of want all these people to be cautious, circumspect, lots of discretion and stuff like that. I don’t think 80K’s activity on this has really modeled a lot of that to them. Rob [Wiblin], in particular, but not alone. So having a pile of that does not fill me with great amounts of joy or anticipation but rather some degree of worry.
I think that does actually apply even in first order terms to the COVID-19 pandemic, where I can imagine a slightly more circumspect or cautious version of 80K, or 80K staff or whatever, would have perhaps had maybe less activity on COVID, but maybe slightly higher quality activity on COVID and that might’ve been better.
On epistemic caution:
I mean people like me are very hesitant to talk very much on COVID for fear of being wrong or making mistakes. And I think that fear should be more widespread and maybe more severe for folks who don’t have the relevant background who’re trying to navigate the issue as well.
The lesson
Lewis twice mentions an EA Forum post he wrote about epistemic modesty, which sounds like it would be a relevant read here. I haven't read the whole thing yet, but I adore this bon mot in the section "Rationalist/EA exceptionalism":
Our collective ego is writing checks our epistemic performance (or, in candour, performance generally) cannot cash; general ignorance, rather than particular knowledge, may explain our self-regard.
Another bon mot a little further down, which is music to my weary ears:
If the EA and rationalist communities comprised a bunch of highly overconfident and eccentric people buzzing around bumping their pet theories together, I may worry about overall judgement and how much novel work gets done, but I would grant this at least looks like fertile ground for new ideas to be developed.
Alas, not so much. What occurs instead is agreement approaching fawning obeisance to a small set of people the community anoints as ‘thought leaders’, and so centralizing on one particular eccentric and overconfident view. So although we may preach immodesty on behalf of the wider community, our practice within it is much more deferential.
This is so brilliantly written, and so tightly compressed, I nearly despair, because I fear my efforts to articulate similar ideas will never approach this masterful expression.[1]
The philosopher David Thorstad corroborates Lewis' point here in a section of a Reflective Altruism blog post about "EA celebrities".
I'm not an expert on AI, and there is so much fundamental uncertainty about the future of AI and the nature of intelligence, and so much fundamental disagreement, that it would be hard if not impossible to meaningfully discern the majority views of expert communities on AGI in anything like the way you can for fields like epidemiology, virology, or public health. So, covid-19 and AGI are just fundamentally incomparable in some important way.
But I do know enough about AI — things that are not hard for anyone to Google to confirm — to know that people in EA routinely make elementary mistakes, ask the wrong questions, and confidently hold views that the majority of experts disagree with.
Elementary mistakes include: getting the definitions of key terms in machine learning wrong; not realizing that Waymos can only drive with remote assistance.
Asking the wrong questions includes: failing to critically appraise whether performance on benchmark tasks actually translates into real world capabilities on tasks in the same domain (i.e. does the benchmark have measurement validity if its intended use is to measure general intelligence, or human-like intelligence, or even just real world performance or competence?); failing to wonder what (some, many, most) experts say are the specific obstacles to AGI.
Confidently holding views that the majority of experts disagree with includes: there is widespread, extreme confidence about LLMs scaling to AGI, but a survey of AI experts earlier this year found that 76% think current AI techniques are unlikely or very unlikely to scale to AGI.
The situation with AI is more forgivable than with covid because there's no CDC or SAGE for AGI. There's no research literature — or barely any, especially compared to any established field — and there are no textbooks. But, still, there is general critical thinking and skepticism that can be applied with AGI. There are commonsense techniques and methods to understanding the issue better.
In a sense, I think the best analogy for assessing the plausibility of claims about near-term AGI is investigating claims that someone possesses a supernatural power, such as a psychic who claims to be able to solve crimes via extrasensory perception, or investigating an account of a religious miracle, like a holy relic healing someone’s disease or injury. Or maybe an even better analogy is assessing the plausibility of the hypothesis that a near-Earth object is alien technology, as I discussed here. There is no science of extrasensory perception, or religious miracles. There is no science of alien technology (besides maybe a few highly speculative papers). Yet there are general principles of scientific epistemology, scientific skepticism, and critical thinking we can apply to these questions.
I resonate completely with Gregory Lewis’ dismay at how differently the EA community does research and forms opinions today from how GiveWell evaluates charities. I feel, like Lewis seems to, that the way it is now is a betrayal of EA’s founding values.
AGI is an easy topic for me to point out the mistakes in a clear way. A similar conversation could be had about longtermism, in terms of applying general critical thinking and skepticism. To wit: if longtermism is supposedly the most important thing in the world, then, after eight years of developing the idea, after multiple books published and many more papers, why haven't we seen a single promising longtermist intervention yet, other than those that long predate the term "longtermism"? (More on this here.)
Other overconfident, iconoclastic opinions in the EA community are less prominent, but you'll often see people who think they can outsmart decades of expert study of an issue with a little effort in their spare time. You see opinions of this sort in areas including policy, science, philosophy, and finance/economics. It would be impossible for me to know enough about all these topics to point out the elementary mistakes that are probably made in all of them. But even in a few cases where I know just a little, or suddenly feel curious and care to Google, I have been able to notice some elementary mistakes.
To distill the lesson of the covid-19 example into two parts:
- In cases where there is an established science or academic field or mainstream expert community, the default stance of people in EA should be nearly complete deference to expert opinion, with deference moderately decreasing only when people become properly educated (i.e., via formal education or a process approximating formal education) or credentialed in a subject.
- In cases where there is no established science or academic field or mainstream expert community, such as AGI, longtermism, or alien technology, the appropriate approach is scientific skepticism, epistemic caution, uncertainty, common sense, and critical thinking.
[1] My best attempt so far was my post "Disciplined iconoclasm".

FWIW, it's unclear to me how persuasive COVID-19 is as a motivating case for epistemic modesty. I can also recall plenty of egregious misses from public health/epi land, and I expect re-reading the podcast transcript would remind me of some of my own.
On the other hand, the bar would be fairly high: I am pretty sure both EA land and rationalist land had edge over the general population re. COVID. Yet the main battle would be over whether they had 'edge generally' over 'consensus/august authorities'.
Adjudicating this seems murky, with many 'pick and choose' factors ('reasonable justification for me, desperate revisionist cope for thee', etc.) if you have a favoured team you want to win. To skim a few:
For better or worse, I still agree with my piece on epistemic modesty, although perhaps I find myself an increasing minority amongst my peers.
On the other hand, many early critiques of GiveWell were basically "Who are you, with no background in global development or in traditional philanthropy, to think you can provide good charity evaluations?"
That seems like a perfectly reasonable, fair challenge to put to GiveWell. That’s the right question for people to ask! My impression is that GiveWell, starting early on and continuing today, has acted with humility and put in the grinding hard work to build credibility over time.
I don’t think GiveWell would have been a meaningfully useful project if a few people just spent a bit of their spare time over a few weeks to produce the research and recommendations. (Yet that seems sufficient in some cases, such as covid-19, for people in EA to decide to overrule expert opinion.)
It could have been different if there were already a whole field or expert community doing GiveWell-style cost-effectiveness evaluations for global health charities. Part of what GiveWell did was identify a gap in the “market” (so to speak) and fill it. They weren’t just replicating the effort of experts.
[Edited on Dec. 16 at 8:50 PM Eastern to add: to clarify, I mean the job GiveWell did was more akin to science journalism or science communication than original scientific research. They were building off of expert consensus rather than challenging it.]
As I recall, GiveWell initially was a project the founders tried to do in their spare time, and then quickly realized was such a big task it would have to be a full-time job. I also hazily recall them doing a lot of work to get up to speed, and also that they had, early on, (and still have) a good process for taking corrections from people outside the organization.
I think as a non-expert waltzing into an established field, you deserve the skepticism and the challenges you will initially get. That is something for you to overcome, that you should welcome as a test. If that is too hard, then the project is too hard.
After all, this is not about status, esteem, ego, or pride, right? It’s about doing good work, and about doing right by the aid recipients or the ultimate beneficiaries of the work. It's not about being right, it's about getting it right.
Based on my observations and interactions, EA had much more epistemic modesty and much less of a messiah complex in the early-to-mid-2010s, although I saw a genuinely scary and disturbing sign of a messiah complex from a LEAN employee/volunteer as early as 2016, who mused on a Skype call about the possibility of EA solving the world's problems in priority sequence. (That person, incidentally, also explicitly encouraged me and my fellow university EA group organizer in the Skype meeting to emotionally manipulate students into becoming dedicated effective altruists in a way that I took to be plainly unethical and repugnant. So, overall, a demented conversation.)
The message and tone in the early-to-mid-2010s was more along the lines of: 'we have these ideas about how our obligation to give to charity and about charity cost-effectiveness that might sound radical and counterintuitive, but we want to have a careful, reasonable conversation about it and see if we can convince people we're on the right track'. Whereas by now, in the mid-2020s, the message and tone feels more like: 'the EA community has superior rationality to 99.9% of people on Earth, including experts and academics in fields we just started thinking about 2 weeks ago, and it's up to us to save the world — no, the lightcone!'. I can see how the latter would be narratively and emotionally compelling to some people, but it's also extremely off-putting to many other people (like me, perhaps also to Gregory Lewis and other people he knows with backgrounds in public health/medicine/epidemiology/virology/etc., although I don't want to speak for him or the people he knows).
In fact, they were largely building off the efforts of recognized domain experts. See, e.g., this bibliography from 2010 of sources used in the "initial formation of [its] list of priority programs in international aid," and this 2009 analysis of bednet programs.
Yeah but didn’t it turn out that Gregory Lewis was basically wrong about masks, and the autodidacts he was complaining about were basically right? Am I crazy? How are you using that quote, of all things, as an example illustrating your thesis??
More detail: I think a normal person listening to the Gregory Lewis excerpt above would walk away with an impression: “it’s maybe barely grudgingly a good idea for me to use a medical mask, and probably a bad idea for me to use a cloth mask”. That’s the vibe that Lewis is giving. And if this person trusted Lewis, then they would have made worse decisions, and been likelier to catch COVID, than if they had instead listened to the people Lewis was criticizing. Because current consensus is: medical masks are super duper obviously good for not catching COVID, and cloth masks are not as good as medical masks but clearly better than nothing.
This was interesting to read! I don't necessarily think the points that Greg Lewis pointed out are that big of a deal because while it can sometimes be embarrassing to discuss and investigate things as non-experts, there are also benefits that can come from it. Especially when the experts seem to be slow or under political constraints or sometimes just wrong in the case of individual experts. But I agree that EA can fall into a pattern where interested amateurs discuss technical topics with the ambition (and confidence?) of domain experts -- without enough people in the room noticing that they might be out of their depth and missing subtle but important things.
Some comments on the UK government's early reaction to Covid:
Even if we assume that it wasn't possible for non-experts to do better than SAGE, I'd say it was still reasonable for people to have been worried that the government was not on top of things. The recent Covid inquiry lays out that SAGE was only used to assess the consequences of policies that the politicians presented before them; lockdown wasn't deemed politically feasible (without much thought -- it basically just wasn't seriously considered until very late). This led to government communications doing this weird dance where they tried to keep the public calm and speak about herd immunity and lowering the peak, but their measures and expectations did not match the reality of the situation.
Not to mention that when it came to the second lockdown later in 2020, by that point Boris Johnson was listening to epidemiologists who were just outright wrong. (Sunetra Gupta had this model that herd immunity had already been reached because there was this "iceberg" of not-yet-seen infections.) It's unclear how much similar issues were already a factor in February/March of 2020. (I feel like I vaguely remember a government source mentioning vast numbers of asymptomatic infections before the first lockdown, but I just asked Claude about summarizing the inquiry findings on this, and Claude didn't find anything that would point to this having been a factor. So, maybe I misremembered or maybe the government person did say that in one press interview as a possibility, but then it wasn't a decisive factor in policy decisions and SAGE itself obviously never took this seriously because it could be ruled out early on.)
So, my point is that you can hardly blame EAs for not leaving things up to the experts if the "experts" include people who even in autumn of 2020 thought that herd immunity had already been reached, and if the Prime Minister picks them to listen to rather than SAGE.
Lastly, I think Gregory Lewis was at risk of being overconfident about the relevance of expert training or "being an expert" when he said that EAs who were right about the government U-turn about lockdowns were right only in the sense of a broken clock. I was one of several EAs who loudly and clearly said "the government is wrong about this!" I even asked in an EA Covid group if we should be trying to get the attention of people in government about it. This might have been like 1-2 days before they did the U-turn. How would Greg Lewis know that I (and other non-experts like me -- I wasn't the only one who felt confident that the government was wrong about something right before March 16th) had not done sound steps of reasoning at the time?
I'm not sure myself; I admittedly remember having some weirdly overconfident adjacent beliefs at the time, not about the infection fatality rate [I think I was always really good at forecasting that -- you can go through my Metaculus commenting history here], but about what the government experts were basing their estimates on. I for some reason thought it was reasonably plausible that the government experts were making a particular, specific mistake about interpreting the findings from the cruise ship cases, but I didn't have much evidence of them making that specific mistake [other than them mentioning the cruise ship in connection with estimating a specific number], nor would it even make sense for government experts to stake a lot of their credence in just one single data point [because neither did I]. So, me thinking I know that they were making a specific mistake, as opposed to just being wrong for reasons that must be obscure to me, seems like pretty bad epistemics.

But anyway, other than that, I feel like my comments from early March 2020 aged remarkably well and I could imagine that people don't appreciate how much you will know and understand about a subject if you follow it obsessively with all your attention every single day. And it doesn't take genius statistics skill to piece together infection fatality estimates and hospitalization estimates from different outbreaks around the world. Just using common sense and trying to adjust for age stratification effects with very crude math, and reasoning about where countries do good or bad testing (like, reading about the testing in Korea, it became clear to me that they probably were not missing tons of cases, which was very relevant in ruling out some hypothesis about vast amounts of asymptomatic infections), etc. This stuff was not rocket science.
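For what it's worth, the "very crude math" described here can be made concrete with a small sketch (a reconstruction of the general technique, not the commenter's actual calculations): take age-specific IFR estimates from one outbreak and re-weight them by another population's age structure. All numbers below are invented.

```python
# A minimal sketch of crude age-stratification adjustment: re-weight
# hypothetical age-specific IFR estimates by a population's age structure.
# All values are illustrative, not real data.

# Hypothetical age-specific IFR estimates (fractions) from outbreak data.
ifr_by_age = {"0-29": 0.0001, "30-59": 0.002, "60+": 0.04}

# Hypothetical age distributions (population fractions) in two countries.
age_dist_young = {"0-29": 0.55, "30-59": 0.35, "60+": 0.10}
age_dist_old   = {"0-29": 0.30, "30-59": 0.40, "60+": 0.30}

def population_ifr(ifr, age_dist):
    """Population-wide IFR, assuming equal infection rates across ages."""
    return sum(ifr[band] * age_dist[band] for band in ifr)

print(f"younger population IFR: {population_ifr(ifr_by_age, age_dist_young):.4f}")
print(f"older population IFR:   {population_ifr(ifr_by_age, age_dist_old):.4f}")
```

Even in this toy version, the older population's expected IFR comes out several times higher, which is the kind of rough transfer across outbreaks the comment is describing.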
For balance, the established authorities' early beliefs and practices about COVID did not age well. Some of that can be attributed to governments doing government things, like downplaying the effectiveness of masks to mitigate supply issues. But, for instance, the WHO fundamentally missed on its understanding of how COVID is transmitted . . . for many months. So we were told to wash our groceries, a distraction from things that would have made a difference. Early treatment approaches (e.g., being too quick to put people on vents) were not great either.
The linked article shows that some relevant experts had a correct understanding early on but struggled to get acceptance. “Dogmatic bias is certainly a big part of it,” one of them told Nature later on. So I don't think the COVID story would present a good case for why EA should defer to the consensus view of experts. Perhaps it presents a good case for why EA should be very cautious about endorsing things that almost no relevant expert believes, but that is a more modest conclusion.
Is this a response to what Gregory Lewis said? I don’t think I understand.
Maybe this is subtle/complicated… Are the examples you’re citing the actual consensus views of experts? Or are they examples of governments and institutions like the World Health Organization (WHO) misunderstanding what the expert consensus was and/or misrepresenting the expert consensus to the public?
This excerpt from the Nature article you cited makes it sound like the latter:
Did random members of the EA community — as bright and eager as they might be — with no prior education, training, or experience with relevant fields like public health, epidemiology, virology, and medicine outsmart the majority of relevant experts on this question (airborne vs. not) or any others, not through sheer luck or chance, but by actually doing better research? This is a big claim, and if it is to be believed, it needs strong evidentiary support.
In this Nature article, there is an allegation that the WHO wasn’t sufficiently epistemically modest or deferential to the appropriate class of experts:
And there is an allegation around communications, rather than around the science itself:
So, let’s say there’s another pandemic. Which is the better strategy?
Strategy A: Read forum posts and blog posts by people in the EA community doing original research and opining on epidemiology, virology, and public health who have never so much as cracked open a relevant textbook.
Strategy B: Survey sources of expert opinion, including publications like Nature, open letters written on behalf of expert communities, statements by academic and scientific organizations and so on, to determine if a particular institution like the WHO is accurately communicating the majority view of experts, or if they’re a weird outlier adhering to a minority view, or just communicating the science badly.
I would say the Nature article is support for Strategy B and not at all support for Strategy A.
You could even interpret it as evidence against Strategy A. If you believe the criticism in Nature is right, even experts in an adjacent field or subfield, who have prestigious credentials like advising the WHO, can get things catastrophically wrong by being insufficiently epistemically modest and not deferring enough to the experts who know the most about a subject, and who have done the most research on it. If this is true, then that should make you even more skeptical about how reliable the research and recommendations will be from a non-expert blogger with no relevant education who only started learning about viruses and pandemics for the first time a few weeks ago.
What is described in the Nature piece sounds like incredibly subtle mistakes (if we can for sure call them mistakes at this point). Lewis’ critique of the EA community is that it was making incredibly obvious, elementary mistakes. So, why think the EA community can outperform experts on avoiding subtle mistakes if it can’t even avoid the obvious mistakes?
One of the recent examples of hubris I saw on the EA Forum was someone asserting that they (or the EA community at large) could resolve, within the next few months/years, the fundamental uncertainty around the philosophical assumptions that go into cost-effectiveness estimates comparing shrimp welfare and human welfare. Out of the topics I know well, the philosophy of consciousness might be #1, or at least near the top. I don’t know how to convey the level of hubris that comment betrays.
It would be akin to saying that a few people in EA, with no training in physics, could figure out, in a few months or years, the correct theory of quantum gravity and reconcile general relativity and quantum mechanics. Or that a few people in EA, with no training in biology or medicine, would be able to cure cancer within a few months or years. Or that, with no background in finance, business, or economics, they’d be able to launch an investment fund that consistently, sustainably achieves more alpha than the world’s top-performing investment funds every year. Or that, with no background in engineering or science, they’d be able to beat NASA and SpaceX in sending humans to Mars.
In other words, this is a level of hubris that’s unfathomable, and just not a serious or credible way to look at the world or your own abilities, in the absence of strong, clear evidence that you possess even average abilities relative to the relevant expert class.
I don’t know the first thing about epidemiology, virology, public health, or medicine. So, I can’t independently evaluate how appropriate or correct it is/was for Gregory Lewis to be so aggravated by the EA community’s initial response to covid-19 that he considered distancing himself from the movement. I can believe that Lewis might be correct because a) he has the credentials, b) the way he’s describing it is how it essentially always turns out when non-experts think they can outsmart experts in a scientific or medical field without first becoming experts themselves, and c) in areas where I do know enough to independently evaluate the plausibility of assertions made by people in the EA community on the object level, I feel as infuriated and incredulous as Lewis described feeling in that 80,000 Hours interview.
I see these sorts of wildly overconfident claims about being able to breezily outsmart experts on difficult scientific, philosophical, or technical problems as moderately, but not dramatically, more credible than claims about UFOs or ESP or whatever. (Which apparently is not that rare.)
I see a general rhetorical or discursive strategy employed across many people with fringe views, be they around pseudoscience, fake medicine, or conspiracy theories. First, identify some scandal or blunder or internecine conflict within some scientific expert community. Second, say, “Aha! They’re not so smart, after all!” Third, use this as support for whatever half-cocked pet theory you came up with. This is obviously a logically invalid argument, as in, obviously the conclusion does not logically follow from the premises. The standard for scientific experts should not be perfection; the standard for amateurs, dilettantes, and non-expert iconoclasts should be showing they can objectively do better than the average expert — not on a single coin flip, but on an objective, unbiased measure of overall performance.
There is a long history in the LessWrong community of opposition to institutional science, with the typical amount of intellectual failure that usually comes with opposition to institutional science. There is a long history of hyperconfidently scorning expert consensus and being dead wrong. Obviously, there is significant overlap between the LessWrong community and the EA community, and significant influence by the former on the latter. What I fear is that this anti-scientific attitude and undisciplined iconoclasm has become a mainstream, everyday part of the EA community, in a way that was not true, or at least not nearly as true, in my experience, in the early-to-mid-2010s.
The obvious rejoinder is: if you really can objectively outperform experts in any field you care to try your hand at for a few weeks, go make billions of dollars right now, or do any other sort of objectively impressive thing that would provide evidence for the idea that you have the abilities you think you do. Surely, within a few months or years of effort, you would have something to show for it. LessWrong has been around for a long time, EA has been around for a long time. There’s been plenty of time. What’s the excuse for why people haven’t done this yet?
And, based on base rates, what would you say is more likely: people being misunderstood iconoclastic self-taught geniuses who are on the cusp of greatness or people just being overly confident based on a lack of experience and a lack of understanding of the problem space?
I agree with the main takeaway of this post - settled scientific consensus should generally be deferred to - and would add that, from a community perspective, more effort needs to be put into having EAs who can do so do science communication, or into science communicators posting on e.g. the EA Forum. I will have a think about whether I can do anything here, or in my in-person group organising.
But I think some of this post is just describing personality traits of the kind of people who come to EA, and saying they're bad personality traits. And I'm not really sure what you want to be done about that.
Huh? I think all the things mentioned in this post (either by me or by Gregory Lewis) are behaviours that people can consciously choose to engage in or not. I don’t think any of them are personality traits.
What do you think is an example of a personality trait mentioned in this post?
I'll give a couple of examples:
- I mean, isn't the idea that they might be able to contribute something meaningful, and doing so is both low-effort and very good if it works, so worth the shot?
- If EA deferred to the moral consensus, it would cease to exist, because the moral consensus is that altruism should be relational and you have no obligation to do anything else. People who have tendencies to defer to the moral consensus don't join up with EA.
Again, not that these are optimal, but they basically seem to me to be either pretty stable individual difference things about people or related to a person's age (young adult, usually). It would be great to have more older adults on-board with the core idea of doing the most good possible being a tool of achieving self-actualisation, as well as more acceptance of this core idea among mainstream society. I hope we will get there.
I don’t think any of these are personality traits. These are ideas or strategies that people can discuss and decide whether they’re wise or unwise. You could, conceivably, have a discussion about one or more of these, become convinced that the way you’ve been doing things is unwise, and then change your behaviour subsequently. I wouldn’t call that "changing your personality". I don't see why these would be stable traits, as opposed to things that people can change by thinking about it and deciding to act differently.
I think there might be serious problems with the ideas or strategies that you described, if those were the ideas or strategies at play in EA. But my feeling is you gave a bit of a watered-down, euphemistic retelling of the ideas and strategies than what I tend to see people in EA actually act on, or what they tend to say they believe.
For instance, on covid-19, it seems like some people in EA still think (as evidenced by the comments on this post) that they actually repeatedly outsmarted the expert/scientific communities on the relevant public health questions — not just by chance or luck, in a "broken clock is right twice a day" or "calling a coin flip" way, but by general superior rationality/epistemology — rather than following a much more epistemically modest, cautious rationale of we "might be able to contribute something meaningful, and doing so is both low-effort and very good if it works, so worth the shot".
I don't buy that thinking this way is a stable personality trait that is beyond your power to change as opposed to something that you can be talked out of.
It seems weird to call any of these things personality traits. Is being an act consequentialist as opposed to a rule consequentialist a personality trait? Obviously not, right? It seems equally obvious to me that what we're talking about here are not personality traits; calling them that strikes me as just as weird as calling subscribing to rule consequentialism a personality trait.