Pronouns: she/her or they/them.
I got interested in effective altruism back before it was called effective altruism, back before Giving What We Can had a website. Later on, I got involved in my university EA group and helped run it for a few years. Now I’m trying to figure out where effective altruism can fit into my life these days and what it means to me.
I mentioned that you often see journalists or other people not intimately acquainted with effective altruism conflate ideas like longtermism and transhumanism (or related ideas about futuristic technologies). This is a forgivable mistake because people in effective altruism often conflate them too.
If you think superhuman AGI is 90% likely within 30 years, or whatever, then obviously that will impact everyone alive on Earth today who is lucky (or unlucky) enough to live until it arrives, plus all the children who will be born between now and then. Longtermists might think the moral value of the far future makes this even more important. But, in practice, it seems like people who aren't longtermists, but who also think superhuman AGI is 90% likely within 30 years, are just as concerned about AI. So, is that concern really longtermist?
Whether society ends up spending more money on asteroid defense, or more money on monitoring large volcanoes, is orders of magnitude more important than whether people in the EA community (or outside of it) understand the intellectual lineage of these ideas and how novel or non-novel they are. I don't know if that's exactly what you were saying, but I'm happy to concede that point anyway.
To be clear, NASA's NEO Surveyor mission is one of the things I'm most excited about in the world. It makes me feel so happy thinking about it. And exposure to Bostrom's arguments from the early 2000s to the early 2010s is a major part of what convinced me that we, as a society, were underrating low-probability, high-impact risks. (The Canadian journalist Dan Gardner's book Risk also helped convince me of that, as did other people I'm probably forgetting right now.)
Even so, I still think it's important to point out when ideas are not novel, or not that novel, for all the usual reasons to sweat the small stuff and not let something slide that, on its own, seems like an error or a bit of a problem, just because it might plausibly benefit the world in some way. It's a slippery slope, for one...
I may not have made this clear enough in the post, but I completely agree that if, for example, asteroid defense is not a novel idea, but a novel idea, X, tells you that you should spend 2x more money on asteroid defense, then spending 2x more on asteroid defense counts as a novel X-ist intervention. That's an important point, I'm glad you made it, and I probably wasn't clear enough about it.
However, I am making the case that all the compelling arguments to do anything differently, including spend more on asteroid defense, or re-prioritize different interventions, were already made long before "longtermism" was coined.
If you want to argue that "longtermism" was a successful re-branding of "existential risk", with some mistakes thrown in, I'm happy to concede that. But then I would ask: is everyone aware it's just a re-branding? Is there truth in advertising here?
I agree that the scholarship of Bostrom and others starting in the 2000s on existential risk and global catastrophic risk, particularly taking into account the moral value of the far future, does seem novel, and does also seem actionable and important, in that it might, for example, make us re-do a back-of-the-envelope calculation on the expected value of money spent on asteroid defense and motivate us to spend 2x more (or something like that).
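For concreteness, here's a minimal sketch of the kind of back-of-the-envelope calculation I mean, with every number invented purely for illustration (these are not real estimates of asteroid risk or program costs):

```python
# Hypothetical back-of-the-envelope expected-value calculation for asteroid
# defense. Every number below is invented for illustration only.

deaths_if_impact = 1e9        # assumed deaths from a large asteroid impact
p_impact_per_century = 1e-4   # assumed chance of such an impact this century
risk_reduction = 0.5          # assumed fraction of the risk a program removes
program_cost = 5e9            # assumed program cost in dollars

expected_deaths_averted = deaths_if_impact * p_impact_per_century * risk_reduction
cost_per_death_averted = program_cost / expected_deaths_averted

print(f"Expected deaths averted: {expected_deaths_averted:,.0f}")
print(f"Cost per expected death averted: ${cost_per_death_averted:,.0f}")

# The move I'm attributing to Bostrom-style arguments: weighting the far
# future multiplies the benefit side, which can justify spending 2x more.
far_future_weight = 2         # assumed; the actual arguments gesture at far
                              # larger multipliers
print(f"Weighted benefit: {expected_deaths_averted * far_future_weight:,.0f}")
```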
As someone who was paying attention to this scholarship long before anyone was talking about "longtermism", I was pretty disappointed when I found out "longtermism" was just a recapitulation of that older scholarship, plus a grab bag of other stuff that was really unconvincing, or stuff that societies had already been doing for generations, or stuff that just didn't make sense.
If you're saying that longtermism is not a novel idea, then I think we might agree.
Everything is relative to expectations. I tried to make that clear in the post, but let me try again. I think if something is pitched as a new idea, then it should be a new idea. If it's not a new idea, that should be made clear. The kind of talk and activity I've observed around "longtermism" is incongruent with the notion that it's an idea that's at least decades and quite possibly many centuries old, about which much, if not most or all, of the low-hanging fruit has already been plucked — if not in practice, then at least in research.
For instance, if you held that notion, you would probably not think the amount of resources — time, attention, money, etc. — that was reallocated around "longtermism" roughly in the 2017-2025 period was justified, nor would you think the rhetoric around "longtermism" was justified.
You can find places where Will MacAskill says that longtermism is not a new idea, and references things like Nick Bostrom's previous work, the Long Now Foundation, and the Seventh Generation philosophy. That's all fine and good. But What We Owe The Future and MacAskill's discussions of it, like on the 80,000 Hours Podcast, don't come across to me as a recapitulation of a decades-old or centuries-old idea. I also don't think the effective altruism community's energy around "longtermism" would have been what it's been if they genuinely saw longtermism as non-novel.
For example, MacAskill defines longtermism as "the idea that positively influencing the long-term future is a key moral priority of our time." Why our time? Why not also the time of the founders of Oxford University 929 years ago or whenever it was? Sure, there's the time of perils argument, but, objections to the time of perils argument aside, why would a time of perils-esque argument also apply to all the non-existential risk-related things like economic growth, making moral progress, and so on?
Thanks, that's very helpful.
I'm curious why you say that about the accuracy/performance of the conclusions of the EA community with regard to covid. Are you saying it's just overly complicated and messy to evaluate these conclusions now, even to your own satisfaction? Or do you personally have a sense of how good or bad the conclusions were overall, and you just don't think you could convince people in EA of your sense of things?
The comparison that comes to mind for me is how amateur investors (including those who don't know the first thing about investing, how companies are valued, GAAP accounting, and so on) always seem to think they're doing a great job. Part of this is that they typically don't even benchmark their performance against market indexes like the S&P 500. Or, if they do, they do it in a biased, non-rigorous way, e.g., my portfolio of 3 stocks went up a lot recently, so let me compare it to the S&P 500 year-to-date. So, they're not even measuring their performance properly in the first place, yet they believe they're doing a great job anyway.
Studies of even professional investors find it's rare for an investor to beat the market over a 5-year period, and even rarer for an investor who beats the market in one 5-year period to beat it again in the next 5-year period. There actually seems to be surprisingly weak correlation between beating the market in one period and beating it in the next. Using your coin flip analogy, if every stock trade is a bet on a roughly 50/50 proposition, i.e., "this stock will beat the market" or "this stock won't beat the market", then you need a large sample size of trades to rule out the influence of chance. It's so easy for amateurs to cherry-pick trades, prematurely declare victory (e.g. say they beat the market the moment a stock goes up a lot, rather than waiting until the end of the quarter or the end of the year), become overconfident based on too small a number of trades (e.g. they just bought Apple stock), or not benchmark their performance against the market at all.
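As a rough illustration of why sample size matters here (a sketch with invented numbers, not a claim about any particular investor's record), a lucky streak on a handful of 50/50 calls is entirely unremarkable:

```python
# How often pure luck looks like skill on small samples of 50/50 calls.
from math import comb

def prob_at_least(k_wins: int, n_trades: int, p: float = 0.5) -> float:
    """Probability of at least k_wins successes in n_trades fair coin flips."""
    return sum(comb(n_trades, k) * p**k * (1 - p)**(n_trades - k)
               for k in range(k_wins, n_trades + 1))

# Winning 7 of 10 trades happens by sheer chance about 17% of the time:
print(f"P(>=7/10 by luck):    {prob_at_least(7, 10):.3f}")

# The same 60%+ hit rate becomes hard to attribute to luck only with a
# much larger sample:
print(f"P(>=60/100 by luck):  {prob_at_least(60, 100):.3f}")
print(f"P(>=300/500 by luck): {prob_at_least(300, 500):.6f}")
```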
Seeing these irrationalities so often and so viscerally, and seeing how hard it is to talk people out of them even when you can show them the research and expert opinion, or explain these concepts, I'm extremely skeptical of people who have just an intuitive, gut feeling that they've outperformed experts on a statistically significant sample of calls or predictions, in the absence of any kind of objective accounting of their performance. Feeling like one is winning is just too tempting, and feels too good, for most people to take a moment of sober second thought and double-check that feeling against an objective measure (in the case of stocks, a market index), ask whether they can rule out luck (e.g., they just bought Apple and that's it), and ask whether they can rule out bias in their assessment of performance (e.g., checking the S&P 500 right when a favourite stock has just gone up a lot).
If the process was as bad as you say, as in, people who have done a few weeks of reading on the relevant science and medicine making elementary mistakes, then I worry a lot about the psychological bias involved in people recalling and subjectively assessing their own track record, and I'm skeptical of any sense of confidence they have about it. If we don't need people who understand science and medicine to do science and medicine properly, then a lot of our education system and our scientific and medical institutions are a waste. Given that it's just common sense that understanding a subject better should lead you to make better calls on that subject — overall, over the long term, statistically — we should not violate common sense on the basis of a few amateurs guessing a few coin flips better than experts, and we should especially not violate it when we can't even confirm that actually happened.
"If you took this seriously, in 2011 you'd have had no basis to trust GiveWell (quite new to charity evaluation, not strongly connected to the field, no credentials) over Charity Navigator (10 years of existence, considered mainstream experts, CEO with 30 years of experience in charity sector)."
Well, no. Because I did hold that view very seriously (as I still do) in the late 2000s and early 2010s, and I came to trust GiveWell.
Charity Navigator doesn't even claim to evaluate cost-effectiveness; they don't do cost-effectiveness estimates.
Even prior to GiveWell, there were similar ideas kicking around. A clunky early term that was used was 'philanthrocapitalism' (which is a mouthful and also ambiguous). It meant that charities should seek an ROI in terms of impact like businesses do in terms of profit.
Back in the day, I read the development economist William Easterly's blog Aid Watch (a project of NYU's Development Research Institute) and he called it something like the smart aid movement, or the smart giving movement.
The old blog is still there in the Wayback Machine, but the Wayback Machine doesn't allow for keyword search, so it's hard to track down specific posts.
I had forgotten until I just went spelunking in the archive that William Easterly and Peter Singer had a debate in 2009 about global poverty, foreign aid, and charity effectiveness. The blog post summary says that even though it was a debate and they disagreed on things, they agreed on recommendations to donate to some specific charities.
My point here is that charity effectiveness had been a public conversation involving aid experts like Easterly going back a long time. You never would have taken away from this public conversation that you should pay attention to something like Charity Navigator rather than something like GiveWell.
In the late 2000s and early 2010s, what international development experts would have told you to look at Charity Navigator?
"This feels like a Motte ('skeptical of any claim that an individual or a group is competent at assessing research in any and all extant fields of study') and Bailey (almost complete deference, with deference only decreasing with formal education or credentials). GiveWell obviously never claimed to be experts in much beyond GHW charity evaluation."
I might have done a poor job getting across what I'm trying to say. Let me try again.
What I mean is that, in order for a person or a group of people to avoid deferring to experts in a field, they would have to be competent at assessing research in that field. And maybe they are for one or a few fields, but not all fields. So, at some point, they have to defer to experts on some things — on many things, actually.
What I said about this wasn't intended as a commentary on GiveWell — sorry for the confusion. I think GiveWell's approach was sensible. They realized that competently assessing the relevant research on global poverty/global health would be a full-time job, and they would need to learn a lot, and get a lot of input from experts — and still probably make some big mistakes. I think that's an admirable approach, and the right way to do it.
I think this is quite different from spending a few weeks researching covid and trying to second-guess expert communities, rather than just trying to find out what the consensus views among expert communities are. If some people in EA had decided in, say, 2018 to start focusing full-time on epidemiology and public health, and then started weighing in on covid-19 in 2020 — while actively seeking input from experts — that would have been closer to the GiveWell approach.
This sounds like outcome bias to me, i.e., believing in retrospect that a decision was the right one because it happened to turn out well. For example, if you decide to drive home drunk and don't crash your car, you could believe based on outcome bias that that was the right decision.
There may also be some hindsight bias going on, where in retrospect it's easier to claim that something was absolutely, obviously the right call, when, in fact, at the time, based on the available evidence, the optimally rational response might have been to feel a significant amount of uncertainty.
I don't know if you're right that someone taking their advice from Gregory Lewis in this interview would have put themselves more at risk. Lewis said it was highly uncertain (as of mid-April 2020) whether medical masks were a good idea for the general population, but to the extent he had an opinion on it at the time, he was more in favour than against. He said it was highly uncertain what the effect of cloth masks would ultimately turn out to be, but highlighted the ambiguity of the research at the time. There was a randomized controlled trial that found cloth masks did worse than the control, but many people in the control group were most likely wearing medical masks. So, it's unclear.
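To see how that kind of control-group contamination can flip a trial's apparent result, here's a toy simulation; every number in it is invented for illustration, and none of them come from the actual trial:

```python
# Toy simulation: a trial where the "control" arm is contaminated because most
# of its participants wear medical masks anyway. All parameters are invented.
import random

random.seed(0)
N = 100_000

base_risk = 0.10      # assumed infection risk with no mask
cloth_rr = 0.8        # assumed relative risk with a cloth mask (a real benefit)
medical_rr = 0.5      # assumed relative risk with a medical mask
contamination = 0.7   # assumed share of "control" arm wearing medical masks

def infections(n: int, risk: float) -> int:
    """Count infections among n people who each face the given risk."""
    return sum(random.random() < risk for _ in range(n))

cloth_arm = infections(N, base_risk * cloth_rr)

n_medical = int(N * contamination)
control_arm = (infections(n_medical, base_risk * medical_rr)
               + infections(N - n_medical, base_risk))

print(f"Cloth arm infections:     {cloth_arm}")    # roughly 8,000 expected
print(f"'Control' arm infections: {control_arm}")  # roughly 6,500 expected

# Cloth masks look worse than "control" here even though they genuinely
# reduce risk relative to truly wearing no mask (0.08 vs. 0.10).
```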
The point he was making with the cloth masks example, as I took it, was simply that although he didn't know how the research was ultimately going to turn out, people in EA were missing stuff that experts knew about and that was stated in the research literature. So, rather than engaging with what the research literature said and deciding based on that information, people in EA were drawing conclusions from less complete information.
I don't know what Lewis' actual practical recommendations were at the time, or if he gave any publicly. It would be perfectly consistent to say, for example, that you should wear medical masks as a precaution if you have to be around people and to say that the evidence isn't clear yet (as of April 2020) that medical masks are helpful in the general population. As Lewis noted in the interview, the problem isn't with the masks themselves, it's that medical professionals know how to use them properly and the general population doesn't. So, how does that cash out into advice?
You could decide: ah, why bother? I don't even know if masks do anything or not. Or you could think: oh, I guess I should really make sure I'm wearing my mask right. What am I supposed to do...?
Similarly, with cloth masks, when it became better known how much worse cloth masks were than medical masks, everyone stopped wearing cloth masks and started wearing KN95 or N95 masks, or similar. If someone's takeaway from that April 2020 interview with Lewis was that the efficacy of cloth masks was unclear and they might even turn out to be net harmful, but that medical masks work really well if you use them properly, again, they could decide at least two different things. They could think: oh, why bother wearing any mask if my cloth mask might do more harm than good? Or they could think: wow, medical masks are so much better than cloth masks, I really should be wearing those instead.
Conversely, people in EA promoting cloth masks might have done more harm than good, depending, first, on whether anyone listened to that advice in the first place, and, second, on whether the people who did listen chose cloth masks rather than no mask, or wore cloth masks instead of medical masks.
Personally, my hunch is that if very early on in the pandemic (like March or April 2020), there had been less promotion of cloth masks, and if more people had been told the evidence looked much better for medical masks than for cloth masks (slight positive overall story for medical masks, ambiguous evidence for cloth masks, with a possibility of them even making things worse), then people would have really wanted to switch from cloth masks to medical masks — because this is what happened later on when the evidence came in and it was much clearer that medical masks were far superior.
The big question mark hanging over all of this is that I don't know, or don't remember, at what point the supply chain was able to provide enough medical masks for everybody.
I looked into it and found a New York Times article from April 3, 2020 that discusses a company in Chicago that had a large supply of KN95 masks, although this is still in the context of providing masks to hospitals:
One Chicago-based company, iPromo, says it has been in the KN95 importing business for a month. It had previously developed relationships with Chinese suppliers for its main business, churning out custom logo-adorned promotional knickknacks like mugs, water bottles, USB flash drives and small containers of hand sanitizer.
The company’s website advertises KN95 masks at $2.96 apiece for hospitals, with delivery in five to seven days, although its minimum order is 1,000 masks.
More masks are available because coronavirus transmission in China has been reduced. “They have so much stock,” said Leo Friedman, the company’s chief executive, during an interview Thursday. “They ramped up and now it’s a perfect storm of inventory.”
I found another source that says, "Millions of KN95 masks were imported between April 3 and May 7 and many are still in circulation." But this is also still in the context of hospitals, not the general public. That's in the United States.
I found a Vice article from July 13, 2020 that implies by that point it was easy for anyone to buy KN95 masks. Similarly, starting on or around June 30, 2020, there were vending machines selling KN95 masks in New York City subway stations. Although apparently KN95 masks were for sale in vending machines in New York as early as May 29.
In any case:
"but the actual challenges were usually closer to a reflexive dismissal"
I don't know the specific, actual criticisms of GiveWell you're referring to, so I can't comment on them — how fair or reasonable they were.
My point is more abstract: just that, in general, it is fair to challenge non-experts who are trying to do serious work in an area outside their expertise. It's a challenge that anyone in the position of the GiveWell founders should gladly and willingly accept, or else they're not up to the job.
Reputation, trust, and credibility in an area where you are a neophyte are not owed to you automatically. They're something you earn by providing evidence that you are trustworthy, credible, and deserve a good reputation.
"We can often just look at object-level work, study research & responses to the research, and make up our mind. Credentials are often useful to navigate this, but not always necessary."
This is hazy and general, so I don't know what you specifically mean by it. But there are all kinds of reasons that non-experts are, in general, not competent to assess the research on a topic. For example, they might be unacquainted with the nuances of statistics, experimental designs, and theories of underlying mechanisms involved in studies on a certain topic. Errors or caveats that an expert would catch might be missed by an amateur. And so on.
I am extremely skeptical of any claim that an individual or a group is competent at assessing research in any and all extant fields of study, since this would seem to imply that the individual or group possesses preternatural abilities that just aren't realistic given what we know about human limitations. I think the sort of Tony Stark or Sherlock Holmes general-purpose geniuses of fiction are only fictional. But even if they existed, we would know who they are, and they would have a litany of objectively impressive accomplishments.