About declaring it a "pandemic," I've seen the WHO reason as follows (paraphrasing):
«Once we call it a pandemic, some countries might throw up their hands and say "we're screwed," so we should better wait before calling it that, and instead emphasize that countries need to try harder at containment for as long as there's still a small chance that it might work.»
So overall, while the OP's premise (that the WHO's use of the term "pandemic" carries major legal/institutional consequences) seems false, I'm now even more convinced of the key claim I wanted to argue for: that the WHO response provides neither an argument against epistemic modesty in general, nor one for the epistemic superiority of "informed amateurs" over experts on COVID-19.
Yeah, I think that's a good point.
I'm not sure I can update in favor of or against modest epistemology, because it seems to me that my true rejection is mostly "my brain can't do that." But if I could update further against modest epistemology, the main Covid-19-related example for me would be how long it took some countries to realize that flattening the curve, instead of squishing it, would lead to far more deaths and tragedy than people initially seemed to think. I realize it's hard to distinguish actual government opinion from bad journalism, but I'm pretty confident there was a time when informed amateurs could see that experts were operating under probably false, or at least dubious, assumptions. (I'm happy to elaborate if anyone's interested.)
I started working on them in December. The virus infected my attention, but I'm back working on the posts now. I have two new ones fully finished. I will publish them once I have four new ones. (If anyone is particularly curious about the topic and would like to give feedback on drafts, feel free to get in touch!)
I don't remember the exact source, sorry.
FWIW, I now think that warm conditions very likely do slow down transmission by a lot, mostly because there are many cold countries where outbreaks quickly became uncontrollable, while so far this hasn't happened in any hot country.
I just read (surprisingly, to me) that Thailand ranks extremely high in pandemic preparedness and early detection. This makes me downweight the warmth hypothesis a bit.
On lists of "most at-risk countries" published in late January, Singapore also ranked lower than Japan and Korea. Thailand (first on that list) would be a better example of a warm location being hit less badly than predicted: it reported a lot of cases initially, but the virus indeed seems not to have spread as much as in some other locations. Warmth could be the decisive factor, but there might also be other reasons.
Ah, my mistake – I had heard this definition before, which seems slightly different.
Probably I was wrong here. After reading this abstract, I realize that the way Norcross wrote about it is also compatible with a claim weaker than linear aggregation of utility. I think I just assumed he must mean linear aggregation of utility because everything else would seem weirdly arbitrary. :)
I changed it to this – curious if you still find it jarring?
Less so! The "total" still indicates the same conclusion I thought would be jumping the gun a bit, but if that's your takeaway it's certainly fine to leave it. Personally I would just write "utilitarianism" instead of "total utilitarianism."
I'm not very familiar with the terminology here, but I remember that in this paper, Alastair Norcross used the term "thoroughgoing aggregation" for what seems to be linear addition of utilities in particular. That's what I had in mind anyway, so I'm not sure I believe anything different from you.

The reason I commented above is that I don't understand the choice of "total utilitarianism" instead of just "utilitarianism." Doesn't every form of utilitarianism use linear addition of utilities in a case where population size remains fixed? But only total utilitarianism implies the repugnant conclusion. Your conclusion section IMO suggests that Harsanyi's theorem (which considers a case where population size is indeed fixed) does something to help motivate total utilitarianism over other forms of utilitarianism, such as prior-existence utilitarianism, negative utilitarianism, or average utilitarianism. You already acknowledged in your reply further above that it doesn't do much of that. That's why I suggested rephrasing your conclusion section. Alternatively, you could explain in what ways you think the utilitarian alternatives to total utilitarianism are somehow contrived or not in line with Harsanyi's assumptions.

And probably I'm missing something about how you think about all of this, because the rest of the article seemed really excellent and clear to me. I just find the conclusion section really jarring.
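To make the fixed-population point concrete, here's a sketch of the linear aggregation I have in mind (the notation is mine, not Norcross's):

```latex
% Fixed population of n individuals, with utilities u_1(A), ..., u_n(A)
% in outcome A. "Thoroughgoing" (linear) aggregation ranks outcomes by:
\[
  U(A) \,=\, \sum_{i=1}^{n} u_i(A)
\]
% With n held fixed, total and average utilitarianism agree, since the
% average U(A)/n is just a monotone rescaling of the sum; the views only
% come apart once the population size n is allowed to vary.
```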
I agree it doesn't say much, see e.g. Michael's comment.
In that case, it would IMO be better to change "total utilitarianism" to "utilitarianism" in the article. Utilitarianism is different from other forms of consequentialism in that it uses thoroughgoing aggregation. Isn't that what Harsanyi's theorem mainly shows? It doesn't really add any intuitions about population ethics. Mentioning the repugnant conclusion in this context feels premature.
Chomsky's universal grammar: There's not enough language data for children to learn languages in the absence of inductive biases.
I think there's more recent work in computational linguistics that challenges this. Unfortunately I can't summarize it since I only took an overview course a long time ago. I've been wondering whether I should read up on language evolution at some point. Mostly because it seems really interesting, but also because it's a field I haven't seen being discussed in EA circles, and it seems potentially useful to have this background when it comes to evaluating/interpreting AI milestones and so on. In any case, if someone understands computational linguistics, language evolution and how it relates to the nativism debate, I'd be extremely interested in a summary!
Okay, I agree that going "from perfect to flawed" isn't the core of the intuition.
Moreover, I don't think the repugnant conclusion becomes much more acceptable to most people if the initial population merely enjoys a very high quality of life rather than perfect satisfaction.
This seems correct to me too.
I mostly wanted to point out that I'm pretty sure it's a strawman that the repugnant conclusion primarily targets anti-aggregationist intuitions. I suspect people would also find the conclusion strange if it involved smaller numbers. When a family decides how many kids to have, and they estimate that the average quality of life per person in the family (especially with a lot of weight on the parents themselves) will be highest with two children, most people would find it strange to go for five children just because that did best in terms of total welfare.