[Takeaways from Covid forecasting on Metaculus]
I’m probably going to win the first round of the Li Wenliang forecasting tournament on Metaculus, or maybe get second. (My screen name currently shows second on the leaderboard, but that’s due to an unresolved glitch: one of the question resolutions depends on a strongly delayed source.)
With around 52 questions, this was the largest forecasting tournament on the virus. It ran from late February until early June.
I learned a lot during the tournament. Besides claiming credit, I want to share some observations and takeaways from this forecasting experience, inspired by Linch Zhang’s forecasting AMA:
Some things I was particularly wrong about:
Some things I was particularly right about:
(I have stopped following the developments closely by now.)
I know it might not be what you're looking for, but congratulations!
+1 to the congratulations from JP! I may have mentioned this before, but I considered your forecasts and comments for covidy questions to be the highest-quality on Metaculus, especially back when we were both very active.
You may not have considered it worth your time in the end, but I still think it's good for EAs to do things that on the face of it seem fairly hard, and develop better self models and models of the world as a result.
This was a great writeup, thanks for taking the time to make it. Congrats on the contest, too!
I'm sorry to hear your experience was stressful. Do you intend to go back to Metaculus in a more relaxed way? I know some users restrict themselves to a subset of topics, for example.
Can you provide some links on the latest IFR estimates? A quick Google search leads me to the same 0.5% ballpark.
I'm not following the developments anymore. I could imagine that the IFR is now lower than it used to be in April because treatment protocols have improved.
[Is pleasure ‘good’?]
What do we mean by the claim “Pleasure is good”?
There’s an uncontroversial interpretation and a controversial one.
Vague and uncontroversial claim: When we say that pleasure is good, we mean that all else equal, pleasure is always unobjectionable, and often it is desired.
Specific and controversial claim: When we say that pleasure is good, what we mean is that, all else equal, pleasure is an end we should be striving for. This captures points like:
People who say “pleasure is good” claim that we can establish this by introspection about the nature of pleasure. I don’t see how one could establish the specific and controversial claim from mere introspection. After all, even if I personally valued pleasure in the strong sense (I don’t), I couldn’t, with my own introspection, establish that everyone does the same. People’s psychologies differ, and how pleasure is experienced in the moment doesn’t fully determine how one will relate to it. Whether one wants to dedicate one’s life (or, for altruists, at least the self-oriented portions of one's life) to pursuing pleasure depends on more than just what pleasure feels like.
Therefore, I think pleasure is only good in the weak sense. It’s not good in the strong sense.
Another argument that points to "pleasure is good" is that people and many animals are drawn to things that give them pleasure, and that people generally communicate about their own pleasurable states as good. Given a random person off the street, I'm willing to bet that after introspection they will suggest that they value pleasure in the strong sense. So while this may not be universally accepted, I still think it could hold weight.
Also, a symmetric statement can be said regarding suffering, which I don't think you'd accept. People who say "suffering is bad" claim that we can establish this by introspection about the nature of suffering.
From reading Tranquilism, I think that you'd respond to these as saying that people confuse "pleasure is good" with an internal preference or craving for pleasure, while suffering is actually intrinsically bad. But taking an epistemically modest approach would require quite a bit of evidence for that, especially as part of the argument is that introspection may be flawed.
I'm curious as to how strongly you hold this position. (Personally, I'm totally confused here, but I lean toward the strong sense of "pleasure is good" while thinking that overall pleasure holds little moral weight.)
Another argument that points to "pleasure is good" is that people and many animals are drawn to things that give them pleasure
It's worth pointing out that this association isn't perfect. See [link] and [link] for some discussion. Tranquilism allows that if someone is in some moment neither drawn to (craving) (more) pleasurable experiences nor experiencing pleasure (or as much as they could be), this isn't worse than if they were experiencing (more) pleasure. If more pleasure is always better, then contentment is never good enough; but to be content is to be satisfied, i.e., to feel that things are good enough (or at least not to feel that they aren't). Of course, this is in the moment, and not necessarily a reflective judgement.
I also approach pleasure vs suffering in a kind of conditional way, like an asymmetric person-affecting view, or "preference-affecting view":
I would say that something only matters if it matters (or will matter) to someone, and an absence of pleasure doesn't necessarily matter to someone who isn't experiencing pleasure, and certainly doesn't matter to someone who does not and will not exist, and so we have no inherent reason to promote pleasure. On the other hand, there's no suffering unless someone is experiencing it, and according to some definitions of suffering, it necessarily matters to the sufferer. (A bit more on this argument here, but applied to good and bad lives.)
I agree that pleasure is not intrinsically good (i.e. I also deny the strong claim). I think it's likely that experiencing the full spectrum of human emotions (happiness, sadness, anger, etc.) and facing challenges are good for personal growth and therefore improve well-being in the long run. However, I think that suffering is inherently bad, though I'm not sure what distinguishes suffering from displeasure.
[I’m an anti-realist because I think morality is underdetermined]
I often find myself explaining why anti-realism is different from nihilism / “anything goes.” I wrote lengthy posts in my sequence on moral anti-realism (2 and 3) partly about this point. However, maybe the framing “anti-realism” is needlessly confusing because some people do associate it with nihilism / “anything goes.” Perhaps the best short explanation of my perspective goes as follows: I’m happy to concede that some moral facts exist (in a comparatively weak sense), but I think morality is underdetermined.
This means that beyond the widespread agreement on some self-evident principles, expert opinions wouldn’t converge even if we had access to a superintelligent oracle. Multiple options will be defensible, and people will gravitate to different attractors in value space.
I think if you concede that some moral facts exist, it might be more accurate to call yourself a moral realist. The indeterminacy of morality could be a fundamental feature, allowing for many more acts to be ethically permissible (or no worse than other acts) than with a linear (complete) ranking. I think consequentialists are unusually prone to try to rank outcomes linearly.
I read this recently, which describes how moral indeterminacy can be accommodated within moral realism, although it was kind of long for what it had to say. I think expert agreement (or ideal observers/judges) could converge on moral indeterminacy; they could agree that we can't know how to rank certain options and further that there's no fact of the matter.
[Are underdetermined moral values problematic?]
If I think my goals are merely uncertain, but in reality they are underdetermined and the contributions I make to shaping the future will be driven, to a large degree, by social influences, ordering effects, lock-in effects, and so on, is that a problem?
I can’t speak for others, but I’d find it weird. I want to know what I’m getting up for in the morning.
On the other hand, because it makes it easier for the community to coordinate and pull things in the same directions, there's a sense in which underdetermined values are beneficial.
[Moral uncertainty and moral realism are in tension]
Is it ever epistemically warranted to have high confidence in moral realism while being morally uncertain not only about minor details of a specific normative-ethical theory, but also between entire theories?
I think there's a tension there. One possible reply might be the following. Maybe we are confident in the existence of some moral facts, but multiple normative-ethical theories can accommodate them. Accordingly, we can be moral realists (because some moral facts exist) and be morally uncertain (because there are many theories to choose from that accommodate the little bits we think we know about moral reality).
However, what do we make of the possibility that moral realism could be true only in a very weak sense? For instance, maybe some moral facts exist, but most of morality is underdetermined. Similarly, maybe the true morality is some all-encompassing and complete theory, but humans might be forever epistemically closed off to it. If so, then, in practice, we could never go beyond the few moral facts we already think we know for sure.
Assuming a conception of moral realism that is action-relevant for effective altruism (e.g., because it predicts reasonable degrees of convergence among future philosophers, or makes other strong claims that EAs would be interested in), is it ever epistemically warranted to have high confidence in that, and be open-endedly morally uncertain?
Another way to ask this question: If we don't already know/see that a complete and all-encompassing theory explains many of the features related to folk discourse on morality, why would we assume that such a complete and all-encompassing theory exists in a for-us-accessible fashion? Even if there are, in some sense, "right answers" to moral questions, we need more evidence to conclude that morality is not vastly underdetermined.
For more detailed arguments on this point, see section 3 in this post.
[When thinking about what I value, should I take peer disagreement into account?]
Consider the question “What’s the best career for me?”
When we think about choosing careers, we don’t update toward the career choice of the smartest person we know or the person who has thought the most about their career. Instead, we seek out people who have approached career choice with a similar overarching goal/framework (in my case, 80,000 Hours is a good fit), and we look toward the choices of people with similar personalities (in my case, I notice a stronger personality overlap with researchers than with managers, operations staff, or people pursuing earning to give).
When it comes to thinking about one’s values, many people take peer disagreement very seriously.
I think that can be wise, but it shouldn’t be done unthinkingly. I believe that the quest to figure out one’s values shares strong similarities with the quest to figure out one’s ideal career. Before deferring to others in one's deliberations, I recommend making sure that those others are asking the same questions (not everything that comes with the label “morality” is the same) and that they are psychologically similar to you in the ways that seem fundamental to what you care about as a person.