I think at least Brian Tomasik cares about this.
If your suspicion is correct, then that's pretty damning for the WAW movement, unless climate change prevention is not the highest-leverage influence against WAS (a priori, it seems unlikely that climate change prevention would have the highest positive influence).
Also relevant: The Cost of Kids:
The present-value cost of having a child may be at least $300K (measured in US dollars as of roughly 2012) when both direct expenditures and opportunity costs are considered. This shows the value of using the most effective birth-control methods, like the implant and vasectomy. That said, some people may find having children very important to their wellbeing, and in such cases, having children may be worth the cost.
You're right. I had been thinking only about the mean on the distribution over discount rates, not the number of affected beings. Thanks :-)
Argument against longtermism:
Longtermism seems to rely on a zero discount rate for the value of future lives. But per moral uncertainty, we should probably have a probability distribution over discount rates. This distribution is very likely skewed towards positive discount rates: there are many plausible reasons why future lives might be worth less than current lives, but very few (none?) why they should matter more, ceteris paribus.
Therefore, the expected discount rate is positive, and longtermism loses some of its bite.
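As a toy illustration (the credences below are made up for the example, not a claim about anyone's actual distribution), even a modest credence in positive rates makes the expected rate positive:

```python
import math

# Hypothetical credence distribution over annual discount rates for future lives.
credences = {0.00: 0.4,   # zero rate: future lives count fully
             0.01: 0.3,
             0.03: 0.2,
             0.10: 0.1}
assert abs(sum(credences.values()) - 1.0) < 1e-12

# Expected discount rate under moral uncertainty.
expected_rate = sum(r * p for r, p in credences.items())
print(f"expected discount rate: {expected_rate:.3f}")   # → 0.019

# Weight of a life 100 years out, discounted at the expected rate.
print(f"discount factor at t=100: {math.exp(-expected_rate * 100):.3f}")   # → 0.150
```

Even 40% credence in a zero rate doesn't rescue the zero-rate conclusion here, since the positive-rate scenarios pull the expectation above zero.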
My main inspiration is the chapter on the practical implications of moral uncertainty in MacAskill, Bykvist & Ord 2020. I remember them discussing very similar implications, but not this one – why not?
If you buy it, there is a neat continuity from the problems with current social media and AI alignment, explained in some detail in What Failure Looks Like.
It’s already much easier to pursue easy-to-measure goals, but machine learning will widen the gap by letting us try a huge number of possible strategies and search over massive spaces of possible actions. That force will combine with and amplify existing institutional and social dynamics that already favor easily-measured goals.
Just beware: I've gotten feedback from two different people that it's difficult to understand.
Greenberg 2018 lists and evaluates forecasting scoring rules. It might be worth researching more complex metrics that additionally take into account, e.g.:
This could be useful for setting the incentives right in forecasting tournaments. Prediction markets solve the first point via logarithmic subsidising.
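"Logarithmic subsidising" presumably refers to Hanson's logarithmic market scoring rule (LMSR), where the market maker's subsidy is its bounded worst-case loss of b·log(n) for n outcomes. A minimal sketch of the cost function and prices:

```python
import math

def lmsr_cost(q, b=100.0):
    """LMSR cost function C(q) = b * log(sum_i exp(q_i / b));
    b controls liquidity and bounds the subsidy at b * log(len(q))."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def price(q, i, b=100.0):
    """Instantaneous price of outcome i: exp(q_i/b) / sum_j exp(q_j/b)."""
    denom = sum(math.exp(qj / b) for qj in q)
    return math.exp(q[i] / b) / denom

# A trader buying 10 shares of outcome 0 in a fresh two-outcome market
# pays the difference in cost, and moves the price of outcome 0 above 0.5.
q_before = [0.0, 0.0]
q_after = [10.0, 0.0]
print(lmsr_cost(q_after) - lmsr_cost(q_before))  # amount the trader pays
print(price(q_after, 0))                          # new price of outcome 0
```

Prices always sum to 1 and move smoothly with trades, which is what lets the market maker subsidise information revelation at a known, bounded cost.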
How does the resolution and calibration of forecasts vary by the forecasts’ “range” (e.g., whether the forecast is for an event 6 months away vs 3 years away vs 20 years away)?
I have an (unfinished) essay on the topic using Metaculus and PredictionBook data. The relation between range and accuracy is negative within forecasts on one specific question. Specifically, the linear regression of Brier score on range in days is 0.0019x + 0.0105. Of course, I'll look into better statistical analyses if I find time.
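For illustration, here is roughly how such a regression could be run. The data below is synthetic (the real analysis uses Metaculus/PredictionBook resolutions), so the coefficients won't match the ones above:

```python
import numpy as np

def brier(p, outcome):
    """Brier score of a binary forecast p in [0, 1]: (p - outcome)^2; lower is better."""
    return (p - outcome) ** 2

rng = np.random.default_rng(1)

# Synthetic stand-in for resolved forecasts, built so that forecast noise
# grows with range: longer-range forecasts drift further from the true probability.
n = 1000
range_days = rng.uniform(1, 1000, n)
true_p = rng.uniform(0.05, 0.95, n)
forecast = np.clip(true_p + rng.normal(0.0, 0.0005, n) * range_days, 0.01, 0.99)
outcome = (rng.uniform(0, 1, n) < true_p).astype(float)

scores = brier(forecast, outcome)
# Ordinary least squares of Brier score on range in days, the form of the fit above.
slope, intercept = np.polyfit(range_days, scores, 1)
print(f"Brier ≈ {slope:.2e} * range_days + {intercept:.3f}")
```

Restricting the regression to forecasts on one specific question, as in the analysis above, avoids question-difficulty confounding the range effect; pooling across questions would need something like question fixed effects.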
Well, there's the Ragnarök question series, which seems to fit what you're looking for.