Yeah, the view that utilities aren't comparable has more legs on preference-satisfactionism than it does on hedonism. On the face of it, it is quite weird to say that utilities, in the hedonistic sense, are not comparable. Surely we can compare the utility of a man being tortured with that of a man enjoying The Sopranos. DALYs, QALYs and WELLBYs are utility metrics that make utility comparable across people.
I would be interested to see any evidence on whether citizen knowledge has increased or not since social media emerged. People often assert this without arguing for it, and the long-term trend isn't that clear.
This is a good question. In the absence of other explanations of market failure, this is an update away from the view that direct anti-poverty interventions have high ROI. While some RCTs of anti-poverty programmes, like the Graduation approach, might show a big ROI, perhaps the market actors know not to trust these RCTs. Maybe they have an implicit view on the external validity of these studies, which has been demonstrated to be low.
This is true, but they do contain low-bar elements, such as the $1.90-per-day poverty line. He also clearly thinks they are a bad way to think about development. I think it would be better if economists and EAs focused on an expanded GDP metric that includes income growth as well as other important contributors to wellbeing.
2) I don't think this refutes Johannes' point, which is that the headline figures claimed in the Impact Lab write-up seem selected to get eye-catching numbers. Although they also run RCP4.5, they report the effects of RCP8.5 on the website and in the abstract. The mean effect is about a sixth smaller under RCP4.5.
To put RCP8.5 in context: under that scenario, energy demand nearly quadruples, driven mainly by coal.
I do worry that this sort of work underestimates our ability to adapt. If energy demand does quadruple, there would be a lot more air conditioning to go round, and the burning of all that coal would have driven a lot of income growth.
3) From the copy I see, I think you are reporting Figure 7a, not 9a?
The SDGs seem to me to be antithetical to effective altruism. The SDGs:
EAs should be focused on the question of how governments can most cost-effectively increase social welfare (broadly conceived) in their own countries. If we do this, we will meet all of the arbitrary "low bar" goals anyway. For discussion of national development vs kinky development, see some of Lant Pritchett's blogs.
I tend to agree. This feels a bit like a "be the change you want to see in the world" thing. Ordinary communication norms would push us towards just using verbal claims like 'likely' but for the reasons you mention, I pretty strongly think we should quantify and accept any short-term weirdness hit.
I suppose they're roughly in line with my previous best guess. On the basis of the Annan and Hargreaves paper, on the median BAU scenario the chance of >6K was about 1%. I think this is probably a bit too low, because the estimates that ground it were not meant to systematically sample uncertainty about ECS. On the WCRP estimate, the chance of >6K is about 5%. (Annan and Hargreaves are co-authors on the WCRP assessment, so they have also updated.)
One has to take account of uncertainty about emissions scenarios as well.
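As a sketch of what marginalising over emissions scenarios looks like, here is the law of total probability applied to tail warming. All of the numbers below are hypothetical placeholders for illustration, not estimates from the paper or from my model:

```python
# Illustrative only: combining P(>6K warming) across emissions scenarios.
# Both the scenario probabilities and the conditional tail probabilities
# are made-up placeholders, not figures from any published estimate.

# P(scenario): subjective probabilities over emissions pathways (sum to 1)
scenario_probs = {"low": 0.3, "medium": 0.5, "high": 0.2}

# P(>6K | scenario): chance of exceeding 6K warming under each pathway
exceed_6k_given = {"low": 0.001, "medium": 0.02, "high": 0.10}

# Law of total probability: P(>6K) = sum_s P(s) * P(>6K | s)
p_exceed_6k = sum(scenario_probs[s] * exceed_6k_given[s] for s in scenario_probs)
print(f"P(>6K) = {p_exceed_6k:.4f}")  # 0.0303 with these placeholder numbers
```

The point is just that an unconditional tail estimate depends as much on the weights you place on emissions pathways as on the climate-sensitivity distribution itself.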
Thanks for posting this. This does seem to correct a lot of the common stated problems with estimates of S by incorporating all the lines of evidence. It'll be interesting to see how this is received in AR6.
I have updated the Guesstimate model from my How Hot Will it Get piece to reflect the findings here. The scenarios are labelled as the WCRP estimates of climate sensitivity.
Overall, I don't view this as especially good news.
From what I have read of their assumptions, the Baseline case seems more plausible than the robustness checks: the latter include, for example, an assumption of a uniform prior over S, which seems wrong. This makes the high chance of extreme warming posited by Wagner and Weitzman less likely. Still, the chance of >6K is far too high.
Defining the line of what counts as a severe injustice, a high-stakes error, or a violation of a basic right is not done precisely in the literature and is, in my view, impossible to do in a theoretically satisfying way. I think this is true of all nonconsequentialist thresholds. The point of nonconsequentialism is to avoid having to say how good something is, which makes it difficult or impossible to know how to trade off different nonconsequentialist elements against each other. What do I do if I have to choose between the right to free speech and the economic minimum? If I don't know how good these things are, I don't see how I can compare and weigh them. Equally, what do I do if I have a 10% chance of violating someone's right to free association and a 20% chance of violating someone's right to an economic minimum? If you don't know how good these outcomes are, probability-weighting them isn't much use when you're deciding how to act.
Ultimately, the boundary of what counts as a high-stakes error is defined fuzzily and arbitrarily.