A few years ago, some EAs dismissed interventions like political action on the grounds that they had no detailed cost-effectiveness analysis.

Section 5 of this post wonders why some of these same people are now willing to promote longtermist interventions that have no such analyses (or whose analyses are, let's say, light on the details):

Did effective altruists discover some powerful new arguments against cost-effectiveness analysis that they were previously unaware of? Did they simply re-evaluate the strength of arguments against cost-effectiveness analysis that they had previously rejected? Perhaps. It would be good to hear more about what these arguments are, and where and when they became influential. Otherwise, critics may have some grounds to suspect the explanation for effective altruists' changing attitudes towards cost-effectiveness analysis is sociological rather than philosophical.

I'm one of the flip-floppers myself, and my own best answer is that I re-evaluated the strength of the arguments. But I completely agree with Thorstad that the whole situation smells like motivated reasoning.

I remember Holden Karnofsky once explaining it differently: he started out with a narrow focus because that's what was tractable at the time, and broadened his focus when he learned more. He never insisted on cost-effectiveness analysis from a philosophical standpoint, only a practical standpoint. (Apologies if I'm mischaracterizing his views.)

Interested to hear others' thoughts.

Comments

Hi smountjoy, I couldn't find the link to David Thorstad's post in this post.

Oops, thank you! I thought I had selected linkpost, but maybe I unselected without noticing. Fixed!

Longtermism and politics both seem like "error bars so wide that expected value theory is probably super useless or an excuse for motivated reasoning or both". But I don't think this is damning toward EA, because the downsides of a misfire in a brittle theory of change don't seem super important for most longtermist interventions (your pandemic preparedness scheme might accidentally abolish endemic flu, so your miscalculation about the harm or likelihood of a way-worse-than-covid pandemic is sort of fine). Whereas in politics, the brittleness of the theory of change means you can be well-meaningly harmful, which is kinda the central point of anything involving "politics" at all.

Certainly this is not robust to all longtermist interventions, but I find it very convincing for the average case.

AI safety has important potential backfire risks, like accelerating capabilities (or causing others to, intentionally or not), worsening differential progress, and backfire s-risks. I know less about biorisk, but there are infohazards there, so bringing more attention to biorisk can also increase the risk of infohazards leaking or being actively sought out.

I think a separate but plausibly better point is that the "memetic gradient" is characterized in known awful ways for politics, and many longtermist theories of change offer an opportunity for something better. If you pursue a political theory of change, you're consenting to a relentless onslaught of people begging you to make your epistemics worse on purpose. This is a perfectly good reason not to sign up for politics. The longtermist ecosystem is not immune to similar issues, but it certainly seems like there's a fighting chance, or that it's the least bad of all options.
