Comments

Thanks! This post caused me to read 'Beware Systemic Change', which I hadn't read before, and I'm glad I did.

I know this post isn't about that piece specifically, but I had a reaction and I figured 'why not comment here? It's mostly to record my own thoughts anyway.'

It seems like Scott is associating a few different distinctions with the titular one, (1) 'systemic vs. non-systemic'.

These are: (2) not necessarily easy to measure vs. easy to measure, and (3) controversial ('man vs. man') vs. universally thought of as good or neutral.

These are related but different. I think the thing that actually produces the danger Scott is worried about is (3). (Of course you could worry that movement on (2) will turn EA into an ineffectual, wishy-washy movement, but that doesn't seem to be Scott's main concern.)

I asked myself: to what extent has EA (as it promised to in 2015) moved toward systemic change? Toward change that's not necessarily easy to measure? Toward controversial change?

80K's top priority problem areas (causes) are:

  • AI safety (split into technical safety and policy)
  • Biorisk
  • Building EA
  • Global priorities research
  • Improving institutional decision-making
  • Preventing extreme climate change
  • Preventing nuclear war

These are all longtermist causes. Then there are the other two very popular EA causes:

  • Ending factory farming
  • Global health

Of the issues on this list, only the AI policy part of AI safety and building EA seem to be particularly controversial. I say AI policy is controversial because, as practiced by EA, it favors the US over China, and presumably people in China would think that's bad; building EA seems controversial because some people think EA is deeply confused/bad (though it's not as controversial as the stuff Scott mentions in the post, I think). But 'building EA' was always a cause within EA, so only the investment in AI policy represents a move toward the controversial since Scott's post.

(Though maybe I'm underestimating the controversialness of things like ending factory farming -- obviously some people think that'd be positively bad, but I'd guess the objections are more often of the 'this isn't the best use of resources' variety.)

Of the problems listed above, only ending factory farming and improving global health are particularly measurable. So it does seem like we've moved toward the less-easily-measured (probably with the popularization of longtermism).

Are any of the above 'systemic'? Maybe Scott associated this concept with the left halves of distinctions (2) and (3) because it's harder to tell what's systemic vs. not. But I'd say, again, that the AI policy half of AI safety, building EA, and improving institutional decision-making are systemic issues. (Though maybe systemic interventions will be needed to address some of the others, e.g., nuclear security.)

So it's kind of interesting that even though EA promised to care about systemic issues, it mostly didn't expand into them, and only really expanded into the less easily measurable. Hopefully Scott would also be heartened that the only substantial expansion into the realm of the controversial seems to be AI policy.

If that's right as a picture of EA, why would that be? Maybe because although EA has tried to tackle a wider range of kinds of issues, it's still pretty mainstream within EA that working on politically controversial causes is not particularly fruitful. Or maybe because people are better than Scott seems to think at taking into account the possibility of being on the wrong side of things when directly estimating the EV of working on causes, which has resulted in shying away from controversial issues.

In part 2 of Scott's post there's the idea that if we pursue systemic change we might turn into something like the Brookings Institution, and that that would be bad because we'd lose our special moral message. I'm a little unsure what the special moral message is that Scott is referring to, the one that would necessarily differ between Brookings-EA and bednet-EA, but I think it has something to do with stopping periodically and saying, "Wait, are we getting distracted? Do we really think this is the most good we can do with $2,000, when we could with high confidence save someone's life by giving it to AMF instead?" At least, that's the version of the special moral message that I agree is really important and distinctive.
