Ardenlk

Comments

What are you grateful for?

I'm grateful for all the people in the EA community who write digests, newsletters, updates, highlights, research summaries, abstracts, and other vehicles that help me keep abreast of all the various developments.

I'm also grateful for there being so much buzzing activity in EA that such vehicles are so useful/essential!

Where are you donating in 2020 and why?

I am not that confident this was the right decision (and will be curious about people's views, though I can't do anything about it now), but I already gave most of this year's 10% of my income (as per my GWWC pledge) to the 'Biden Victory Fund.' (The rest went to the Meta Fund earlier in the year.) I know Biden's campaign was the opposite of neglected, but I thought the importance and urgency of replacing Trump as the US president swamped that consideration in the end (I think having Republicans in the White House, and especially Trump, is very bad for the provision of global public and coordination-reliant goods). I expect to go back to giving to non-political causes next year.

I am still considering giving to the Georgia Senate race with some of my budget for next year, because it seems so high-'leverage' for US electoral reform, which would (I think) make it easier for Democrats to get elected in the future and (I hope) make the US's democracy function better long-term. For example, there's an electoral reform bill that seems much more likely to pass if Democrats control the Senate.

The quality of these choices depends on substantive judgements that in US politics Democrats make better choices for the world than Republicans, and that continued US global leadership would be better than the alternative with regard to things like climate change, AI, and biorisks. I think both of these things are true, but could be wrong!

What actually is the argument for effective altruism?

I think adding a maximizing premise like the one you mention could work to assuage these worries.

How have you become more (or less) engaged with EA in the last year?

Thanks, this is super helpful -- context is that I wanted to get a rough sense of how doable this level of "getting up to speed" is for people.

How have you become more (or less) engaged with EA in the last year?

Hey Michael, thanks for detailing this. Do you have a sense of how long this process took you, approximately?

80,000 Hours user survey closes this Sunday

Thanks for filling out the survey and for the kind words!

Asking for advice

I wonder whether other people also like having deadlines attached to requests for their feedback, or specific dates suggested for meetings? Sometimes I prefer that someone ask for feedback within a week rather than within 6 months (or "as soon as is convenient"), because it forces me to get it off my to-do list. Though it's the best of both worlds if they also indicate that it's ok if I can't do it in that time.

EA reading list: Scott Alexander

Thanks! This post caused me to read 'Beware Systemic Change', which I hadn't before, and I'm glad I did.

I know this post isn't about that piece specifically, but I had a reaction and I figured 'why not comment here? It's mostly to record my own thoughts anyway.'

It seems like Scott is associating a few different distinctions with the titular one, (1) 'systemic vs. non-systemic'.

These are: (2) not necessarily easy to measure vs. easy to measure, and (3) controversial ('man vs. man') vs. universally thought of as good or neutral.

These are related but different. I think the thing that actually produces the danger Scott is worried about is (3). (Of course you could worry that movement on (2) will turn EA into an ineffectual, wishy-washy movement, but that doesn't seem to be as much Scott's concern.)

I asked myself: to what extent has EA (as it promised to in 2015) moved toward systemic change? Toward change that's not necessarily easy to measure? Toward controversial change?

80K's top priority problem areas (causes) are:

  • AI safety (split into technical safety and policy)
  • Biorisk
  • Building EA
  • Global priorities research
  • Improving institutional decision-making
  • Preventing extreme climate change
  • Preventing nuclear war

These are all longtermist causes. Then there are the other two very popular EA causes:

  • Ending factory farming
  • Global health

Of the issues on this list, only the AI policy bit of AI safety and building EA seem to be particularly controversial. I say AI policy is controversial because, as practiced by EA, it favors the US over China, and presumably people in China would think that's bad; and building EA seems controversial because some people think EA is deeply confused/bad (though it's not as controversial as the stuff Scott mentions in the post, I think). But 'building EA' was always a cause within EA, so only the investment in AI policy represents a move toward the controversial since Scott's post.

(Though maybe I'm underestimating the controversialness of things like ending factory farming -- obviously some people think that'd be positively bad... but I'd guess that objection is more often of the 'this isn't the best use of resources' variety of positive badness.)

Of the problems listed above, only ending factory farming and improving global health are particularly measurable. So it does seem like we've moved toward the less easily measured (probably with the popularization of longtermism).

Are any of the above 'systemic'? Maybe Scott associated this concept with the left halves of distinctions (2) and (3) because it's harder to tell what's systemic vs. not. But I guess I'd say that, again, the AI policy half of AI safety, building EA, and improving institutional decision-making are systemic issues. (Though maybe systemic interventions will be needed to address some of the others, e.g., nuclear security.)

So it's kind of interesting that even though EA promised to care about systemic issues, it mostly didn't expand into them, and only really expanded into the less easily measurable. Hopefully Scott would also be heartened that the only substantial expansion into the realm of the controversial seems to be AI policy.

If that's right as a picture of EA, why would that be? Maybe because, although EA has tried to tackle a wider range of kinds of issues, it's still pretty mainstream within EA that working on politically controversial causes is not particularly fruitful. Or maybe because people are better than Scott seems to think at taking into account the possibility of being on the wrong side of an issue when directly estimating the EV of working on a cause, which has resulted in shying away from controversial issues.

In part 2 of Scott's post there's the idea that if we pursue systemic change we might turn into something like the Brookings Institution, and that that would be bad because we'd lose our special moral message. I feel a little unsure what the special moral message is that Scott refers to that would necessarily differ between Brookings-EA and bednet-EA, but I think it has something to do with stopping periodically and saying "Wait, are we getting distracted? Do we really think that this thing is the most good we can do with $2,000, when we could with high confidence save someone's life by giving it to AMF instead?" At least, that's the version of the special moral message that I agree is really important and distinctive.
