lukeprog

Another historical point I'd like to make is that the common narrative about EA's recent "pivot to longtermism" seems mostly wrong to me, or at least more partial and gradual than it's often presented to be. All four leading strands of EA — (1) neartermist human-focused stuff, mostly in the developing world, (2) animal welfare, (3) long-term future, and (4) meta — have been major themes in the movement since its relatively early days, including at the very first "EA Summit" in 2013 (see here), and IIRC for at least a few years before then.

What's his guess about how "% of humans enslaved (globally)" evolved over time? See e.g. my discussion here.

How many independent or semi-independent abolitionist movements were there around the world during the period of global abolition, vs. one big one that started with Quakers+Britain and then was spread around the world primarily by Europeans? (E.g. see footnote 82 here.)

Re: more neurons = more valenced consciousness, does the full report address the hidden qualia possibility? (I didn't notice it at a quick glance.) My sense was that people who argue for more neurons = more valenced consciousness are typically assuming hidden qualia, but your objections involving empirical studies are presumably assuming no hidden qualia.

I really appreciate this format and would love to see other inaccurate articles covered in this way (so long as the reviewer is intellectually honest, of course).

I suspect this is because there isn't a globally credible/legible consensus body generating or validating the forecasts, akin to the role the IPCC plays for climate forecasts, which are made on even longer time horizons.

Cool, I might be spending a few weeks in Belgrade sometime next year! I'll reach out if that ends up happening. (Writing from Dubrovnik now, and I met up with some rationalists/EAs in Zagreb ~1mo ago.)

(cross-posted)

Re: Shut Up and Divide. I haven't read the other comments here but…

For me, effective-altruism-like values are mostly second-order, in the sense that my revealed behavior shows that much of the time I don't want to help strangers, animals, future people, etc. But I think I "want to want to" help strangers, and sometimes the more goal-directed, rational side of my brain wins out and I do the thing consistent with my second-order desires, i.e. something to help strangers at personal sacrifice to myself (though I do this less than e.g. Will MacAskill). But I don't really detect in myself a symmetrical second-order want to NOT want to help strangers. So that's one thing "shut up and multiply" has over "shut up and divide," at least for me.

That said, I realize now that I'm often guilty of ignoring this second-orderness when e.g. making the case for effective altruism. I will often appeal to my interlocutor's occasional desire to help strangers and suggest they generalize it, but I don't symmetrically appeal to their clearer and more common disinterest in helping strangers and suggest they generalize THAT. To be more honest and accurate while still making the case for EA, I should be appealing to their second-order desires, though of course that's a more complicated conversation.

FWIW I generally agree with Eli's reply here. I think maybe EAG should 2x or 3x in size, but I'd lobby for it to not be fully open.
