
This question is about how much I should defer to EA on which issues matter most. Is EA's turn to longtermism a good reason, in itself, for me to have turned to longtermism?

One story, the most flattering to EA, goes like this:

"EA is unusually good at 'epistemics' / thinking about things, because of its culture and/or who it selects for; and also the community isn't corrupted too badly by random founder effects and information cascades; and so the best ideas gradually won out among those who were well-known for being reasonable, and who spent tons of time thinking about the ideas. (E.g. Toby Ord convincing Will MacAskill, and a bit later Holden Karnofsky joining them.)"

Of course, there are other stories that could be told, to do with 'who worked in the same building as whom' and 'what memes were rife in the populations that EA targeted outreach to' and 'what random contingent things happened, e.g. a big funder flipping from global health to animals and creating 10 new institutes' and 'who was on Felicifia back in the day' and 'did anyone actively try to steer EA this way'. Ideally, I'd like to run a natural experiment where we go back in time to 2008, have MacAskill and Ord and Bostrom all work in different countries rather than all in Oxford, and see what changes. (Possibly Peter Singer is a real-life instance of this natural experiment, akin to how Australia's marsupials, birds, and other fauna evolved in isolation from the rest of the world after the continent separated from Gondwana in the late Cretaceous and Palaeogene. Not that Peter is that old.)

But maybe looking at leadership is the wrong way around, and it's the rank-and-file members who led the charge. I'd be very interested to know if so. (One thing I could look at is 'how much did the sentiment on this forum lag or lead the messaging from the big orgs?')

I understand that EA had x-risk elements from the very beginning (e.g. Toby Ord), but it was only in the late 2010s that x-risk became the dominant strain. Most of us joined the movement while this longtermist turn was already well underway — I took the GWWC pledge in 2014 but checked out of EA for a few years afterwards, returning in 2017 to find x-risk far more dominant and the movement 2 to 3 times bigger. Having no direct experience of the shift, we can only ask our elders how it happened, and from their accounts decide 'to what degree was the shift caused by stuff that seems correlated with believing true things?'. It would be a shame if anecdata about the shift were lost to cultural memory, hence this question.

Answers

https://www.openphilanthropy.org/research/three-key-issues-ive-changed-my-mind-about/

Came here to cite the same thing! :) 

Note that Dustin Moskovitz says he's not a longtermist, and "Holden isn't even much of a longtermist."


So my intuition is that the two most important updates EA has undergone are "it's not that implausible that par-human AI is coming in the next couple of decades" and "the world is in fact dropping the ball on this quite badly, in the sense that maybe alignment isn't super hard, but to a first approximation no one in the field has checked."

(Which is both an effect and a cause of updates like "maybe we can figure stuff out in spaces where the data is more indirect and hard-to-interpret", "EA should be weirder", "EA should focus more on research and intellectual work and technical work", etc.)

But I work in AI x-risk and naturally pay more attention to that stuff, so maybe I'm missing other similarly-deep updates that have occurred. Like, maybe there was a big update at some point about the importance of biosecurity? My uninformed guess is that if we'd surveyed future EA leaders in 2007, they already would have been on board with making biosecurity a top global priority (if there are tractable ways to influence it), whereas I think this is a lot less true for AI alignment.

My sense is that it was driven largely by a perception of faster-than-expected progress in deep learning, along with (per Carl's comment) a handful of key people prominently becoming more concerned about it.

There might also just have been a natural progression. Toby Ord was always concerned about it, and 80,000 Hours made it a focus from very early on. At one relatively early point I had the impression that they treated shifting someone from almost any career path into AI-related work as their primary metric of success. I couldn't justify that impression now, and I suspect it's an unfair one, but I note it mainly as an anecdote that someone was able to form that impression well before the 'longtermist turn'.

But maybe looking at leadership is the wrong way around, and it's the rank-and-file members who led the charge.

Speaking from my geographically distant perspective: I definitely saw it as a leader-led shift rather than one coming from the rank and file. There was always a minority of rank-and-file members, coming from Less Wrong, who saw AI risk as supremely important, but my impression was that this position was disproportionately common in the (then) Centre for Effective Altruism, and there was occasional chatter on Facebook (circa 2014?) that some people there saw the global poverty cause as a way to funnel people towards AI risk.

I think the AI-risk faction started to assert itself more strongly in EA from about 2015, successfully persuading other leading figures one by one over the following years (e.g. Holden in 2016, as linked by Carl). But by then I wasn't following EA closely, and I don't have a good sense of the timeline.

Comments

I came here after you did and don't have an answer, but I wanted to comment on this:

One story, the most flattering to EA, goes like this:

"EA is unusually good at 'epistemics' / thinking about things, because of its culture and/or who it selects for; and also the community isn't corrupted too badly by random founder effects and information cascades; and so the best ideas gradually won out among those who were well-known for being reasonable, and who spent tons of time thinking about the ideas. (E.g. Toby Ord convincing Will MacAskill, and a bit later Holden Karnofsky joining them.)"

  1. Can anyone give any outside-view reason to think EA is "unusually good at 'epistemics' / thinking about things", or that "the community isn't corrupted too badly by random founder effects and information cascades"?

  2. Pet peeve: "spent tons of time thinking about X" is a phrase I encounter often in EA, and for some reason it's taken to mean "have reached conclusions which are more likely to be true than those of relevant outside experts". I think time spent thinking about something is very much not indicative of being right about it. MacAskill and Ord, in my view, get some credit for their ideas as they are actual philosophers with the right qualifications for this job - not because they spent lots of time on it.

I'm not writing this as criticism of OP, as the story was given as a maximally charitable take on EA. What I'm saying is I think that story is extremely unrealistic.

Can anyone give any outside-view reason to think EA is "unusually good at 'epistemics' / thinking about things" [...]?

Here are some possible outside-view reasons, not saying any of them is necessarily true (though I suspect some probably are):

  • Maybe EAs (on average) have higher educational attainment than the population at large, and having higher educational attainment is correlated with better epistemics.
  • Maybe EAs write and read more about epistemics and related topics than the population at large, and ...
  • Maybe EAs would score better on a battery of forecasting questions than the population at large, and ...
  • Maybe EAs are higher earners than the population at large, and ...
  • Maybe EAs read more philosophy than the population at large, and ...

Of course it depends which group you compare to, and which thing people are meant to be thinking about.

Thanks. I was thinking more of the scientific establishment, or other professional communities and advocacy groups, or organisations like the Gates Foundation, most of which seem to have very different ideas from EA in some areas at least.

Edit to add: note that the claim is that EA is unusually good at these things.

Btw, I'm not sure why your comment got downvoted (I upvoted it), and would be curious to hear the reasoning of someone who downvoted.

Can anyone give any outside-view reason to think EA is "unusually good at 'epistemics' / thinking about things", or that "the community isn't corrupted too badly by random founder effects and information cascades"?

I have some evidence that it isn't: a commonly cited argument for the importance of AI research says nothing like what ~20-80% of effective altruists think it does.
