
pappubahry

125 karma · Joined Sep 2014

Posts: 2 · Comments: 58

Sorted by New

> But maybe looking at leadership is the wrong way around, and it's the rank-and-file members who led the charge.

Speaking from my geographically distant perspective: I definitely saw it as a leader-led shift rather than one coming from the rank-and-file. There was always a minority of rank-and-file members, coming from Less Wrong, who saw AI risk as supremely important, but my impression was that this position was disproportionately common in the (then) Centre for Effective Altruism, and there was occasional chatter on Facebook (circa 2014?) that some people there saw the global poverty cause as a way to funnel people towards AI risk.

I think the AI-risk faction started to assert itself more strongly in EA from about 2015, successfully persuading other major leadership figures one by one over the following years (e.g. Holden in 2016, as linked to by Carl). But by then I wasn't following EA closely, and I don't have a good sense of the timeline.

There's an accompanying column in The Guardian:

> Running with MacAskill’s line of reasoning, we asked participants in this week’s Guardian Essential poll to think through whether future time horizons would be positive or negative for humanity (although we confined our frame to a relatively conservative ten millennia).

If I were debating you on the topic, it would be wrong to say that you think it's a Pascal's mugging. But I read your post as being a commentary on the broader public debate over AI risk research, trying to shift it away from "tiny probability of gigantic benefit" in the way that you (and others) have tried to shift perceptions of EA as a whole or the focus of 80k. And in that broader debate, Bostrom gets cited repeatedly as the respectable, mainstream academic who puts the subject on a solid intellectual footing.

(This is in contrast to MIRI, which as SIAI was utterly woeful and which in its current incarnation still didn't look like a research institute worthy of the name when I last checked in during the great Tumblr debate of 2014; maybe they're better now, I don't know.)

In that context, you'll have to keep politely telling people that you think the case is stronger than the position your most prominent academic supporter argues from, because the "Pascal's mugging" thing isn't going to disappear from the public debate.

The New Yorker writer got it straight out of this paper of Bostrom's (paragraph starting "Even if we use the most conservative of these estimates"). I've seen a couple of people report that Bostrom made a similar argument at EA Global.

I get what you're saying, but, e.g., in the recent profile of Nick Bostrom in the New Yorker:

> No matter how improbable extinction may be, Bostrom argues, its consequences are near-infinitely bad; thus, even the tiniest step toward reducing the chance that it will happen is near-infinitely valuable. At times, he uses arithmetical sketches to illustrate this point. Imagining one of his utopian scenarios—trillions of digital minds thriving across the cosmos—he reasons that, if there is even a one-per-cent chance of this happening, the expected value of reducing an existential threat by a billionth of a billionth of one per cent would be worth a hundred billion times the value of a billion present-day lives. Put more simply: he believes that his work could dwarf the moral importance of anything else.
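
To make the arithmetic in that passage explicit (my own back-of-the-envelope reconstruction, with N standing in for however many future lives the utopian scenario contains; the article doesn't give a number):

$$0.01 \times 10^{-20} \times N \;\ge\; 10^{11} \times 10^{9} = 10^{20} \quad\Longleftrightarrow\quad N \;\ge\; 10^{42},$$

where $10^{-20}$ is "a billionth of a billionth of one per cent" and the right-hand side is a hundred billion times a billion present-day lives. The digital-minds estimates in Bostrom's paper are many orders of magnitude above that threshold, which is why the conclusion survives even a one-per-cent credence.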

As long as the most prominent advocate in the respectable-academic part of that side of the debate is making Pascal-like arguments, there's going to be some pushback about Pascal's mugging.

I confess I'm a bit surprised no one else has linked this yet.

Judging by GiveWell's Twitter and Facebook feeds, the post is misdated: it only went live about 8 hours ago (at the time of writing this comment), rather than 2 or 3 days ago.

I think this is referring to a common probability question, e.g., example 3 here.

Thanks Peter! I'll make the top-level post later today.

> How did you do that so quickly?

(I might have given the impression that I did this all during a weekend. This isn't quite right -- I spent 2-3 evenings, about 8 hours in total, going from the raw CSV files to a nice and compact .js function. Then I wrote the plotter on the weekend.)

I did this bit in Excel. The money amounts were in column A, and I inserted three columns to the right: B for the currency (assumed USD unless otherwise specified), C for the minimum of the range given, D for the maximum. In column C, I started with =IF(ISNUMBER(A2), A2, "") and dragged that formula down the column. Then I went through line by line, reading off any text entries and turning them into currency/min/max (if a single value was reported, I entered it as the min and left the max blank): currency, tab, number, enter, currency, tab, number, tab, number, enter, currency, tab...

It's not a fun way to spend an evening (hence why I didn't do the lifetime donations as well), but it doesn't actually take that long.
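
For anyone who'd rather script that pass than do it by hand, here's a rough Python sketch of the same idea; the currency list and the handling of odd cases are placeholders of mine, not anything taken from the survey file:

```python
import re

# Currencies one might expect in the free-text field; purely illustrative.
KNOWN_CURRENCIES = {"USD", "AUD", "GBP", "EUR", "CAD"}

def parse_amount(raw):
    """Turn a free-text donation amount into (currency, min, max).

    Mirrors the manual Excel pass: a bare number is assumed to be USD
    with only the min filled in; a range gets both ends; anything
    unparseable is flagged for fixing by hand.
    """
    text = raw.strip().replace(",", "")
    currency = "USD"
    for code in KNOWN_CURRENCIES:
        if code in text.upper():
            currency = code
            text = re.sub(code, "", text, flags=re.IGNORECASE)
            break
    numbers = [float(n) for n in re.findall(r"\d+(?:\.\d+)?", text)]
    if len(numbers) == 1:
        return currency, numbers[0], None          # single value -> min only
    if len(numbers) == 2:
        return currency, min(numbers), max(numbers)
    return None, None, None                        # leave for manual cleanup

print(parse_amount("5000"))          # ('USD', 5000.0, None)
print(parse_amount("500-1000 AUD"))  # ('AUD', 500.0, 1000.0)
```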

Then: a new column E with =AVERAGE(C2:D2) dragged down the column. Then I typed the average currency conversion rates for 2013 into a new sheet and did a lookup (most users would use VLOOKUP, I think; I used MATCH and OFFSET) to get the final USD numbers in column F.
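
And the back end of the same pipeline as a sketch, continuing from the (currency, min, max) triples above; the conversion rates below are placeholders, not the 2013 averages I actually used:

```python
# Placeholder conversion rates to USD; substitute the 2013 averages.
RATES_TO_USD = {"USD": 1.0, "AUD": 0.97, "GBP": 1.56, "EUR": 1.33, "CAD": 0.97}

def to_usd(currency, lo, hi, rates=RATES_TO_USD):
    """Average the reported range (or take the lone value) and convert to USD."""
    if lo is None:
        return None
    amount = lo if hi is None else (lo + hi) / 2.0
    return amount * rates[currency]

print(to_usd("AUD", 500.0, 1000.0))  # 727.5 with the placeholder rate
```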

> Also, do you have the GitHub code for your plotter?

As a fierce partisan of the `_final_really_2` school of source control, I'm yet to learn how to GitHub. You can view the JavaScript source easily enough, though, and save it locally. (I suggest deleting the Google Analytics "i,s,o,g,r,a,m" script if you do this, or your browser might go looking for Google in your file system for a few seconds before plotting the graphs.) The two scripts not in the HTML file itself are d3.min.js and ea_survey.data.js.

A zip file with my ready-to-run CSV file and the R script to turn it into a JavaScript function is here.
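
If anyone wants to reproduce that CSV-to-JavaScript step without R, the idea is just to serialise the cleaned rows into a .js file. A minimal Python sketch (the CSV filename and the function name are illustrative guesses, not exactly what my script produces):

```python
import csv, json

# Read the cleaned survey CSV and write a .js file exposing the rows,
# so a purely static page can load the data without a server.
with open("ea_survey_clean.csv", newline="") as f:
    rows = list(csv.DictReader(f))

with open("ea_survey.data.js", "w") as out:
    out.write("function ea_survey_data() {\n")
    out.write("  return " + json.dumps(rows, indent=2) + ";\n")
    out.write("}\n")
```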

I've made a bar chart plotter thing with the survey data: link.

The first 17 entries in imdata.csv have some mixed-up columns, starting (at latest) from

> Have you volunteered or worked for any of the following organisations? [Machine Intelligence Research Institute]

until (at least)

> Over 2013, which charities did you donate to? [Against Malaria Foundation]

Some of this I can work out (volunteering at "6-10 friends" should obviously be in the friends column), but the blank cells under the AMF donations have me puzzled.
