All of Paul_Crowley's Comments + Replies

>> The group teaches or implies that its supposedly exalted ends justify means that members would have considered unethical before joining the group (for example: collecting money for bogus charities).

> Partial (+0.5)

This seems too high to me; I think 0.25 at most. We're pretty strong on "the ends don't justify the means".

>> The leadership induces guilt feelings in members in order to control them.

> No

This, on the other hand, deserves at least 0.25...

*Loads* of people saw the title and thought "oh, this is a book about how AI is Good, Actually". For anyone who doesn't know, the full quote is Eliezer's: "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else." I much preferred the old title, but I guess I shouldn't be surprised people didn't get it!

"ultimately I made offers to two candidates both of which I had had strong gut feelings about very early, which was rewarding but also highly frustrating." - I hope this comment doesn't come across as incredibly mean, but, are you getting that from notes made at the time? When I find myself thinking "this is what I thought we'd do all along", I start to suspect I've conveniently rewritten my memories of what I thought. Do you have a sense of how many candidates you had similar strong positive gut feelings about?

Thank you for a very helpful comment!

2
agdfoster
5y
haha - good question. And yes, from notes.

When I applied to Google I did a phone interview and a full day of in-person interviews, plus a one-hour conference call about how to do well in the second round. Lots of people devote significant time to brushing up their coding interview skills as well; I only didn't because things like Project Euler had already brushed up those skills for me.

Of course, the one who writes the post about it is likely to be the outlier rather than the median.

If you can't afford it, doesn't that suggest that earning to give might not be such a bad choice after all?

Yes. Earning to give is a good choice and I've not suggested otherwise.

Could you comment specifically on the Wayback Machine exclusion? Thanks!

Nitpick: "England" here probably wants to be something like "the south-east of England". There's not a lot you could do from Newcastle that you couldn't do from Stockholm; you need to be within travel distance of Oxford, Cambridge, or London.

1
Raemon
7y
Thanks, fixed. Actually, is anyone other than DeepMind in London? (The section where I brought this up was on volunteering, which I assume is less relevant for DeepMind than for FHI.)

You have a philosopher's instinct to reach for the most extreme example, but in general I recommend against that.

There's a pretty simple counterfactual: don't take or promote the pledge.

2
Kit
7y
Haven't you just chosen precisely the most extreme counterfactual? Now you have to defend the view that Giving What We Can, run by very smart people who test what they're doing, is causing net harm in expectation.

I went to a MIRI workshop on decision theory last year. I came away with an understanding of a lot of points of how MIRI approaches these things that I'd have a very hard time writing up. In particular, at the end of the workshop I promised to write up the "Pi-maximising agent" idea and how it plays into MIRI's thinking. I can describe this at a party fairly easily, but I get completely lost trying to turn it into a writeup. I don't remember other things quite as well (e.g. "playing chicken with the Universe"), but they have the same feel. An awful lot of what MIRI knows seems to me to be folklore like this.

2
Owen Cotton-Barratt
7y
This is interesting and interacts with my comment in reply to Anna on clarity of communication. I think I'd like to see them write up more such folklore as carefully as possible; I'm not optimistic about attempts to outsource such write-ups.

I think being too nice is a failure mode worth worrying about, and your points are well taken. On the other hand, it seems plausible to me that it does a more effective job of convincing the reader that Gleb is bad news precisely by demonstrating that this is the picture you get when all reasonable charity is extended.

I strongly suspect that the group photo is of very high value in getting people to go, making them feel good about having gone, and making others feel good about the conference. However, it sounds like trying to optimize the process to shave a few minutes off would be pretty high value.

1
Patrick
8y
I felt that the group photo was a waste of my time because I wasn't visible to the camera. But if I hadn't participated I suppose someone else might've gotten my bad spot.
1
Ozzie Gooen
8y
As a reductionist I'd be equally satisfied with a photoshopped image of everyone's online face cropped together, but realize that most others probably don't feel that way :)

What is remarkable about this, of course, is the recognition of the need to address it.

I agree with your second point but not your first. Also, it's possible you mean "optimistic" in your second point: if x-risks themselves are very small, that's one way for the change in probability as a result of our actions to be very small.

1
kbog
8y
I mean pessimism about the importance of x-risk research, which is more or less equivalent to optimism about the future of humanity. Similar idea.

Where the survey says 2014, do you mean 2015?

2
David_Moss
8y
We're intentionally asking about 2014 in this survey. (Last year, in the 2014 survey, we asked about 2013). This year's survey is just being released relatively late in the year (although, as you may have noticed, this survey was released earlier via Facebook).

Yes, I'd treat the ratio of brain masses as a lower bound on the ratio of moral patient-ness.

Tax complicates this. If I'm in a higher tax band than you, I can make a donation to charity more cheaply than you can, so you will "receive" more than I "give", and vice versa.
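
To make the arithmetic concrete, here's a minimal sketch, assuming a simplified scheme where tax relief refunds the donor at their marginal rate (in the spirit of UK higher-rate Gift Aid relief); the rates and function here are illustrative, not from the original comment:

```python
# Illustrative sketch: how marginal tax rates change the out-of-pocket
# cost of a donation. The relief model and rates are assumptions for
# the example, not a statement of any particular country's tax law.

def net_cost(donation: float, marginal_rate: float) -> float:
    """Out-of-pocket cost if tax relief refunds the donor at their
    marginal rate (a simplification)."""
    return donation * (1 - marginal_rate)

charity_receives = 100.0
higher_band = net_cost(charity_receives, 0.40)  # e.g. a 40% taxpayer
lower_band = net_cost(charity_receives, 0.20)   # e.g. a 20% taxpayer

print(f"Charity receives £{charity_receives:.2f}")
print(f"Costs the 40% taxpayer £{higher_band:.2f}")  # £60.00
print(f"Costs the 20% taxpayer £{lower_band:.2f}")   # £80.00
# The same £100 to the charity costs the higher-band donor less out of
# pocket, so what you "receive" can exceed what I "give", and vice versa.
```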

It seems a bit like the question behind the question might be "I'd like to help, but I don't know formal logic; when will that stop being a barrier?" In which case it's worth saying that I'm attending a MIRI decision theory workshop at the moment, and I don't really know formal logic, but it isn't proving too much of a barrier; I can think about the assertion "Suppose PA proves that A implies B" without really understanding exactly what PA is.

Thanks for the encouragement!

I wonder if you can do something with a different kind of disaster? Maybe make it a coach that can get people out of the danger zone? Or is that cheating because people don't want seats to be "wasted"?

I've been trying to work out how to sell EA in the form of a parable; let me illustrate with my current best candidate.

In a post-apocalyptic world, you're helping get the medicine that cures the disease out to the people. You know that there's a truck with the medicine on the way, and it will soon reach a T-junction. The truck doesn't know who is where and its radio is broken; you're powerless to affect what it does, watching with binoculars from far away. If it turns left, it'll be flagged down by a family of four and their lives will be saved. If it turn... (read more)

0
RyanCarey
9y
Here's a continuation of this kind of discussion: The EA Pitch guide
1
RyanCarey
9y
The exercise seems useful. I agree that making it not a choice between A and A+B is fairer. Also, saying that they're a witness, and can't actually make any decision might help with switching off guilt relating to a taboo tradeoff. I agree that the problem is that the current example is too contrived, though I haven't yet thought of a more ordinary example. Scott Siskind's Arctic exploration analogy is the closest I know.

Yes, please do do a proper post on this with cites etc.; I think this is really valuable!

In 2015 I'll be donating 10% of my salary to the Centre for the Study of Existential Risk. CSER is a particularly good giving opportunity right now: with such superb academic bona fides, it has the potential to hugely raise the profile of the study of existential risk and boost the whole field of future-oriented work, so if you are at all moved by the idea of the overwhelming importance of shaping the far future then CSER is well worth considering as a recipient. It's a particularly good cause for me to give to because I'm in the UK, so there are substanti... (read more)

I finally got round to asking my partner if she would be OK with me sending 10% of my salary to charity (we have shared finances, so my money is hers too) and she said yes right away. I'll start doing that from my December paycheck on. I'm finally an EA! In other news, I made a payment to CSER on Thursday which, after tax and employer gift matching are sorted out, should be worth £6,500; I've been setting the money aside all year while slowly kicking my employer and the University of Cambridge along the gift matching process.

EDIT: I wrote CFAR, but I meant CSER! Fixed.

0
RyanCarey
9y
Awesome! Although you were already an EA ;)