All of Nick_Beckstead's Comments + Replies

Democratising Risk - or how EA deals with critics

Hi Carla and Luke, I was sad to hear that you and others were concerned that funders would be angry with you or your institutions for publishing this paper. For what it's worth, raising these criticisms wouldn't count as a black mark against you or your institutions in any funding decisions that I make. I'm saying this here publicly in case it makes others feel less concerned that funders would retaliate against people raising similar critiques. I disagree with the idea that publishing critiques like this is dangerous / should be discouraged.

+1, EA Funds (which I run) is interested in funding critiques of popular EA-relevant ideas.

+1 to everything Nick said, especially the last sentence. I'm glad this paper was published; I think it makes some valid points (which doesn't mean I agree with everything), and I don't see the case that it presents any risks or harms that should have made the authors consider withholding it. Furthermore, I think it's good for EA to be publicly examined and critiqued, so I think there are substantial potential harms from discouraging this general sort of work.

Whoever told you that funders would be upset by your publishing this piece, they didn't speak for Open Philanthropy. If there's an easy way to ensure they see this comment (and Nick's), it might be helpful to do so.

Thanks for saying this publicly too, Nick; this is helpful for anyone who might worry about funding.

The EA Community and Long-Term Future Funds Lack Transparency and Accountability

Hi Evan, let me address some of the topics you’ve raised in turn.

Regarding original intentions and new information obtained:

  • At the time that the funds were formed, it was an open question in my mind how much of the funding would support established organizations vs. emerging organizations.
  • Since then, the things that changed were that EA Grants got started, I encountered fewer emerging organizations that I wanted to prioritize funding than expected, and Open Phil funding to established organizations grew more than I expected.
  • These three factors contribute
... (read more)
Evan_Gaensbauer (3y): [Part I of II] Thank you for your thoughtful response. As far as I'm concerned, these factors combined more than exonerate you from aspersions that you were acting in bad faith in the management of either of these funds. For what it's worth, I apologize that you've had to face such accusations in the comments below as a result of my post. I hoped for the contrary, as I consider such aspersions at best counterproductive. I expect I'll do a follow-up as a top-level post to the EA Forum, in which case I'll make abundantly clear that I don't believe you were acting in bad faith, and that, if anything, it's as I expected: what's happened is a result of the CEA failing to ensure that you as a fund manager and the EA Funds were in sufficiently transparent and regular communication with the EA community and/or donors to these funds.

Personally, I disagree with the perspective that the Long-Term Future and EA Community Funds should be operated differently than the other two funds, i.e., seeking to fund well-established as opposed to nascent EA projects/organizations. I do so while also agreeing that it is a much better use of your personal time to focus on making grants to established organizations and to follow the cause prioritization/evaluation model you've helped develop and implement at Open Phil. I think one answer is for the CEA to hire or appoint new/additional fund managers for one or both of the Long-Term Future and EA Community Funds to relieve the pressure on you to do everything, both dividing your time between the Funds and your important work at Open Phil less than now, and fostering more regular communication with the community regarding these Funds. While I know you and Benito have commented that it's difficult to identify someone to manage the funds whom both the CEA and the EA community at large would consider qualified, I explained my conclusion in this comment [http://effective-altruism.com/ea/1qx/the_ea_community_and_far_future_ea_funds_are_not/f1z] as to why I think it's both important and tractable for us
The EA Community and Long-Term Future Funds Lack Transparency and Accountability

Thanks for sharing your concerns, Evan. It sounds like your core concerns relate to (i) delay between receipt and use of funds, (ii) focus on established grantees over new and emerging grantees, and (iii) limited attention to these funds. Some thoughts and comments on these points:

  • I recently recommended a series of grants that will use up all EA Funds under my discretion. This became a larger priority in the last few months due to an influx of cryptocurrency donations. I expect a public announcement of the details after all grant logistics have been completed

... (read more)

Hi Nick. Thanks for your response. I also appreciate the recent and quick granting of the EA Funds to date. One thing I don't understand is, if most of the grants you wanted to make could have been made by the Open Philanthropy Project, why:

  • the CEA didn't anticipate this;
  • it gave public descriptions to the contrary of how the funds you managed would work;
  • and, if it learned of your intentions contrary to what it first told the EA community, it didn't issue an update.

I'm not aware of a public update of that kind. If there was a private e... (read more)

Peter Wildeford (3y): Hey Nick, I'm excited to hear you've made a bunch of grants. Do you know when they'll be publicly announced?
Milan_Griffes (3y): Could you sketch out what "suitable qualifications" for the fund manager role look like, roughly?
Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy

In addition to, 35 days total. (I work at Open Phil.)

Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy

I don't mean to make a claim re: averages, just relaying personal experience.

Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy

I am a Program Officer at Open Philanthropy who joined as a Research Analyst about 3 years ago.

The prior two places I lived were New Brunswick, NJ and Oxford, UK. I live in a house with a few friends. It is a 25-30 minute commute door-to-door via BART. My rent and monthly expenses are comparable to what I had in Oxford but noticeably larger than what I had in New Brunswick. I got pay increases when I moved to Open Phil, and additional raises over time. I'm comfortable on my current salary and could afford to get a one-bedroom apartment if I wanted, but I'm happy where I am.

Overall, I would say that it was an easy adjustment.

Benjamin_Todd (4y): Surely rent is much higher than in Oxford on average? It's possible to get a great place in Oxford for under £700 per month, while something comparable in SF would be $1300+. Food also seems about 30% more expensive, and in Oxford you don't have to pay for a commute. My overall guess is that $80k p.a. in SF is equivalent to about £40k p.a. in Oxford.
How important is marginal earning to give?

To avoid confusing people: my own annual contributions to charity are modest.

RyanCarey (7y): Wait, I meant Matt Wage. Why did I write Nick Beckstead???
Should we launch a podcast about high-impact projects and people?

You might consider having a look at http://www.flamingswordofjustice.com/. It's a podcast of interviews with activists of various types (pretty left-wing). I've listened to a few episodes and found it interesting. It was the closest thing I could think of that already exists.

Open Thread

I would love to see some action in this space. I think there is a natural harmony between what is best in Christianity--especially regarding helping the global poor--and effective altruism.

One person to consider speaking with is Charlie Camosy, who has worked with Peter Singer in the past (see info here). A couple other people to consider talking with would be Catriona Mackay and Alex Foster.

Cosmopolitanism

One attractive feature of cosmopolitanism, in contrast with impartial benevolence, is that impartial benevolence is often associated with denying that loved ones and family members are worthy targets of special concern, whereas I don't think cosmopolitanism has such associations. Another is that I think a larger fraction of educated people already have some knowledge of cosmopolitanism.

Good policy ideas that won’t happen (yet)

Niel, thanks for writing up this post. I think it's really worthwhile for us to discuss with the community the challenges we encounter while working on EA projects.

I noticed that the link in this sentence is broken:

Creating more disaster shelters to protect against global catastrophic risks (too weird)

Niel_Bowerman (7y): Thanks Nick. There seems to be a problem with the way the forum currently references the effective-altruism.com URL. I've directed the link to the post on the trikeapps site as a temporary workaround. It may break once the problem with the effective-altruism.com URLs is fixed.
Conversation with Holden Karnofsky, Nick Beckstead, and Eliezer Yudkowsky on the "long-run" perspective on effective altruism

After thinking about this later, I noticed that one of my claims was wrong. I said:

> Though I’m not particularly excited about refuges, they might be a good test case. I think that if you had this 5N view, refuges would be obviously dumb but if you had the view that I defended in my dissertation then refuges would be interesting from a conceptual perspective.

But then I ran some numbers and this no longer seemed true. If you assumed a population of 10B, an N of 5, a cost of your refuge of $1B, that your risk of doom was 1%, and that your refuge could cut... (read more)
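
For readers who want to follow along, here is a minimal sketch of the kind of calculation described, under stated assumptions. The population, multiplier, refuge cost, and doom-risk figures come from the comment above; because the comment is truncated, the fraction of risk the refuge removes is a made-up placeholder, not Beckstead's actual figure.

```python
# A minimal sketch of the back-of-envelope refuge calculation described above.
# Population (10B), multiplier (5N), refuge cost ($1B), and doom risk (1%)
# are from the comment; the risk-reduction fraction is an ASSUMPTION.

population = 10e9         # current world population
value_multiplier = 5      # the "5N view": extinction costs ~5N lives
refuge_cost = 1e9         # dollars
p_doom = 0.01             # background risk of doom
relative_risk_cut = 0.10  # ASSUMPTION: refuge removes 10% of that risk

lives_at_stake = value_multiplier * population              # 5N = 50B lives
expected_lives_saved = lives_at_stake * p_doom * relative_risk_cut
cost_per_life = refuge_cost / expected_lives_saved

print(f"Expected lives saved: {expected_lives_saved:,.0f}")    # 50,000,000
print(f"Cost per expected life saved: ${cost_per_life:,.2f}")  # $20.00
```

On these placeholder numbers the refuge comes out surprisingly cheap per expected life saved even on the 5N view, which would explain why "refuges would be obviously dumb" on that view no longer seemed true after running the numbers.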

Owen_Cotton-Barratt (7y): Thanks for this clarification. After reading the emails I wanted to make exactly this point! I do think that comparing how good saving a life today is with doing something like building bunkers to reduce risk really comes down to an understanding of the world today rather than an understanding of exactly how big the future might be (after you grant that it could be very big). Though choosing 5 as a multiplier looks rather low to me; I'd be happier with something in the 100-1000 range (and I wouldn't be surprised if my view of the correct figure to use there changes substantially in the future).
A relatively atheoretical perspective on astronomical waste

I think it's an open question whether "even if you want to create lots of happy lives, most of the relevant ways to tackle that problem involve changing the direction in which the future goes rather than whether there is a future." But I broadly agree with the other points. In a recent talk on astronomical waste stuff, I recommended thinking about AI in the category of "long-term technological/cultural path dependence/lock in," rather than the GCR category (though that wasn't the main point of the talk). Link here: http://www.gooddoneright.com/#!nick-beckstead/cxpp, see slide 13.

A relatively atheoretical perspective on astronomical waste

Re 1, yes it is philosophically controversial, but it also does speak to people with a number of different axiologies, as Brian Tomasik points out in another comment. One way to frame it is that it's doing what separability does in my dissertation, but noticing that the astronomical waste argument can run without making assumptions about the value of creating extra people. So you could think of it as running that argument with one less premise.

Re 2, yes it pushes in an unbounded utility function direction, and that's relevant if your preferred resolution of Pascal's Mu... (read more)

Owen_Cotton-Barratt (7y): Yes, I really like this work in terms of pruning the premises, which is why I'm digging into how firm those premises really are (even if I personally tend to believe them). It seems like the principle of scale is in fact implied by separability. I'd guess it's rather weaker, but I don't know of any well-defined examples which accept scale but not separability.

I do find your framing of 3 a little suspect. When we have a solid explanation for just why it's great in ordinary situations, and we can see that this explanation doesn't apply in strange situations, it seems like the extrapolation shouldn't get too much weight. Actually, most of my weight for believing the principle of scale comes from the fact that it's a consequence of separability.

One more way the principle might break down: 4) You might accept the principle for helping people at a given time, but not as a way of comparing between helping people at different times. Indeed, in this case it's not so clear most people would accept the small-scale version (probably because intuitions are driven by factors such as improving lives earlier leaving more time for indirect effects that improve lives later).
Will we eventually be able to colonize other stars? Notes from a preliminary review

I haven't done a calculation on that, but I agree it's important to consider. Regarding your calculation, a few of these factors are non-independent in a way that favors space colonization. Specifically:

  • Speeding up and slowing down are basically the same, so you should just treat that as one issue.
  • Fitting everything you need into the spaceship and being able to build a civilization when you arrive are very closely related.
  • Having your stuff survive the voyage and being able to build a civilization in a hostile environment are closely related.

I would ... (read more)
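
Here is a minimal sketch of why the non-independence matters, with invented placeholder probabilities (not estimates from the post): multiplying per-factor success probabilities as though all the factors were independent understates the overall chance of success relative to merging the closely related ones.

```python
# Invented placeholder probabilities, not estimates from the post. The point
# is only that merging correlated factors raises the joint success estimate.

p = 0.5  # assumed success probability for each factor

# Naive estimate: treat all six sub-factors as independent.
p_naive = p ** 6

# Merged estimate: each pair of closely related sub-factors succeeds or
# fails together (speed up / slow down; fit payload / build civilization;
# survive voyage / survive hostile environment), leaving three factors.
p_merged = p ** 3

print(f"Six 'independent' factors: {p_naive:.4f}")   # 0.0156
print(f"Three merged factors:      {p_merged:.4f}")  # 0.1250
```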

This post appears to be incomplete.

A Long-run perspective on strategic cause selection and philanthropy

I agree that a choice of discount rate is fundamentally important in this context. If you did the standard thing of choosing a constant discount rate (e.g. 5%) and used that for all downstream benefits, even ones millions of years into the future, that would make helping future generations substantially less important. By emphasizing the distinction between pure discounting and discounting as a computational convenience, I did not mean to suggest that views about how to discount future benefits were unimportant.
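
As a minimal illustration of that point: the 5% rate is from the paragraph above, while the horizons and code are mine.

```python
# How a constant discount rate treats benefits at different horizons.
rate = 0.05  # constant annual discount rate from the example above

for years in [10, 100, 1_000, 10_000]:
    present_value = 1.0 / (1 + rate) ** years  # value today of one unit of future benefit
    print(f"{years:>6} years out: {present_value:.3e}")

# ~6.1e-01 at 10 years, ~7.6e-03 at 100 years, ~6.5e-22 at 1,000 years,
# ~1.3e-212 at 10,000 years: under such a rule, benefits millions of years
# out round to zero, which is why the choice of discounting method matters.
```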

I was distinguishing between two possible mot... (read more)

rovingbandit (8y): Let me explain my position. First, I agree with rejecting a pure time preference, and instead doing discounting based primarily on expected growth in incomes. For me, the expectation that in 50 years the average person could easily be twice as wealthy leads to quite heavy discounting of investment to improve their welfare vs. spending to alleviate suffering from extreme poverty right now. It's possible I haven't thought this through thoroughly, and am explaining away my lack of enthusiasm for your choice of 5 causes, to the neglect of the classic GiveWell/GWWC choices. Perhaps there is something to do with efficacy there: that I'm unsure of the likely impact of funding immigration advocacy, forecasting, and more research.
A Long-run perspective on strategic cause selection and philanthropy

I like to distinguish between pure discounting and discounting as a computational convenience. By "pure discounting," I mean caring less about the very same benefit, which you'll get with certainty in the future, than a benefit you can get now. I see this as a values question, and my preference is to have a 0% pure discount rate. One might discount as a computational convenience to adjust for returns on investment from having benefits arrive earlier, uncertainty about the benefits arriving, changes in future wealth, or other reasons.

When you are... (read more)

rovingbandit (8y): I think your choice of discount rate is going to fundamentally alter your investment decision; it's not just some kind of marginal technical tweak. In practice, either you discount fairly heavily, as most public projects do, and end up putting most of your money into solving short-term suffering (as I think you should), or you discount lightly and put most of your money into possible future catastrophic risk mitigation. I don't see how this is "computational convenience": it's fundamental.
A Long-run perspective on strategic cause selection and philanthropy

Yes, discount rates are an important thing to discuss here. I briefly discuss them on pp. 63-64 of my dissertation (http://www.nickbeckstead.com/research). I endorse using discount rates on a case-by-case basis as a convenience for calculation, but count harms and benefits as, in themselves and apart from their consequences, equally important whenever they occur.

For further articulation of similar perspectives I recommend:

Cowen, T. and Parfit, D. (1992). "Against the Social Discount Rate." In Justice Between Age Groups and Generations, pages 144–161. Ya... (read more)

rovingbandit (8y): What do you mean by "using discount rates on a case-by-case basis as a convenience for calculation"? I don't find your dissertation discussion very convincing (but then I'm an economist). I worry a lot more about the existing real children with glass in their feet right now (or intestinal worms or malaria or malnutrition or whatever) than about the hypothetical potential children of the future who don't exist yet, and who in any case, when they do, will live in a substantially wealthier society in which everyone has access to good-quality footwear.
A Long-run perspective on strategic cause selection and philanthropy

I broadly agree with Carl’s comment, though I have less of an opinion about the specifics of how you have done your learning grants. Part of your question may be, “Why would you do this if we’re already doing it?” I believe that strategic cause selection is an enormous issue and we have something to contribute. In this scenario, we certainly would want to work with you and like-minded organizations.

Cari_Tuna (8y): Hi Nick — I did not at all mean to imply, "Why would you do this if we're already doing it?" I see enormous value in other people experimenting with strategic cause selection and was gratified to read this post. I simply was surprised that you didn't mention that GiveWell Labs, by and large, is taking the approach you outlined, including investigating four of the five issue areas you mentioned. That made me think either we're not communicating well enough about what we're doing, which seems likely, or that you see the two approaches as more different than I do.
A Long-run perspective on strategic cause selection and philanthropy

We think many non-human animals, artificial intelligence programs, and extraterrestrial species could all be of moral concern, to degrees varying based on their particular characteristics but without species membership as such being essential. "Humanity" is used interchangeably in the text with "civilization": a civilization for which humanity is currently in the driver's seat.