All of Nick_Beckstead's Comments + Replies

Some clarifications on the Future Fund's approach to grantmaking

Thanks for your comment! I wanted to try to clarify a few things regarding the two claims you see us as making. I agree there are major benefits to providing feedback to applicants. But there are significant costs, too, and I want to explain why it's at least a non-obvious decision what the right choice is here.

On (1), I agree with Sam that it wouldn't be the right prioritization for our team right now to give detailed feedback to the >1,600 applications we rejected; doing so would cut significantly into our total output for the year. I think it could ... (read more)

3Ferenc Huszár1mo
Thanks for the response, and for being open to improving your process. I agree with many of your points about the importance of scaling teams cautiously.

A model that I heard TripleByte used sounds interesting to me.

I wrote a comment about TripleByte's feedback process here; this blog post is great too. In our experience, the fear of lawsuits and PR disasters from giving feedback to rejected candidates was much overblown, even at a massive scale. (We gave every candidate feedback regardless of how well they performed on our interview.)

Something I didn't mention in my comment is that much of TripleByte's feedback email was composed of prewritten text blocks carefully optimized to be helpful and non-offe... (read more)

7Tee2mo
Very much appreciate the considerate engagement with this. Wanted to flag that my primary response to your initial comment can be found here [https://forum.effectivealtruism.org/posts/hDK9CZJwH2Cqc9n9J/some-clarifications-on-the-future-fund-s-approach-to?commentId=6uXp9kfKbXg5wuB7D] . All this makes a lot of sense to me. I suspect some people got value out of the presentation of this reasoning. My goal here was to bring this set of considerations to your and Sam's attention and upvote its importance; hopefully it's factored into what is definitely a non-obvious and complex decision moving forward. Great to see how thoughtful you all have been, and thanks again!
Some clarifications on the Future Fund's approach to grantmaking

We tend to do BOTECs (back-of-the-envelope calculations) when we have internal disagreement about whether to move forward with a large grant, or about whether to fund in a given area. But we make only a minority of decisions this way.

There are certain standard numbers I think about in the background of many applications, e.g. how large I think different classes of existential risks are and modifiers for how tractable I think they are. My views are similar to Toby Ord's table of risks in The Precipice. We don't have standardized and carefully explained es... (read more)

2MichaelStJules2mo
Do you have standard numbers for net x-risk reduction (share or absolute) for classes of interventions you fund, too?
Some clarifications on the Future Fund's approach to grantmaking

About 99% of applicants have received a decision at this point. The remaining 1% have received updates on when they should expect to hear from us next. Some of these cases require back-and-forth with the applicant, so we can't unilaterally conclude the process with all the info we need. And in some of these cases the ball is currently in our court.

We will report on the open call more systematically in our progress update, which we'll publish in a month or so.

Some clarifications on the Future Fund's approach to grantmaking

Thanks for the thoughts, Irena! It's true that some proposals did not receive decisions within 14 days, and perhaps we should have communicated more carefully.

That said, I think if you look at the text on the website and compare it with what's happening, it actually matches pretty closely.

We wrote:

"We aim to arrive at decisions on most proposals within 14 days (though in more complex cases, we might need more time).

  • If your grant request is under $1 million, we understand it, we like it, and we don’t see potential for major downsides, it’ll probab
... (read more)
9Writer2mo
The indication I got said that FTX would reach out "within two weeks", which meant by April 20. I haven't heard back since, though. I reached out eight days ago to ensure that my application or relevant e-mails haven't been lost, but I haven't received an answer. :( (I get that this is probably not on purpose, and that grant decisions take as long as they need to, but if I see an explicit policy of "we are going to reach out even if we haven't made a decision yet" then I'm left wondering whether something has broken down somewhere, and what to do about it. It seems a good choice to try to reach out myself... and to comment under this thread to provide a data point.)
Some clarifications on the Future Fund's approach to grantmaking

Thanks for sharing your thoughts and concerns, Tee. I'd like to comment on application feedback in particular. It's true that we are not providing feedback on the vast majority of applications, and I can see how it would be frustrating and confusing to be rejected without understanding the reasons, especially when funders have such large resources at their disposal.

We decided not to give feedback on applications because we didn't see how to do it well and stay focused on our current commitments and priorities. We think it would require a large time investm... (read more)

Agree with this--it's impossible to give constructive feedback on thousands of applications. The choice is between not giving grants and accepting that most grant applications won't get much feedback from us. We chose the latter.

Some clarifications on the Future Fund's approach to grantmaking

Makes sense! We are aiming to post a progress update in the next month or so.

Will FTX Fund publish results from their first round?

We're still finishing up about 30 more complicated applications (of ~1700 originally submitted). Then we're going to review the process, and share some of what we learned!

1barkbellowroar2mo
Sounds good, thanks for responding Nick!
Announcing the Future Fund

We don't know yet! We're finishing up about 30 more complicated applications (of ~1700 originally submitted), and then we're going to review the process and make a decision about this.

1kris.reiser23d
We were very excited about this new opportunity! Just checking in to see how/when the results would be communicated. We have our confirmation email with summary but haven't had any results yet. Would an update on the progress of the submissions be possible? Thank you!
The BEAHR: Dust off your CVs for the Big EA Hiring Round!

"Over the coming two weeks, the FTX foundation will be making its decisions and likely disbursing $100m, possibly more."

Just wanted to quickly correct this. Though we aim to give out at least $100M this year, we are not aiming to do that in this first call for proposals.

2Sanjay3mo
I was unsure about this, thanks for clarifying
Announcing the Future Fund

We have a number of entities we can use to provide funding, and which we use depends on the exact circumstances. It could be our non-profit entity FTX Foundation Inc or it could be a DAF of one of our board members or it could be something else if it's a for-profit investment. We will work with people we support to find the best way for them to receive the funding.

2Sanjay3mo
Thanks very much Nick. Is it possible to name one of the organisations providing the DAF? (e.g. is it National Philanthropic Trust, or Charities Aid Foundation, or whatever). Ideally if there's one in the UK, it would be great to name them, but failing that if you could provide the name of any of them off the top of your head, that would be great.
Announcing the Future Fund

2000 Center St
Ste 400
Berkeley, CA 94704

Announcing the Future Fund

Not from us, but please try to keep your answers brief. Not sure about Google!

1Gabby_O4mo
Thanks Nick!
Announcing the Future Fund

We don't know yet. We're going to see how this one goes and then decide after that.

Announcing the Future Fund

The first open call will end March 21. We'll probably have more calls for proposals in the future, but we’re really not sure when, and this will depend in part on how this experiment goes.

1Em_B4mo
Thank you for the information!
Announcing the Future Fund

Quick addition to this: For colleges and universities, indirect costs may not exceed 10% of the direct costs. On this front, Future Fund will mimic Open Philanthropy's indirect costs policy.

1Aleksandar Bogdanoski3mo
Thanks, this is very helpful, Nick! We're planning on submitting a proposal from UC Berkeley; however, our research administration team needs some info regarding the FTX Foundation, such as its address, complete name, and charitable status in the Bahamas. Could you direct us to where we can find this information?
1Charles Tsai3mo
Hi Nick, Is the 10% limit just for colleges and universities? Or does it apply to other nonprofits as well? Thanks.
Announcing the Future Fund

Thank you! 

We definitely include non-human sentient beings as moral patients. Future Fund focuses on humanity in our writing because we think the human trajectory is the main factor we can influence in order to benefit both humans and non-humans in the long run.

Announcing the Future Fund

Yes, we're open to funding academic research relevant to our mission and/or areas of interest.

Announcing the Future Fund

No, funding applications will not be made public.

Announcing the Future Fund

Thanks for pointing this out; we didn’t know about this. I think the easiest solution would be for you to either (a) use a different google account or (b) create a new google account for this purpose.

You could also perhaps try not attaching any files and just sending us links to Google docs set to "anyone with the link can view."

Announcing the Future Fund

We’d be willing to fund professionals who submit an application directly.

Announcing the Future Fund

If we're funding a for-profit organization to do something profitable, we'd like to receive equity. If you can arrange for that, we're all set.

Announcing the Future Fund

FTX Foundation has funded some animal work in the past, and almost certainly will do so in the future. Future Fund won’t be funding animal welfare work except when we see a good case that it's one of the best ways to improve the longterm future. Basically James Ozden has it right.

Announcing the Future Fund

There aren’t restrictions on multiple discrete applications.

1Fielding Grasty4mo
Thank you, Nick.
Announcing the Future Fund

We have a more robust interest in neglected existential risks, such as AI and bio. However, we think the issues discussed in our economic growth section are good from a longtermist POV, and we'd like to see what ideas people put forward.

Our areas of interest aren’t in order of priority, and there's internal disagreement about the order of priority.

Announcing the Future Fund

Reasonable question! Our work is highly continuous with Open Phil’s work, and our background worldview is very similar. At the moment, we’re experimenting with our open call for proposals (combined with our areas of interest and project ideas) and a regranting program. We'll probably experiment with prizes this year, too. We're hoping these things will help us launch some new projects that wouldn't have happened otherwise.

I also endorse Jonas's answer that just having more grantmaking capacity in the area will probably be helpful as well.

The Future Fund’s Project Ideas Competition

Thanks so much for all of these ideas! Would you be up for submitting these as separate comments so that people can upvote them separately? We're interested in knowing what the forum thinks of the ideas people present.

4ThomasWoodside4mo
Some of this has been said in threads above, but I don't think that upvotes are a very good way of knowing what the forum thinks. People are definitely not reading this whole thread, and the first posts they see will likely get all of their attention. On top of that, I do not expect forum karma to be a good indicator of much even in the best case. People tend to upvote what they can understand and what is interesting and useful to them. I suspect what the average EA forum user finds useful and interesting is probably only loosely related to what a large EA grantmaker should fund. For instance, in general good writing is a very good way to get upvotes, but that doesn't correlate much with the strength of the ideas presented.
1Zac Townsend4mo
Apologies. I tried. The forum definitely thinks I'm spamming it with fourteen comments, but we'll see how it goes.
Announcing the Future Fund

Good question! We've re-written the question to say:

"If you are launching a new organization (especially one less than 12 months old), please submit a link to a one-minute video (unlisted Youtube video). Please follow the Y Combinator application video guidelines: https://www.ycombinator.com/video/ "

Feel free to use your judgment about what would be informative for borderline cases!

Democratising Risk - or how EA deals with critics

Hi Carla and Luke, I was sad to hear that you and others were concerned that funders would be angry with you or your institutions for publishing this paper. For what it's worth, raising these criticisms wouldn't count as a black mark against you or your institutions in any funding decisions that I make. I'm saying this here publicly in case it makes others feel less concerned that funders would retaliate against people raising similar critiques. I disagree with the idea that publishing critiques like this is dangerous / should be discouraged.

+1, EA Funds (which I run) is interested in funding critiques of popular EA-relevant ideas.

+1 to everything Nick said, especially the last sentence. I'm glad this paper was published; I think it makes some valid points (which doesn't mean I agree with everything), and I don't see the case that it presents any risks or harms that should have made the authors consider withholding it. Furthermore, I think it's good for EA to be publicly examined and critiqued, so I think there are substantial potential harms from discouraging this general sort of work.

Whoever told you that funders would be upset by your publishing this piece didn't speak for Open Philanthropy. If there's an easy way to ensure they see this comment (and Nick's), it might be helpful to do so.

Thanks for saying this publicly too, Nick; this is helpful for anyone who might worry about funding.

The EA Community and Long-Term Future Funds Lack Transparency and Accountability

Hi Evan, let me address some of the topics you’ve raised in turn.

Regarding original intentions and new information obtained:

  • At the time that the funds were formed, it was an open question in my mind how much of the funding would support established organizations vs. emerging organizations.
  • Since then, the things that changed were that EA Grants got started, I encountered fewer emerging organizations that I wanted to prioritize funding than expected, and Open Phil funding to established organizations grew more than I expected.
  • The three factors contribute
... (read more)
1Evan_Gaensbauer4y
[Part I of II] Thank you for your thoughtful response. As far as I'm concerned, these factors combined more than exonerate you from aspersions that you were acting in bad faith in the management of either of these funds. For what it's worth, I apologize that you've had to face such accusations in the comments below as a result of my post. I hoped for the contrary, as I consider such aspersions at best counterproductive. I expect I'll do a follow-up as a top-level post to the EA Forum, in which case I'll make abundantly clear I disbelieve you were acting in bad faith, and that, if anything, it's as I expected: what's happened is a result of the CEA failing to ensure you as a fund manager and the EA Funds were in sufficiently transparent and regular communication with the EA community and/or donors to these funds. Personally, I disagree with the perspective that the Long-Term and EA Community Funds should be operated differently than the other two funds, i.e., seeking to fund well-established as opposed to nascent EA projects/organizations. I do so while also agreeing it is a much better use of your personal time to focus on making grants to established organizations, and to follow the cause prioritization/evaluation model you've helped develop and implement at Open Phil. I think one answer is for the CEA to hire or appoint new/additional fund managers for one or both of the Long-Term Future and EA Community Funds to relieve pressure on you to do everything, both dividing your time between the Funds and your important work at Open Phil less than now, and fostering more regular communication to the community regarding these Funds. While I know you and Benito commented that it's difficult to identify someone to manage the funds whom both the CEA and the EA community at large would consider qualified, I explained in this comment [http://effective-altruism.com/ea/1qx/the_ea_community_and_far_future_ea_funds_are_not/f1z] why I think it's both important and tractable for us
The EA Community and Long-Term Future Funds Lack Transparency and Accountability

Thanks for sharing your concerns, Evan. It sounds like your core concerns relate to (i) delay between receipt and use of funds, (ii) focus on established grantees over new and emerging grantees, and (iii) limited attention to these funds. Some thoughts and comments on these points:

  • I recently recommended a series of grants that will use up all EA Funds under my discretion. This became a larger priority in the last few months due to an influx of cryptocurrency donations. I expect a public announcement of the details after all grant logistics have been comp

... (read more)

Hi Nick. Thanks for your response. I also appreciate the recent, quick granting of the EA Funds to date. One thing I don't understand is, if most of the grants you wanted to make could have been made by the Open Philanthropy Project, why:

  • the CEA didn't anticipate this;
  • the CEA gave public descriptions of how the funds you managed would work to the contrary;
  • and, if the CEA learned of your intentions contrary to what they first told the EA community, they didn't issue an update.

I'm not aware of a public update of that kind. If there was a private e... (read more)

7Peter Wildeford4y
Hey Nick, I'm excited to hear you've made a bunch of grants. Do you know when they'll be publicly announced?
8Milan_Griffes4y
Could you sketch out what "suitable qualifications" for the fund manager role look like, roughly?
Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy

In addition to, 35 days total. (I work at Open Phil.)

Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy

I don't mean to make a claim re: averages, just relaying personal experience.

Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy

I am a Program Officer at Open Philanthropy who joined as a Research Analyst about 3 years ago.

The prior two places I lived were New Brunswick, NJ and Oxford, UK. I live in a house with a few friends. It is a 25-30 minute commute door-to-door via BART. My rent and monthly expenses are comparable to what I had in Oxford but noticeably larger than what I had in New Brunswick. I got pay increases when I moved to Open Phil, and additional raises over time. I'm comfortable on my current salary and could afford a one-bedroom apartment if I wanted, but I'm happy where I am.

Overall, I would say that it was an easy adjustment.

2Benjamin_Todd4y
Surely rent is much higher than Oxford on average? It's possible to get a great place in Oxford for under £700 per month, while comparable in SF would be $1300+. Food also seems about 30% more expensive, and in Oxford you don't have to pay for a commute. My overall guess is that $80k p.a. in SF is equivalent to about £40k p.a. in Oxford.
How important is marginal earning to give?

To avoid confusing people: my own annual contributions to charity are modest.

0RyanCarey7y
Wait, I meant Matt Wage. Why did I write Nick Beckstead???
Should we launch a podcast about high-impact projects and people?

You might consider having a look at http://www.flamingswordofjustice.com/ . It's a podcast of interviews with activists of various types (pretty left-wing). I've listened to a few episodes and found it interesting. It was the closest thing I could think of that already exists.

Open Thread

I would love to see some action in this space. I think there is a natural harmony between what is best in Christianity--especially regarding helping the global poor--and effective altruism.

One person to consider speaking with is Charlie Camosy, who has worked with Peter Singer in the past (see info here). A couple other people to consider talking with would be Catriona Mackay and Alex Foster.

Cosmopolitanism

One attractive feature of cosmopolitanism, in contrast with impartial benevolence, is that impartial benevolence is often associated with denying that loved ones and family members are worthy targets of special concern, whereas I don't think cosmopolitanism has such associations. Another is that I think a larger fraction of educated people already have some knowledge of cosmopolitanism.

Good policy ideas that won’t happen (yet)

Niel, thanks for writing up this post. I think it's really worthwhile for us to discuss challenges that we encounter while working on EA projects with the community.

I noticed that the link in this sentence is broken:

Creating more disaster shelters to protect against global catastrophic risks (too weird)

0Niel_Bowerman8y
Thanks Nick. There seems to be a problem with the way the forum currently references the effective-altruism.com URL. I've directed the link to the post on the trikeapps site as a temporary workaround. It may break once the problem with the effective-altruism.com URLs is fixed.
Conversation with Holden Karnofsky, Nick Beckstead, and Eliezer Yudkowsky on the "long-run" perspective on effective altruism

After thinking about this later, I noticed that one of my claims was wrong. I said:

> Though I’m not particularly excited about refuges, they might be a good test case. I think that if you had this 5N view, refuges would be obviously dumb but if you had the view that I defended in my dissertation then refuges would be interesting from a conceptual perspective.

But then I ran some numbers and this no longer seemed true. If you assumed a population of 10B, an N of 5, a cost of your refuge of $1B, that your risk of doom was 1%, and that your refuge could cut... (read more)
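For concreteness, the truncated back-of-the-envelope calculation above can be sketched in Python. The population (10B), N (5), refuge cost ($1B), and doom risk (1%) come from the comment; the fraction of doom risk the refuge eliminates is cut off in the excerpt, so the 10% figure below is a placeholder assumption, not Beckstead's number:

```python
# Back-of-the-envelope sketch of the refuge cost-effectiveness calculation.
# All figures except relative_risk_cut come from the comment above.
population = 10e9          # 10 billion people
n_multiplier = 5           # the "5N view": extinction counted as 5x present deaths
refuge_cost = 1e9          # $1B to build the refuge
p_doom = 0.01              # 1% baseline risk of doom
relative_risk_cut = 0.10   # PLACEHOLDER ASSUMPTION: refuge removes 10% of that risk

lives_at_stake = n_multiplier * population                      # 5e10 life-equivalents
expected_lives_saved = p_doom * relative_risk_cut * lives_at_stake
cost_per_life = refuge_cost / expected_lives_saved

print(f"Expected life-equivalents saved: {expected_lives_saved:.2e}")
print(f"Cost per life-equivalent: ${cost_per_life:.2f}")
```

Under these assumptions the refuge comes out to roughly $20 per life-equivalent saved, which illustrates why refuges need not be "obviously dumb" even on the 5N view; the conclusion is quite sensitive to the assumed risk reduction.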

0Owen Cotton-Barratt8y
Thanks for this clarification. After reading the emails I wanted to make exactly this point! I do think that comparing how good saving a life today is with doing something like building bunkers to reduce risk really comes down to an understanding of the world today rather than an understanding of exactly how big the future might be (after you grant that it could be very big). Though choosing 5 as a multiplier looks rather low to me; I'd be happier with something up in the 100-1000 range (and I wouldn't be surprised if my view of the correct figure to use there changes substantially in the future).
A relatively atheoretical perspective on astronomical waste

I think it's an open question whether "even if you want to create lots of happy lives, most of the relevant ways to tackle that problem involve changing the direction in which the future goes rather than whether there is a future." But I broadly agree with the other points. In a recent talk on astronomical waste stuff, I recommended thinking about AI in the category of "long-term technological/cultural path dependence/lock in," rather than the GCR category (though that wasn't the main point of the talk). Link here: http://www.gooddoneright.com/#!nick-beckstead/cxpp, see slide 13.

A relatively atheoretical perspective on astronomical waste

Re 1, yes it is philosophically controversial, but it also speaks to people with a number of different axiologies, as Brian Tomasik points out in another comment. One way to frame it is that it's doing what separability does in my dissertation, but noticing that the astronomical waste argument can run without making assumptions about the value of creating extra people. So you could think of it as running that argument with one less premise.

Re 2, yes it pushes in an unbounded utility function direction, and that's relevant if your preferred resolution of Pascal's Mu... (read more)

0Owen Cotton-Barratt8y
Yes, I really like this work in terms of pruning the premises. Which is why I'm digging into how firm those premises really are (even if I personally tend to believe them). It seems like the principle of scale is in fact implied by separability. I'd guess it's rather weaker, but I don't know of any well-defined examples which accept scale but not separability. I do find your framing of 3 a little suspect. When we have a solid explanation for just why it's great in ordinary situations, and we can see that this explanation doesn't apply in strange situations, it seems like the extrapolation shouldn't get too much weight. Actually most of my weight for believing the principle of scale comes from the fact that it's a consequence of separability. One more way the principle might break down: 4) You might accept the principle for helping people at a given time, but not as a way of comparing between helping people at different times. Indeed in this case it's not so clear most people would accept the small-scale version (probably because intuitions are driven by factors such as improving lives earlier gets more time to have indirect effects acting to improve lives later).
Will we eventually be able to colonize other stars? Notes from a preliminary review

I haven't done a calculation on that, but I agree it's important to consider. Regarding your calculation, a few of these factors are non-independent in a way that favors space colonization. Specifically:

  • Speeding up and slowing down are basically the same, so you should just treat that as one issue.
  • Fitting everything you need into the spaceship and being able to build a civilization when you arrive are very closely related.
  • Having your stuff survive the voyage and being able to build a civilization in a hostile environment are closely related.

I would ... (read more)
