Hi all - we’re the management team for the Long-Term Future Fund. This post is where we're hosting the AMA for you to ask us about our grant making, as Marek announced yesterday.

We recently made this set of grants (our first since starting to manage the fund), and are planning another set in February 2019. We are keen to hear from donors and potential donors about what kind of grant making you are excited about us doing, what concerns you may have, and anything in between.

Please feel free to start posting your questions now. We will be available here and actively answering questions between roughly 2pm and 6pm PT (with some breaks) on December 20th.

Please ask different questions in separate comments, for discussion threading.

edit: Exciting news! The EA Foundation has just told us that donations to the Long-Term Future Fund are eligible for the matching drive they're currently running. See the link for details on how to get your donation matched.

edit 2: The "official" portion of the AMA has now concluded, but feel free to post more questions; we may be able to respond to them over the coming week or two. Thanks for participating!

Comments

Roughly how much time per month/year does each of the fund managers currently expect to spend on investigating, discussing, and deciding on grant opportunities?

A rough Fermi estimate of how much time I expect to spend on this:

I spent about 12 hours on the investigations for the last round, and about 2 hours a week since then on various smaller tasks and discussions about potential grantees. That is less time than I wanted to spend since we made the first round of grants, so I expect to settle in at something closer to 3-4 hours per week. I expect this to be more time than the other fund members will spend.

In the long run I expect that the majority of the time I spend on this will be in conversations with potential grantees and the other members of the fund, about models of the impact of various types of projects and about long-term future strategy in general. I think I would find those conversations useful independently of my work for the fund, so if you are interested in net costs you might want to weight that time at only 50% or so. Though I am not even sure there is any net cost for me, since I expect the fund will be a good vehicle for me to have more focused conversations and do more focused model-building about the long-term future, and I can’t come up with an obviously better vehicle for doing so.

So, in terms of time spent on work related to this fund, I expect to settle in at around 3-4 hours per week, with peak periods of about 15 hours per week roughly 4 times a year. How much of that is counterfactual, and whether I experience any net cost, is unclear. Probably at least 20% of that time is a dead loss spent on boring logistical tasks and other things I don’t expect to gain much from in the long term, but I also expect to gain at least some benefit from being on the fund and the learning opportunities it will open up for me.
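For readers who want a single annual figure, here is a minimal sketch that annualizes the estimate above. The length of each peak period is not stated, so the two-week figure below is an assumption, as is using 3.5 hours as the midpoint of the 3-4 hours per week range.

```python
# Rough annualization of the time estimate above (a sketch, not an official figure).
baseline_hours_per_week = 3.5   # assumed midpoint of the stated 3-4 hours/week
peak_hours_per_week = 15        # stated peak workload
peak_periods_per_year = 4       # stated number of peak periods
weeks_per_peak_period = 2       # assumption: not stated in the answer

peak_weeks = peak_periods_per_year * weeks_per_peak_period
baseline_weeks = 52 - peak_weeks

total_hours = baseline_weeks * baseline_hours_per_week + peak_weeks * peak_hours_per_week
print(f"~{total_hours:.0f} hours per year")  # ~274 hours under these assumptions
```

Under these assumptions the total comes to roughly 270 hours per year, which is above the 150 hours per manager per year used in a later question and consistent with the note above that this is expected to be more time than the other fund members will spend.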

Is there anything the EA community can do to make it easier for you and the other fund managers to spend your time on grantmaking decisions as you'd like to, especially the executive time spent on the decision-making itself?

I'm thinking of things like CEA allocating more staff or volunteer time to help the EA Funds managers take care of the lower-level 'boring logistical tasks' that are part of their responsibilities, or outsourcing some of the questions you might have to EA Facebook groups so you don't have to spend time on internet searches anyone could do. Stuff like that.

This is the sort of question I could easily spend a lot of time trying to forge a perfect answer to, so I'm instead going to provide a faster and probably less satisfying first try and, time permitting, come back and clarify.

A significant part of the justification for the fund's existence is that pooling money from several sources warrants more research per grant than individual donors could justify given the size of their donations (there are other good reasons for the fund to exist; this is one of them). I'd like the expert team to spend as much time on each grant as its size warrants. Given the background knowledge the LTF Fund expert team has, the amount of time warranted is going to vary with the size of the grant, how much we already know about the people involved, the project, the field they're working in, and many other factors.

So, my faster and less satisfying answer: I don't know how much time we're going to spend. More if we find granting opportunities from people we know less about, in fields we know less about; less if we find fewer of those opportunities and decide that more funds should go to more established groups.

I can say that while we were happy with the decisions we made, the team keenly felt the time pressure of our last (first) granting round, and would have liked more time than we had available (due to what seemed to me to be teething problems that should not apply to future rounds) to look into several of the grant applications we considered.

What do you mean by 'expert team' in this regard? In particular, if you consider yourself or the other fund managers to be experts, would you be willing to qualify or operationalize that expertise?

I ask because when the EA Fund management teams were first announced, there was a question about why there weren't 'experts' in the traditional sense on the team, i.e., what makes you think you'd be as good at managing the Long-Term Future Fund as a Ph.D. in AI, biosecurity, or nuclear security (assuming that when we talk about the 'long-term future' we mostly in practice mean 'existential risk reduction')?

I ask because when the new EA Funds management teams were announced, someone asked the same question, and I couldn't think of a very good answer. So I figure it'd be best to get the answer from you, in case it gets asked of any of us again, which seems likely.

What, generally, are your criteria for evaluating opportunities?

I expect different people on the fund will have quite different answers to this, so here is my perspective:

I don’t expect to score projects or applications on any straightforward rubric, any more than a startup VC should do so for the companies they are investing in. Obviously, things like general competence, past track record, a clear value proposition, and neglectedness matter, but by and large I mostly expect to recommend grants based on my models of what is globally important and on my expectation of whether the plan the grantee proposed will actually work; I guess you could call this “model-driven granting”.

What this means in practice is that I expect the things I look for in a potential grantee to differ quite a bit depending on what precisely they are planning to do with the resources. I expect there will be many applicants who display strong competence and rationality, but who are running on assumptions I don’t share or are trying to solve problems I don’t think are important, and I don’t plan to make recommendations unless my personal models predict that the plan the grantee is pursuing will actually work. This obviously means I will have to invest significant time and resources to actually understand what grantees are trying to achieve, which I am currently planning to make room for.

I can imagine some exceptions to this, though. I think we will run across potential grantees who are asking for money mostly to increase their own slack, and who have a past track record of doing valuable work. I am quite open to grants like this, think they can be quite valuable, and expect to give out multiple grants in this space (barring logistical problems with doing so). In that case, I expect to mostly ask myself whether additional slack and freedom would make a large difference in that person’s output, which I expect will again differ quite a bit from person to person.

One other type of grant that I am open to is rewards for past impact. I think rewarding people for past good deeds is quite important for setting up long-term incentives, and evaluating whether an intervention had a positive impact is obviously a lot easier after the project is completed than before. In this case I again mostly expect to rely heavily on my personal models of whether the completed project had a significant positive impact, and will base my recommendations on that estimate.

I think this approach will sadly make it harder for potential grantees to evaluate whether I am likely to recommend them for a grant, but I think it is less likely to give rise to various goodharting and prestige-optimization problems, and it will allow me to make much more targeted grants than the alternative of a more rubric-driven approach. It’s also really the only approach that I expect will cause me to learn what interventions do and don’t work in the long run, by exposing my models to the real world and seeing whether my concrete predictions of how various projects will go come true.

This is also broadly representative of how I think about evaluating opportunities.

I also think this sort of question might be useful to ask on a more individual basis - I expect each fund manager to have a different answer to this question that informs what projects they put forward to the group for funding, and which projects they'd encourage you to inform them about.

This post contains an extensive discussion on the difficulty of evaluating AI charities because they do not share all of their work due to info hazards (in the "Openness" section as well as the MIRI review). Will you have access to work that is not shared with the general public, and how will you approach evaluating research that is not shared with you or not shared with the public?

We won’t generally have access to work that isn’t shared with the general public, but may incidentally have access to such work through individual fund members having private conversations with researchers. Thus far, we’ve evaluated organizations based on the quality of their past research and the quality of their team.

We may also evaluate private research by evaluating the quality of its general direction, and the quality of the team pulling it off. For example, I think the discourse around AI safety could use a lot of deconfusion. I also recognize that such deconfusion could be an infohazard, but nevertheless want such research to be carried out, and think MIRI is one of the most competent organizations around to do it.

In the event that our decision for whether to fund an organization hinges on the content of their private research, we’ll probably reach out to them and ask them if they’re willing to disclose it.

Are there any organisations you investigated and found promising, but concluded that they didn't have much room for extra funding?

In the last grant round, AI Impacts was an organization whose work I was excited about, but that currently did not seem to have significant room for extra funding. (If anyone from AI Impacts disagrees with this, please comment and let me know!)

Under what conditions would you consider making a grant directed towards catastrophic risks other than artificial intelligence?

We’re absolutely open to (and all interested in) catastrophic risks other than artificial intelligence. The fund is the Long-Term Future Fund, and we believe that catastrophic risks are highly relevant to our long-term future.

Trying to infer the motivation for the question, I can add that in my own modelling, getting AGI right seems highly important and is the thing I’m most worried about, but I’m far from certain that another of the catastrophic risks we face won’t be catastrophic enough to threaten our existence, or to delay progress toward AGI until civilisation recovers. I expect that the fund will make grants to non-AGI risk reduction projects.

If the motivation for the question is more about how we will judge non-AI projects, see Habryka’s response for a general discussion of project evaluation.

Do you plan to continue soliciting projects via application? How else do you plan to source projects? What do you think distinguishes you from EA Grants?

What do you think distinguishes you from EA Grants?

I think we will have significant overlap with EA Grants in the long-term future domain, and I don’t think that’s necessarily a bad thing. I think EA Grants is trying to do something very broad, and having more people with different models in this space is valuable; I expect it will significantly increase the number of good grants that get awarded.

I do think that the EA Grants team has a comparative advantage at funding small grants to individuals (we can’t make grants below $10k), and at funding applications that are more specifically related to EA movement building or to personal skill development that enables later direct EA work.

In general, if there is overlap, I would encourage people to apply to EA Grants first, though applying to both also seems fine; in that case we would coordinate with the EA Grants team about the application and try to determine who is better suited to evaluate it.

I also expect that a large part of our granting work will be proactive, by trying to encourage people to start specific projects, or encourage specific existing organizations and projects to scale up, which I expect to not overlap that much with the work that EA Grants is doing.

(That’s two questions, Peter. I’ll answer the first and Oli the second, in separate comments for discussion threading.)

Do you plan to continue soliciting projects via application? How else do you plan to source projects?

Yes, we do plan to continue soliciting projects via application (applicants can email us at ealongtermfuture@gmail.com). We also all travel in circles that expose us to granting-suitable projects. Closer to our next funding round (February) we will more actively seek applications.

I'd be interested to know, if it can be disclosed publicly, whether non-advisor team members also control alternative pots of discretionary funding.

I am in contact with a couple of other funding sources who would take recommendations from me seriously, but this fund is the place I have most direct control over.

Both Matts are long-time earn-to-givers, so they each make grants/donations from their own earnings as well as working with this fund.

I personally fund some things that are too small and generally much too weird for anyone else to fund, but besides that, I don’t control any alternative pots of discretionary funding.

(Can't speak for everyone, so just answering for me) I do not control alternative pots of discretionary funding, besides the budget of the LessWrong organization. I can imagine running into some overlap of things I might want to fund with the LessWrong budget and things I would want to recommend a grant for, mostly in the space of rewarding research output on LessWrong and the EA Forum, but overall I expect the overlap to be quite minor.

I apologise that these questions are late, but hope based on edit 2 above that they can sneak in under the wire. I have three questions.

1. I believe that EA Funds has been running for a while now, and that the activities and donations of the Giving What We Can Trust were transferred to EA Funds on or around 20 June 2017. I note that the combined total of grants to date and fund balance for the LTFF is around $1.18m. That doesn't seem like much money. What do you anticipate annual donations being over the next few years?

2. The fund has 5 managers and 2 advisers. If the managers spend 150 hours a year each on average (consistent with Habryka's response below), that will be 750 person-hours per year. Valuing their time at $300/hour suggests the cost of running the LTFF might be $225k per annum, which might be a quarter of the amount being granted. How is this being funded? Are the fund managers being paid, and if so, by whom? If they're donating their time, are they happy that this is a better use of their time than ETG?

3. Why did LTFF make no grants between March 2017 and July 2018?

What are your thoughts on funding smaller "start-up" organizations (e.g., Ozzie's project) versus larger "established" organizations (e.g., MIRI)?

We’re open to both. My personal opinion is that there are some excellent existing orgs in the long-term future space, and that they set the effectiveness hurdle smaller projects have to clear to justify funding. But there are many smaller things that should be done that don’t require as much funding as the existing larger orgs, and those smaller funding needs can have a higher expected value than marginal dollars going to one of the existing orgs. I expect our future funding to split 50-70% to larger orgs and 30-50% to smaller projects (note Habryka's different estimate of the split).

Open to both. We funded both of your examples in our last round. I generally have a sense that funding for smaller projects is currently undersupplied, and I am more excited about our grants to small projects than our grants to more established ones, though I do think there are a lot of excellent organizations in the space. Overall I expect we will have about a 50/50 funding split between smaller start-up projects and larger organizations.

What new research would be helpful to finding and/or evaluating opportunities?

I have a bunch of thoughts, but find it hard to express them without a specific prompt. In general, I find a lot of AI Alignment research valuable, since it helps me evaluate other AI Alignment research, but I guess that’s kind of circular. I haven’t found most broad cause-prioritization research particularly useful for me, but would probably find research into better decision making, as well as into the history of science, useful for helping me make better decisions (i.e. rationality research).

I’ve found Larks' recent AI Alignment literature and organization review quite useful, so more of that seems great. I’ve also found some of Shahar Avin’s thoughts on scientific funding interesting, but don’t really know whether they’re useful. I generally think a lot of Bostrom’s writing has been very useful to me, so more of that type seems good, though I am not sure how well others can do the same.

Not sure how useful this is or how much this answers your question. Happy to give concrete comments on any specific research direction you might be interested in getting my thoughts on.

Hi! Do you happen to know about the current AI Impacts hiring process?

I don’t think any of us have any particular expertise on this question. You could try sending an application on their jobs page.