
matt

17 karma · Joined Feb 2016

Bio

I'm Chair of the EA Funds Long Term Future Fund, a cofounder of Bellroy (a B Corp) and founder of Trike Apps (which launched and hosted the original LessWrong.com and EA Forum websites and hosts a number of other EA and Rationalsphere sites). I've been involved in growing the EA movement since 2012, providing regular advice on organizational management and implementation to EA teams. I studied engineering, history and philosophy at the University of Melbourne. I currently split my time between MIRI in Berkeley and my home in Melbourne, Australia.

Comments (5)

(That’s two questions, Peter. I’ll answer the first and Oli the second, in separate comments for discussion threading.)

Do you plan to continue soliciting projects via application? How else do you plan to source projects?

Yes, we do plan to continue soliciting projects via application (applicants can email us at ealongtermfuture@gmail.com). We also all travel in circles that expose us to granting-suitable projects. Closer to our next funding round (February), we will more actively seek applications.

We’re absolutely open to (and all interested in) catastrophic risks other than artificial intelligence. The fund is the Long Term Future Fund, and we believe that catastrophic risks are highly relevant to our long term future.

Trying to infer the motivation for the question, I can add that in my own modelling getting AGI right seems highly important, and is the thing I’m most worried about. But I’m far from certain that another of the catastrophic risks we face won’t be catastrophic enough to threaten our existence, or to delay progress toward AGI until civilisation recovers. I expect that the fund will make grants to non-AGI risk reduction projects.

If the motivation for the question is more how we will judge non-AI projects, see Habryka’s response for a general discussion of project evaluation.

We’re open to both. My personal opinion is that there are some excellent existing orgs in the long term future space, and that they set the effectiveness hurdle smaller projects have to clear to justify funding. But there are many smaller things that should be done that don’t require as much funding as the existing larger orgs, and their smaller funding needs can have a higher expected value than those marginal dollars going to one of the existing orgs. I expect our future funding to split 50–70% larger orgs / 30–50% smaller projects (note Habryka's different estimate of the split).

This is the sort of question I could easily spend a lot of time trying to forge a perfect answer to, so I’m going to instead provide a faster and probably less satisfying first try, and time permitting come back and clarify.

A significant part of the justification for the fund’s existence is that pooling funds from several sources justifies more research per grant than individual donors could justify given the size of their donations (there are other good reasons for the fund to exist; this is one of them). I’d like to have the expert team spend as much time as the size of each grant justifies. Given the background knowledge that the LTF Fund expert team has, the amount of time justified will vary with the size of the grant, how much we already know about the people involved, the project, the field they’re working in, and many other factors.

So (my faster and less satisfying answer): I don’t know how much time we’re going to spend. More if we find granting opportunities from people we know less about, in fields we know less about; less if we find fewer of those opportunities and decide that more funds should go to more established groups.

I can say that while we were happy with the decisions we made, the team keenly felt the time pressure of our last (and first) granting round, and would have liked more time than we had available to look into several of the grant applications we considered (due to what seemed to me to be teething problems that should not apply to future rounds).

Hi Dunja, I'm Matt Fallshaw, Chair of the fund. This response is an attempt to be helpful, but I'm not entirely sure what, in answer to your question, would qualify as a qualification. Perhaps it's relevant that I've been following the field for over 10 years; that I've been an advisor to MIRI (I joined their Board of Directors in 2014, a position I recently had to give up, and currently spend approaching half of my time working on MIRI projects); and that I'm an advisor to BERI. I chose the expert team (in consultation with Marek Duda), and I chose them for, among other things, their intelligence, knowledge and connections (to both advisors and likely grantee orgs or individuals). We absolutely do intend to consult with experts (including Nick and Jonas, our listed advisors, and outside experts) when we don't feel that we have enough knowledge ourselves to properly assess a grant. Our connections span multiple continents, and when we don't feel qualified ourselves we will choose advisors relevant to each grant we consider. … I'm not sure whether that response is going to be satisfying, so feel free to clarify your question and I'll try again.