All of AdamGleave's Comments + Replies

This is an important point. There's a huge demand for research leads in general, but the people hiring & funding often have pretty narrow interests. If your agenda is legibly exciting to them, then you're in a great position. Otherwise, there can be very little support for more exploratory work. And I want to emphasize the legible part here: you can do something that's great & would be exciting to people if they understood it, but novel research is often time-consuming to understand, and these are time-constrained people who will not want to invest... (read more)

As someone who recently set up an AI safety lab, I've certainly had success rates on my mind. It's challenging, but I think the reference class we're in might be better than it seems at first.

I think a big part of what makes succeeding as a for-profit tech start-up challenging is that so many other talented individuals are chasing the same good ideas. For every Amazon there are 1000s of failed e-commerce start-ups. Clearly, Amazon did something much better than the competition. But what if Amazon didn't exist? What if there was a company th... (read more)

Thanks Lucretia for sharing your experience. This cannot have been an easy topic to write about, and I'm deeply sorry these events happened to you. I really appreciated the clarity of the post and found the additional context beyond the TIME article to be extremely valuable.

I liked your suggestions for people to take on an individual level. Building on the idea of back channel references, I'm wondering if there's value in having a centralised place to collect and aggregate potential red flags? Personal networks can only go so far, and it's often useful to d... (read more)

5
Lucretia
10mo
Yes, there are some efforts to build infra like this! I'm not sure any have launched yet. Some people have been looking into blockchain as a decentralized way to store and add information. There are also some great FB groups, but they frequently get shut down by some people who report them to moderators. I do think the Bay Area startup world should have its equivalent to CEA, but it would require time, dedication, smart incentive design, and funding. Right now, Silicon Valley seems too libertarian/internally competitive for this to happen by default.

For people not familiar with the UK, the London metropolitan area houses 20% of the UK's population, and a disproportionate share of the economic and research activity. The London-Cambridge-Oxford triangle in particular is by far the research powerhouse of the country, although there are of course some good universities elsewhere (e.g. Durham, St Andrews in the north). Unfortunately, anywhere within an hour's travel of London is going to be expensive. Although I'm sure you can find somewhat cheaper options than Oxford, I expect the cost savings will be mod... (read more)

7
projectionconfusion
1y
You can get to Luton, Milton Keynes, Stevenage or a number of other small London satellite towns in less than 2 hours from Oxford, and less than 1 from central London. These are all pretty banal collections of concrete buildings, but would allow you to buy a venue for a fraction of the cost. It seems hard to escape the conclusion that this decision was mainly made based on a Manor house in Oxford being more aesthetically appealing than a concrete office building on an industrial estate or small town centre.
3
lastmistborn
1y
Thank you, this is a good part of what I wanted to know.

A 30-person office could not house the people attending, so you'd need to add the cost of a hotel/AirBnB/renting nearby houses if going down that route. Even taking into account that commercial real estate is usually more expensive than residential, I'd expect the attendee accommodation cost to be greater than the office rental, simply because people need more living space than they do conference space.

Additionally in my experience retreats tend to go much better if everyone is on-site in one location: it encourages more spontaneous interaction outside of the... (read more)

8
Caro
1y
I agree with Adam here that it's better to host all attendees in one place during retreats. However, I am not sure of the number of bedrooms that Wytham has. It could be that a lot of attendees have to rent bedrooms outside of Wytham anyway, which makes the deal worse.

Note: I don't see any results for FTX Foundation or FTX Philanthropy at https://apps.irs.gov/app/eos/determinationLettersSearch. So it's possible it's not a 501(c)(3) (although it could still be a non-profit corporation).

3
abrahamrowe
1y
I've noticed that it takes new orgs up to a year to show up in that search, so it might also be that they've applied for or gotten the status recently (given that the FTX stuff was so new). A Delaware corporation search suggests they are registered as a nonprofit corporation in Delaware - https://icis.corp.delaware.gov/ecorp/entitysearch/NameSearch.aspx (you have to search by name).

Disclaimer: I do not work for FTX, and am basing this answer off publicly available information, which I have not vetted in detail.

Nick Beckstead in the Future Fund launch post described several entities (FTX Foundation Inc, DAFs) from which funds will be disbursed: https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1?commentId=qtJ7KviYxWiZPubtY I would expect these entities to be sufficiently capitalized to provide continuity of operations, although presumably it'll have a major impact on their long-run scale.

IANAL but... (read more)

Hi Aaron, thanks for highlighting this. We inadvertently published an older version of the write-up before your feedback -- this has been corrected now. However, there are still a number of areas in the revised version which I expect you'll take issue with, so I wanted to share a bit of perspective on this. I think it's excellent that you brought up this disagreement in a comment, and would encourage people to form their own opinion.

First, for a bit of context, my grant write-ups are meant to accurately reflect my thought process, including any reservatio... (read more)

6
aaronhamlin
2y
Hi Adam, I think your response fairly addresses the concerns I initially raised, and I appreciate your effort there. Thank you for the delicate response.

You could argue from "flash of insight" moments and scientific paradigm shifts generally giving rise to sudden progress. We certainly know contemporary techniques are vastly less sample- and compute-efficient than the human brain -- so there does exist some learning algorithm much better than what we have today. Moreover, there probably exists some learning algorithm that would give rise to AGI on contemporary (albeit expensive) hardware. For example, ACX notes there's a supercomputer that can do 10^17 FLOPS vs the estimated 10^16 needed for a human brain. Thes... (read more)

I agree with a lot of this post. In particular, getting more precision in timelines is probably not going to help much with persuading most people, or in influencing most of the high-level strategic questions that Miles mentions. I also expect that it's going to be hard to get much better predictions than we have now: much of the low-hanging fruit has been plucked. However, I'd personally find better timelines quite useful for prioritizing which problems to work on in my technical research agenda. I might be in a minority here, but I suspect not that small a one (s... (read more)

8
Guy Raveh
2y
Is there an argument for a <10 years timeline that doesn't go directly through the claim that it's going to be achieved in the current paradigm?

Thanks for the post! This seems like a clearly important and currently quite neglected area and I'd love to see more work on it.

My current hot-take is that it seems viable to make AGI research labs a sufficiently hardened target that most actors cannot exploit them. But I don't really see a path to preventing the most well-resourced state actors from at least exfiltrating source code. There are just so many paths to this: getting insiders to defect, supply chain attacks, etc. Because of this, I suspect it'll be necessary to get major state actors to play ball... (read more)

3
Jeffrey Ladish
2y
I think it's an open question right now. I expect it's possible with the right resources and environment, but I might be wrong. I think it's worth treating as an untested hypothesis ( that we can secure X kind of system for Y application of resources ), and to try to get more information to test the hypothesis. If AGI development is impossible to secure, that cuts off a lot of potential alignment strategies. So it seems really worth trying to find out if it's possible.

Making bets on new ambitious projects doesn't necessarily seem at odds with frugality: you can still execute on them in a lean way; some things just really do take a big CapEx. Granted, whether Google or any major tech company really does this is debatable, but I do think they tend to at least try to instill it, even if there is some inefficiency, e.g. due to principal-agent problems.

Thanks for writing this post; this is an area I've also sometimes felt concerned about, so it's great to see some serious discussion.

A related point that I haven't seen called out explicitly is that monetary costs are often correlated with other more significant, but less visible, costs such as staff time. While I think the substantial longtermist funding overhang really does mean we should spend more money, I think it's still very important that we scrutinize where that money is being spent. One example that I've seen crop up a few times is retreats or othe... (read more)

3
evhub
2y
Google, by contrast, is notoriously the opposite—for example emphasizing just trying lots of crazy, big, ambitious, expensive bets (e.g. their "10x" philosophy). Also see how Google talked about frugality in 2011.

I think it's important to distinguish people's expectations and the reality of what gets rewarded. Both matter: if people expect something to be unrewarding, they won't do it even if it would be appreciated; and perhaps even worse, if people expect to get rewarded for something but in fact there is limited support, they may waste time going down a dead end.

Another axis worth thinking about is what kind of rewards are given. The post prompts for social rewards, but I'm not sure why we should focus on this specifically: things like monetary compensation, wor... (read more)

3
MattBall
2y
Usually when I see a comment this long, it is someone trying to show off / hijack a thread. But this comment is actually very useful. Thanks Adam!
2
Jan-Willem
2y
Wow! Spot-on Adam, I wanted to respond to this question but no need to anymore after reading this

That being said, we might increase our funding threshold if we learn that few grants have been large successes, or if more funders are entering the space.

My intuition is that more funders entering the space should lower your bar for funding, as it'd imply there's generally more money in this space going after the same set of opportunities. I'm curious what the reasoning behind this is, e.g. unilateralist curse considerations?

4
MichaelA
2y
My guess is that it's mostly that more funders being in the space increases the chance that good things get funded even if EAIF doesn't fund them, thus reducing the cost of false negatives (i.e., EAIF rejecting things that in reality should've been funded), thus reducing the cost of raising the bar. (But that's just a guess.)

First of all, I'm sorry to hear you found the paper so emotionally draining. Having rigorous debate on foundational issues in EA is clearly of the utmost importance. For what it's worth when I'm making grant recommendations I'd view criticizing orthodoxy (in EA or other fields) as a strong positive so long as it's well argued. While I do not wholly agree with your paper, it's clearly an important contribution, and has made me question a few implicit assumptions I was carrying around.

The most important updates I got from the paper:

  1. Put less weight on techn
... (read more)
5
anonymousEA
2y
I think you switched the two by accident. Otherwise an excellent comment; even if I disagree with most of it, have an updoot.

Since writing this post, I have benefited both from 4 years of hindsight and from significantly more grantmaking experience, with just over a year at the Long-Term Future Fund. My main updates:

  • Exploit networks: I think small individual donors are often best off donating to people in their network that larger donors don't have access to. In particular, I'm ~70% confident it would have been better for me to wait 1-3 years and donate the money to opportunities as and when they came up. For example, there have been a few cases where something would help CHAI but c
... (read more)

Thanks for raising this. I think we communicated the grant decision to Po-Shen in late March/early April, when the pandemic was still significantly more active in the US. I was viewing this as "last chance to trial this idea", and I think I still stand by that given what we knew at the time, although I'd be less excited by this with the benefit of hindsight (the pandemic has developed roughly along my median expectations, but that still means I put significant credence on case rates being much higher than they currently are.)

In general our grant write-ups ... (read more)

Sorry for the (very) delayed reply here. I'll start with the most important point first.

But compared to working with a funder who, like you, wants to solve the problem and make the world be good, any of the other institutions mentioned including academia look extremely misaligned.

I think overall the incentives set up by EA funders are somewhat better than run-of-the-mill academic incentives, but I think the difference is smaller than you seem to believe, and I think we're a long way from cracking it. I think this is something we can get better at, but ... (read more)

If you mean that once you are on the Junior Faculty track in CS, you don't really need to worry about well-received publications, this is interesting and doesn't line up with my models. Can you think of any examples which might help illustrate this?

To clarify, I don't think tenure is guaranteed, more that there's a significant margin of error. I can't find much good data on this, but this post surveys statistics gathered from a variety of different universities, and finds anywhere from 65% of candidates getting tenure (Harvard) to 90% (Cal State, UBC). Inf... (read more)

Publishing good papers is not the problem, deluding yourself is.

Big +1 to this. Doing things you don't see as a priority but which other people are excited about is fine. You can view it as kind of a trade: you work on something the research community cares about, and the research community is more likely to listen to (and work on) things you care about in the future.

But to make a difference you do eventually need to work on things that you find impactful, so you don't want to pollute your own research taste by implicitly absorbing incentives or others' opinions unquestioningly.

You approximately can't get directly useful things done until you have tenure.

At least in CS, the vast majority of professors at top universities in tenure-track positions do get tenure. The hardest part is getting in. Of course all the junior professors I know work extremely hard, but I wouldn't characterize it as a publication rat race. This may not be true in other fields and outside the top universities.

The primary impediment to getting things done that I see is that professors are also doing administration and teaching, and that remains a problem post-tenure.

3
eca
3y
This is interesting and also aligns with my experience, depending on exactly what you mean!
* If you mean that it seems less difficult to get tenure in CS (thinking especially about deep learning) than the vibe I gave (which is, again, speaking about the field I know, bioeng), I buy this strongly. My suspicion is that this is because, relative to bioengineering, there is a bunch of competition for top research talent by industrial AI labs. It seems like even the profs who stay in academia also have joint appointments in companies, for the most part. There isn't an analogous thing in bio? Pharma doesn't seem very exciting and to my knowledge doesn't have a bunch of PI-driven basic research roles open. Maybe bigtech-does-bio labs like Calico will change this in the future? IMO this doesn't change my core point, because you will need to change your agenda some, but less than in biology.
* If you mean that once you are on the Junior Faculty track in CS, you don't really need to worry about well-received publications, this is interesting and doesn't line up with my models. Can you think of any examples which might help illustrate this? I'd be looking for, e.g., recently appointed CS faculty at a good school pursuing a research agenda which gets quite poor reception/crickets, but this faculty member is still given tenure. Possibly there are some examples in AI safety before it was cool? Folks that come to mind mostly had established careers. Another signal would be less of the notorious "tenure switch", where people suddenly change their research direction. I have not verified this, but there is a story told about a Harvard Econ professor who did a bunch of centrist/slightly conservative mathematical econ and switched to left-leaning labor economics after tenure.
2
Adrià Garriga Alonso
3y
I don't see how this is a counterargument. Do you mean to say that, once you are on track to tenure, you can already start doing the high-impact research? It seems to me that, if this research is too diverged from the academic incentives, then our hypothetical subject may become one of these rare cases of CS tenure-track faculty that does not get tenure.

One important factor of a PhD that I don't see explicitly called out in this post is what I'd describe as "research taste": how to pick which problems to work on. I think this is one of, if not the, most important parts of a PhD. You can only get so much faster at executing routine tasks or editing papers. But the difference between the most important and the median research problem can be huge.

Andrej Karpathy has a nice discussion of this:

When it comes to choosing problems you’ll hear academics talk about a mystical sense of “taste”. It’s a real thing. When y

... (read more)
1
eca
3y
Taste is huge! I was trying to roll this under my "Process" category, where taste manifests in choosing the right project, choosing the right approach, choosing how to sequence experiments, etc. Alas, not a lossless factorization. These exercises look quite neat, thanks for sharing!

You can get research taste by doing research at all; it doesn't have to be a PhD. You may argue that PIs have very good research taste that you can learn from. But their taste is geared towards satisfying academic incentives! It might not be good taste for what you care about. As Chris Olah points out, "Your taste is likely very influenced by your research cluster".

Thanks for writing this post, it's always useful to hear people's experiences! For others considering a PhD, I just wanted to chime in and say that my experience in a PhD program has been quite different (4th year PhD in ML at UC Berkeley). I don't know how much this is the field, program or just my personality. But I'd encourage everyone to seek a range of perspectives: PhDs are far from uniform.

I hear the point about academic incentives being bad a lot, but I don't really resonate with it. A summary of my view is that incentives are misaligned everywhere... (read more)

5
eca
3y
This is an excellent comment, thanks Adam. A couple of impressions:
* Totally agree there are bad incentives lots of places.
* I think figuring out what existing institutions have incentives that best serve your goals, and building a strategy around those incentives, is a key operation. My intent with this article was to illustrate some of that type of thinking within planning for grad school. If I was writing a comparison between working in academia and other possible ways to do research, I would definitely have flagged the many ways academic incentives are better than the alternatives! I appreciate you doing that, because it's clearly true and important.
* In that more general comparison article, I think I may have still cautioned about academic incentives in particular. Because they seem, for lack of a better word, sneakier? Like, knowing you work at a for-profit company makes it really transparently clear that your manager's (or manager's manager's) incentives are different from yours, if you want to do directly impactful research. Whereas I've observed folks, in my academic niche of biological engineering, behave as if they believe a research project to be directly good when I (and others) can't see the impact proposition, and the behavior feels best explained by publishing incentives? In more extreme cases, people will say that project A is less important to prioritize than project B because B is more impactful, but will invest way more in A (which just happens to be very publishable). I'm sure I'm also very guilty of this, but it's easier to recognize in other people :P
- I'm primarily reporting on biology/bioengineering/bioinformatics academia here, though I consume a lot of deep learning academia's output. FWIW, my sense is there is actually a difference in the strength and type of incentives between ML and biology, at least. From talking with friends in DL academic labs, it seems like there is still a pressure to publish in conferences but there are also lots of

Thanks for picking up the thread here Asya! I think I largely agree with this, especially about the competitiveness in this space. For example, with AI PhD applications, I often see extremely talented people get rejected who I'm sure would have got an offer a few years ago.

I'm pretty happy to see the LTFF offering effectively "bridge" funding for people who don't quite meet the hiring bar yet, but I think are likely to in the next few years. However, I'd be hesitant about heading towards a large fraction of people working independently long-term. I think t... (read more)

8
Linda Linsefors
3y
I claim we have proof of concept. The people who started the existing AI Safety research orgs did not have AI Safety mentors. Current independent researchers have more support than they had. In a way an org is just a crystallized collaboration of previously independent researchers.

I think that there are some PR reasons why it would be good if most AI Safety researchers were part of academia or other respectable orgs (e.g. DeepMind). But I also think it is good to have a minority of researchers who are disconnected from the particular pressures of that environment. However, being part of academia is not the same as being part of an AI Safety org. MIRI people are not part of academia, and someone doing AI Safety research as part of their PhD in a "normal" (not AI Safety focused) PhD program is sorta an independent researcher.

We are working on that. I'm not optimistic about current orgs keeping up with the growth of the field, and I don't think it is healthy for the career path to be too competitive, since this will lead to Goodharting on career incentives. But I do think a looser structure, built on personal connections rather than formal org employment, can grow in a much more flexible way, and we are experimenting with various methods to make this happen.

This is an important question. It seems like there's an implicit assumption here that the highest-impact path for the fund to take is to make the grants which the inside view of the fund managers thinks are highest impact, regardless of whether we can explain the grant. This is a reasonable position -- and thank you for your confidence! -- however, I think the fund being legible does have some significant advantages:

  1. Accountability generally seems to improve organisations' functioning. It'd be surprising if the LTFF was a complete exception to this, and legibility seems n
... (read more)
2
Linch
3y
To be clear, I think this is not my all-things-considered position. Rather, I think this is a fairly significant possibility, and I'd favor an analogue of Open Phil's 50/40/10 rule (or something a little more aggressive) over, e.g., whatever the socially mediated equivalent of full discretionary control by the specific funders would be. This seems like a fine compromise that I'm in the abstract excited about, though of course it depends a lot on implementation details. This is really good to hear!
8
MaxRa
3y
Re: Accountability
I'm not very familiar with the funds, but wouldn't retrospective evaluations like Linch's be more useful than legible reasoning? I feel like the grantees and institutions like EA funds with sufficiently long horizons want to stay trusted actors in the longer run and so are sufficiently motivated to be trusted with some more inside-view decisions.
* trust from donors can still be gained by explaining a meaningful fraction of decisions
* less legible bets may have higher EV
* I imagine funders will always be able to meaningfully explain at least some factors that informed them, even if some factors are hard to communicate
* some donors may still not trust judgement sufficiently
* maybe funded projects have measurable outcomes only far in the future (though probably there are useful proxies on the way)
* evaluation of funded projects takes effort (but I imagine you want to do this anyway)
3
Habryka
3y
(Looks like this sentence got cut off in the middle) 

Could you operationalize "more accurately" a bit more? Both sentences match my impression of the fund. The first is more informative as to what our aims are, the second is more informative as to the details of our historical (and immediate future) grant composition.

My sense is that the first will give people an accurate predictive model of the LTFF in a wider range of scenarios. For example, if next round we happen to receive an amazing application for a new biosecurity org, the majority of the round's funding could go on that. The first sentence would pre... (read more)

This is a good point, and I do think having multiple large funders would help with this. If the LTFF's budget grew enough I would be very interested in funding scalable interventions, but it doesn't seem like our comparative advantage now.

I do think possible growth rates vary a lot between fields. My hot take is new research fields are particularly hard to grow quickly. The only successful ways I've seen of teaching people how to do research involve apprenticeship-style programs (PhDs, residency programs, learning from a team of more experienced researcher... (read more)

5
Ozzie Gooen
3y
I agree that research organizations of the type that we see are particularly difficult to grow quickly. My point is that we could theoretically focus more on other kinds of organizations that are more scalable. I could imagine there being more scalable engineering-heavy or marketing-heavy paths to impact on these problems. For example, setting up an engineering/data organization to manage information and metrics about bio risks. These organizations might have rather high upfront costs (and marginal costs), but are ones where I could see investing $10-100mil/year if we wanted.

Right now it seems like our solution to most problems is "try to solve it with experienced researchers", which seems to be a tool we have a strong comparative advantage in, but not the only tool in the possible toolbox. It is a tool that's very hard to scale, as you note (I know of almost no organizations that have done this well).

Separately, I just want to flag that I think I agree, but also feel pretty bad about this. I get the impression that for AI many of the grad school programs are decent enough, but for other fields (philosophy, some of Econ, things bio related), grad school can be quite long-winded, demotivating, occasionally the cause of serious long-term psychological problems, and often distracting or actively harmful for alignment. It definitely feels like we should eventually be able to do better, but it might be a while.

What types/lines of research do you expect would be particularly useful for informing the LTFF's funding decisions?

I'd be interested in better understanding the trade-off between independent vs established researchers. Relative to other donors we fund a lot of independent research. My hunch here is that most independent researchers are less productive than if they were working at organisations -- although, of course, for many of them that's not an option (geographical constraints, organisational capacity, etc). This makes me place a bit of a higher bar ... (read more)

About 40%. This includes startups that later get acquired, where the parent company would not have been the first to develop transformative AI if the acquisition had not taken place. I think this is probably my modal prediction: the big tech companies are effectively themselves huge VCs, and their infrastructure provides a comparative advantage over a startup trying to do it entirely solo.

I think I put around 40% on it being a company that does already exist, and 20% on "other" (academia, national labs, etc).

Conditioning on transformative AI being develo... (read more)

6
Linch
3y
Thanks a lot, really appreciate your thoughts here!

I think the long-termist and EA communities seem too narrow on several important dimensions:

  • Methodologically there are several relevant approaches that seem poorly represented in the community. A concrete example would be having more people with a history background, which seems critical for understanding long-term trends. In general I think we could do better interfacing with the social sciences and other intellectual movements.

    I do think there are challenges here. Most fields are not designed to answer long-term questions. For example, history is ofte

... (read more)
7
Jonas V
3y
(As mentioned in the original post, I'm not a Fund manager, but I sometimes advise the LTFF as part of my role as Head of EA Funds.)
I agree with Adam and Asya. Some quick further ideas off the top of my head:
* More academic teaching buy-outs. I think there are likely many longtermist academics who could get a teaching buy-out but aren't even considering it.
* Research into the long-term risks (and potential benefits) of genetic engineering.
* Research aimed at improving cause prioritization methodology. (This might be a better fit for the EA Infrastructure Fund, but it's also relevant to the LTFF.)
* Open access fees for research publications relevant to longtermism, such that this work is available to anyone on the internet without any obstacles, plausibly increasing readership and citations.
* Research assistants for academic researchers (and for independent researchers if they have a track record and there's no good organization for them).
* Books about longtermism-relevant topics.
4
AmritSidhu-Brar
3y
That's really interesting to read, thanks very much! (Both for this answer and for the whole AMA exercise)

It's actually pretty rare that we've not been able to fund something; I don't think this has come up at all while I've been on the fund (2 rounds), and I can only think of a handful of cases before.

It helps that the fund knows some other private donors we can refer grants to (with the applicant's permission), so in the rare cases something is out of scope, we can often still get it funded.

Of course, people who know we can't fund them because of the fund's scope may choose not to apply, so the true proportion of opportunities we're missing may be higher. A big c... (read more)

A common case is people who are just shy to apply for funding. I think a lot of people feel awkward about asking for money. This makes sense in some contexts - asking your friends for cash could have negative consequences! And I think EAs often put additional pressure on themselves: "Am I really the best use of this $X?" But of course as a funder we love to see more applications: it's our job to give out money, and the more applications we have, the better grants we can make.

Another case is people (wrongly) assuming they're not good enough. I think a lot of people underestimate their abilities, especially in this community. So I'd encourage people to just apply, even if you don't think you'll get it.

Do you feel that someone who had applied unsuccessfully and then re-applied for a similar project (but perhaps having gathered more evidence) would be more likely, less likely, or equally likely to get funding than someone submitting an identical application to that second one, but who had chosen not to apply the first time and so had never been rejected?

It feels easy to get into the mindset of "Once I've done XYZ, my application will be stronger, so I should do those things before applying", and if that's a bad line of reasoning to use (which I suspect it might be), some explicit reassurance might result in more applications.

I've already covered in this answer areas where we don't make many grants but I would be excited about us making more grants. So in this answer I'll focus on areas where we already commonly make grants, but would still like to scale this up further.

I'm generally excited to fund researchers when they have a good track record, are focusing on important problems and when the research problem is likely to slip through the cracks of other funders or research groups. For example, distillation style research, or work that is speculative or doesn't neatly fit into... (read more)

These are very much a personal take, I'm not sure if others on the fund would agree.

  1. Buying extra time for people already doing great work. A lot of high-impact careers pay pretty badly: many academic roles (especially outside the US), some non-profit and think-tank work, etc. There's certainly diminishing returns to money, and I don't want the long-termist community to engage in zero-sum consumption of Veblen goods. But there's also plenty of things that are solid investments in your productivity, like having a comfortable home office, a modern computer

... (read more)

Of course there's lots of things we would not want to (or cannot) fund, so I'll focus on things which I would not want to fund, but which someone reading this might have been interested in supporting or applying for.

  1. Organisations or individuals seeking influence, unless they have a clear plan for how to use that influence to improve the long-term future, or I have an exceptionally high level of trust in them

    This comes up surprisingly often. A lot of think-tanks and academic centers fall into this trap by default. A major way in which non-profits sustain

... (read more)
6
Habryka
3y
I agree that both of these are among the top 5 things that I've encountered that make me unexcited about a grant.

Yes, I think we're definitely limited by our application pool, and it's something I'd like to change.

I'm pretty excited about the possibility of getting more applications. We've started advertising the fund more, and in the latest round we got the highest number of applications we rated as good (score >= 2.0, where 2.5 is the funding threshold). This is about 20-50% more than the long-term trend, though it's a bit hard to interpret (our scores are not directly comparable across time). Unfortunately the percentage of good applications also dropped this r... (read more)

1
JackM
3y
Thanks for this reply. Active grant-making sounds like an interesting idea!

The cop-out answer of course is to say we'd grow the fund team or, if that isn't an option, we'd all start working full-time on the LTFF and spend a lot more time thinking about it.

If there's some eccentric billionaire who will only give away their money right now to whatever I personally recommend, then off the top of my head:

  1. For any long-termist org who (a) I'd usually want to fund at a small scale; and (b) whose leadership's judgement I'd trust, I'd give them as much money as they can plausibly make use of in the next 10 years. I expect that even org

... (read more)
9
Linch
3y
What's your all-things-considered view on the probability that the first transformative AI (defined by your lights) will be developed by a company that, as of December 2020, either a) does not exist or b) has not gone through Series A? (Don't take too much time on this question, I just want to see a gut check plus a few sentences if possible.)

Do the LTFF fund managers make forecasts about potential outcomes of grants?

To add to Habryka's response: we do give each grant a quantitative score (on a scale from -5 to +5, where 0 is zero impact). This obviously isn't as helpful as a detailed probabilistic forecast, but I think it does give a lot of the value. For example, one question I'd like to answer from retrospective evaluation is whether we should be more consensus-driven or fund anything that at least one manager is excited about. We could address this by scrutinizing past grants that had a high varianc... (read more)
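To make the kind of retrospective analysis described above concrete, here is a minimal sketch (Python, with entirely hypothetical grant names and manager scores, not real LTFF data) of flagging the past grants where fund managers' scores diverged the most:

```python
import statistics

# Hypothetical data: each grant's scores from the individual fund managers,
# on the -5 to +5 scale described above (0 = zero impact).
grant_scores = {
    "grant_a": [3.0, 2.5, -1.0, 4.0],
    "grant_b": [2.0, 2.5, 2.0, 3.0],
    "grant_c": [-2.0, 4.5, 1.0, 0.5],
}

# Rank grants by how much managers disagreed (variance of their scores).
# High-variance grants are the ones to scrutinize when comparing a
# consensus-driven policy against "fund if any one manager is excited".
by_disagreement = sorted(
    grant_scores.items(),
    key=lambda item: statistics.pvariance(item[1]),
    reverse=True,
)

for name, scores in by_disagreement:
    print(
        f"{name}: mean score {statistics.mean(scores):+.2f}, "
        f"manager disagreement (variance) {statistics.pvariance(scores):.2f}"
    )
```

Comparing realized outcomes for the high-variance grants against the low-variance ones would be one rough way to test which funding rule performs better.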

Historically I think the LTFF's biggest issue has been insufficiently clear messaging, especially for new donors. For example, we received feedback from numerous donors in our recent survey that they were disappointed we weren't funding interventions on climate change. We've received similar feedback from donors surprised by the number of AI-related grants we make. Regardless of whether or not the fund should change the balance of cause areas we fund, it's important that donors have clear expectations regarding how their money will be used.

We've edited the... (read more)

7
AnonymousEAForumAccount
3y
I agree unclear messaging has been a big problem for the LTFF, and I'm glad to see the EA Funds team being responsive to feedback around this. However, the updated messaging on the fund page still looks extremely unclear, and I'm surprised you think it will clear up the misunderstandings donors have.

It would probably clear up most of the confusion if donors saw the clear articulation of the LTFF's historical and forward-looking priorities that is already on the fund page (emphasis added):

The problem is that this text is buried in the 6th subsection of the 6th section of the page. So people have to read through ~1500 words, the equivalent of three single-spaced typed pages, to get an accurate description of how the fund is managed. This information should be in the first paragraph (and I believe that was the case at one point).

Compounding this problem, aside from that one sentence, the fund page (even after it has been edited for clarity) makes it sound like AI and pandemics are prioritized similarly, and not that far above other LT cause areas. I believe the LTFF has only made a few grants related to pandemics, and would guess that AI has received at least 10 times as much funding. (Aside: it's frustrating that there's not an easy way to see all grants categorized in a spreadsheet, so that I could pull the actual numbers without going through each grant report and hand-entering and classifying each grant.)

In addition to clearly communicating that the fund prioritizes AI, I would like to see the fund page (and other communications) explain why that's the case. What are the main arguments informing the decision? Did the fund managers decide this? Did whoever selected the fund managers (almost all of whom have AI backgrounds) decide this? Under what conditions would the LTFF team expect this prioritization to change? The LTFF has done a fantastic job providing transparency into the rationale behind specific grants, and I hope going forward there will be similar
5
Habryka
3y
I agree that both of these are among our biggest mistakes.

I largely agree with Habryka's comments above.

In terms of the contrast with the AWF in particular, I think the funding opportunities in the long-termist vs animal welfare spaces look quite different. One big difference is that interest in long-termist causes has exploded in the last decade. As a result, there's a lot of talent interested in the area, but there's limited organisational and mentorship capacity to absorb this talent. By contrast, the animal welfare space is more mature, so there's less need to strike out in an independent direction. While I'm... (read more)

My view is that for most orgs, at least in the AI safety space, they can only grow by a relatively small (10-30%) rate per year while still providing adequate mentorship

 

This might be a small point, but while I would agree, I imagine that strategically there are some possible orgs that could grow more quickly and, by growing, could eventually dominate the funding.

I think one thing that's going on is that right now due to funding constraints individuals are encouraged to create organizations that are efficient when small, as opposed to e... (read more)

1
abergal
3y
Just want to say I agree with both Habryka's comments and Adam's take that part of what the LTFF is doing is bridging the gap while orgs scale up (and increase in number) and don't have the capacity to absorb talent.
1
JackM
3y
Thanks for this reply, makes a lot of sense!

From an internal perspective I'd view the fund as being fairly close to risk-neutral. We hear around twice as many complaints that we're too risk-tolerant as that we're too risk-averse, although of course the people who reach out to us may not be representative of our donors as a whole.

We do explicitly try to be conservative around things with a chance of significant negative impact to avoid the unilateralist's curse. I'd estimate this affects less than 10% of our grant decisions, although the proportion is higher in some areas, such as community building, biosecur... (read more)

9
Gordon Seidoh Worley
3y
This is pretty exciting to me. Without going into too much detail, I expect to have a large amount of money to donate in the near future, and LTF is basically the best option I know of (in terms of giving based on what I most want to give to) for the bulk of that money short of having the ability to do exactly this. I'd still want LTF as a fall back for funds I couldn't figure out how to better allocate myself, but the need for tax deductibility limits my options today (though, yes, there are donor lotteries).

The LTFF chooses grants to make from our open application rounds. Because of this, our grant composition depends a lot on the composition of applications we receive. Although we may of course apply a different bar to applications in different areas, the proportion of grants we make certainly doesn't represent what we think is the ideal split of total EA funding between cause-areas.

In particular, I tend to see more variance in our scores between applications in the same cause-area than I do between cause-areas. This is likely because most of our application... (read more)

I agree with @Habryka that our current process is relatively lightweight which is good for small grants but doesn't provide adequate accountability for large grants. I think I'm more optimistic about the LTFF being able to grow into this role. There's a reasonable number of people who we might be excited about working as fund managers -- the main thing that's held us back from growing the team is the cost of coordination overhead as you add more individuals. But we could potentially split the fund into two sub-teams that specialize in smaller and larger gr... (read more)

7
Habryka
3y
Ah yeah, I also think that if the opportunity presents itself we could grow into this role a good amount. Though on the margin I think it's more likely we are going to invest even more in early-stage expertise and maybe do more active early-stage grantmaking.

The LTFF is happy to renew grants so long as the applicant has been making strong progress and we believe working independently continues to be the best option for them. Examples of renewals in this round include Robert Miles, who we first funded in April 2019, and Joe Collman, who we funded in November 2019. In particular, we'd be happy to be the #1 funding source of a new EA org for several years (subject to the budget constraints Oliver mentions in his reply).

Many of the grants we make to individuals are for career transitions, such as someone retrainin... (read more)

2
Linda Linsefors
3y
My impression is that it is not possible for everyone who wants to help with the long term to get hired by an org, for the simple reason that there are not enough openings at those orgs. At least in AI Safety, all entry-level jobs are very competitive, meaning that not getting in is not a strong signal that one could not have done well there. Do you disagree with this?

As part of CEA's due diligence process, all grantees must submit progress reports documenting how they've spent their money. If a grantee applies for renewal, we'll perform a detailed evaluation of their past work. Additionally, we informally look back at past grants, focusing on grants that were controversial at the time, or seem to have been particularly good or bad.

I’d like us to be more systematic in our grant evaluation, and this is something we're discussing. One problem is that many of the grants we make are quite small: so it just isn't cost-effect... (read more)

7
MichaelA
3y
Interesting question and answer! Do the LTFF fund managers make forecasts about potential outcomes of grants? And/or do you write down in advance what sort of proxies you'd want to see from this grant after x amount of time? (E.g., what you'd want to see to feel that this had been a big success and that similar grant applications should be viewed (even) more positively in future, or that it would be worth renewing the grant if the grantee applied again.) One reason that that first question came to mind was that I previously read a 2016 Open Phil post that states: (I don't know whether, how, and how much Open Phil and GiveWell still do things like this.)
2
Cullen
3y
Thank you!

I'm sympathetic to the general thrust of the argument, that we should be reasonably optimistic about "business-as-usual" leading to successful narrow alignment. I put particular weight on the second argument, that the AI research community will identify and be successful at solving these problems.

However, you mostly lost me in the third argument. You suggest using whatever state-of-the-art general purpose learning technique exists to model human values, and then optimise them. I'm pessimistic about this since it involves an adversarial relationship between

... (read more)