
Situation

As far as I can tell, the Long-Term Future Fund (LTFF) wants to fund work that will influence public policy. They say they are looking to fund “policy analysis, advocacy ...” work, and they post in policy Facebook groups asking for applications.

However, as far as I can tell, in practice the LTFF never (or only very rarely) funds the policy projects that apply for funding, especially new projects.

My view that such funding is rare is based on the following pieces of evidence:

  • Very few policy grants have been made. Looking at the LTFF's payout reports, they funded one grant for policy-influencing work in 2019 (to me, for a project that was not a new project). When asked, they say they have funded at least one more policy grant that has not been written up.
  • Very many policy people have applied for LTFF grants. Without actively looking, I know of: someone in an established think tank looking for funds for EA work, 3 groups wanting to start EA think-tank-type projects, and a group wanting to do mass campaigning work. All of these looked competent and were rejected. I am sure many others apply too. I know one of these has gone on to get funding elsewhere (FTX).
  • Comments from staff at leading EA orgs. In January last year, a senior staff member at a leading EA institution mentioned, to my surprise, that EA funders tend not to fund any new longtermist policy projects (except perhaps with very senior, trusted people, as with OpenPhil funding CSET). Recently I spoke to someone at CEA about this and asked if it matched their view too, and they said they do think there is a problem here. Note this was about EA funders in general, not specifically the LTFF.
  • Comments from EA policy folk looking for funding. There seems to be (at least there was in 2021) a general view from EAs working in the policy space that it has been very hard to find funding for policy work. Note this is about EA funders in general, not the LTFF.
  • Odd lines of questioning. When I applied, the line of questioning was very odd. I faced an hour of: Why should we do any policy stuff? Isn't all policy work a waste of time? Didn’t [random unconnected policy thing] not work? Etc. Of course it can be useful to check that applicants have a good understanding of what they are doing, but it made me question whether they wanted to fund policy work at all. [Edit: apparently they use this style of questioning for various topics, such as AI safety research, not just policy, so maybe this is not an issue.]
  • Odd feedback. Multiple applicants to the LTFF have reported receiving feedback along the lines of: We see high downside risks but cannot clarify what those risks are. Or: We want to fund you but an anonymous person vetoed you and we cannot say who or why. Or: Start-up policy projects are too risky. Or: We worry you might be successful and hard to shut down if we decided we don’t like you in future. This makes me worry that the fund managers do not think through the risks and reasons for funding or not funding policy work in as much depth as I would like, and that they maybe do not fund any new/start-up policy projects.
  • Acknowledgment that they do apply a higher bar for policy work. Staff at the LTFF have told me that they apply a higher bar for policy work than for other grants.

Of course, this is all circumstantial, and not necessarily a criticism of the LTFF. The fund managers might argue they never get any policy projects worth funding and that all the promising projects I happened to hear of were actually net negative and it was good not to fund them. It is also possible that things have improved in the last year (the notes that make up this post have been sitting in an email chain for a long while now).
 

Relevance and recommendation

That said, I thought it was worth writing this up publicly, as the possibility that the LTFF (and maybe other funders) are systematically not funding any new policy advocacy projects is relevant for:

  • Applicants. I don’t want people who have not received funding to be demotivated. To all those folk out there looking to run an EA policy project who have struggled to get funding from the LTFF, or elsewhere, I just want to say: heads up, don’t be put off, the work you do may well be of high value. Stop applying to the LTFF. There are funders more keen on policy, such as SFP/SFF or FLI, so consider applying there or talking to local funders in your area! Such founders should also feel free to reach out to me if they want an independent view on their project and advice on finding funding.
  • Other funders. I don’t want other funders to hold back from funding because they think the LTFF will cover policy stuff. Making these funders aware of exactly how rare it is for the LTFF to fund (new) projects aimed at influencing public policy will help ensure they are fully informed and can make the best decisions.
  • Both funders and policy folk. Maybe there is a discussion to be had here as to why this might be going on. If there are a lot of entrepreneurial policy folk applying for funding and not getting it, then maybe the entrepreneurial policy folk in EA have a very different view from funders of what kinds of policy projects are valuable. Maybe this is a communication issue that we can bring into the light. My hope is that a public post could spark some discussion about what kinds of policy projects would bring value to the world, and that this could be of use to both funders and policy entrepreneurs.

Overall, more transparency could help. I would love to hear from funders (LTFF and otherwise) a bit more about what they are looking for. I think this could lead to either push back (why are you looking for that?) or better applications, or both. And if the LTFF are not funding policy work, or only funding certain types of policy work, or are applying a higher bar to policy projects, then they should say this publicly on their website!
 

Opinions & speculation

LTFF positives.

I would note that it is only possible to make this post because the LTFF has written public payout reports and given feedback to people who have applied for funding. These are both really good and deserving of praise. There is much the LTFF gets correct, and this critique should be seen in this light. (I also note that my interactions with staff at the LTFF have been extremely positive and I have a lot of respect for them.)

Does this apply to other funders?

I also worry that similar problems apply elsewhere in EA but are only noticeable with the LTFF because the LTFF are more transparent and easier to apply to than other funders. Given that senior staff at EA orgs have suggested there is a general reticence to fund new policy work, it seems plausible that many other (longtermist?) funders are also not funding new policy projects.

A guess as to what might be going on.

I speculate that there is an underfunding of policy projects and that this might be connected to EA funders' approach to minimising risks.

EA funders are risk averse and want to avoid funding projects with downside risks. I think this is super important. Most movements that exist fail, fracture, and collapse, and the fact that EA has succeeded as much as it has is perhaps down to this caution.

However, I think:

  1. Funders and policy actors might have very different views on how to manage the risks of policy work, meaning they prefer different kinds of projects. Based on the conversations I have had, it seems that funders believe a reasonable plan for managing the risks is to look for very senior people (say, 20 years' experience) whom they know very well to start new policy projects. They might also give veto power to a range of external advisors. Policy folk, on the other hand, think it is more important to be cautious about the kinds of projects started and about risk mitigation plans; they note that senior people often come with political baggage that increases risks, whereas junior folk can fly under the political radar. They also note that giving vetoes has problems: if different people veto different things, a funder's de facto bar could end up significantly higher than any single person's bar.
  2. It is not clear to me that funders are considering the risks of not funding projects. Firstly, funding can be used by projects to directly decrease the risks of that project, especially if it is given with this purpose in mind, for example to hire a good comms person. Secondly, new projects can decrease policy risks; e.g. a project that supports longtermist academics to engage with policymakers believes it has cut, not increased, the risks. Thirdly, not funding can mean missing key policy windows, such as COVID.
    Additionally, there is the risk that a default of not funding action feeds into a culture of EA only ever researching and never acting. If many funders are applying a much higher bar to doing/influencing projects than to thinking/research projects (as the LTFF say they do) then this could lead to bad community dynamics. In particular it might feed into a culture where doers are ostracised or not supported while thinkers are welcomed in with open arms. People have recently been raising this as an issue with EA (or longtermism) culture on this forum; see here (see comments too) and here and here.

The above is speculative. It is my current best guess as to the crux behind any difference in views between funders and policy folk. That said, I would not be surprised if I was wrong on this.

I think both sides have their biases, and I would be keen for both sides to talk more about this. At a minimum, I think funders should be transparent about their views on funding policy work and minimising risks. If nothing else it could save people wasted time applying for funding they cannot get.

I expect this is worth funders' time. I had the impression that, as of 2021, a lot of talented policy people who could start new projects were not doing so because of a real or perceived lack of funding. (FTX may have fixed or partly fixed this; I don’t know.)

 


Thank you to the LTFF team and many others for feedback on an early draft and for all the great work they do. Entered as part of the EA red team challenge.

Edit: I have now written up a long list of longtermist policy projects that should be funded, which gives some idea of how big the space of opportunities is: List of donation opportunities (focus: non-US longtermist policy work)

Comments

Why should we do any policy stuff? Isn't all policy work a waste of time?  Didn’t [random unconnected policy thing] not work? Etc.

This was a conversation with me, and sadly strikes me as a strawman of the questions asked (I could recount the conversation more fully, though that feels a bit privacy-violating).

Of course on the LTFF I do not take it as a given that any specific approach to improving the long term future will work, and I question actively whether any broad set of approaches has any chance to be competitive with other things that we can do. It's important that the LTFF is capable of coming to the conclusion that policy work is in-general unlikely to be competitive, and just because there are already a bunch of people working in policy, doesn't mean I no longer question whether the broad area might be an uphill battle.

I personally am probably the person on the LTFF most skeptical of policy work, especially of work that aims to do policy analysis embedded within government institutions, or that looks like it's just a generic "let's try to get smarter people into government" approach. I've seen this fail a good number of times, and many interviews I've done with policy people suggest to me that many people report finding it very hard to think clearly when embedded within government institutions. I also think generic power-seeking behavior where we try to "get our guys into government" is somewhat uncooperative and also has detrimental epistemic effects on the community.

Other people on the LTFF and the rest of the funding ecosystem seem to be more optimistic here, though one thing I've found from discussing my cruxes here for many hundreds of hours with others is that people's models of the policy space drastically differ, and most people are pessimistic about most types of policy work (though then often optimistic about a specific type of policy work). 

The  questions I actually asked were of the type "how do you expect to overcome these difficult obstacles that seem to generally make policy work ineffective, that seem reported by many people in policy?". I don't think we've had really any policy successes with regards to the Long Term Future, so if a project does not have compelling answers to this kind of question, I am usually uninterested in funding it, though again, others on the LTFF have a pretty different set of cruxes. 

I often ask the same types of question with regards to AI Alignment research: "it seems like we haven't really found much traction with doing AI Alignment research, and it seems pretty plausible to me that we don't really know how to make progress in this domain. Why do you think we can make progress here?". I do have a higher prior on at least some AI Alignment research working, and also think the downside risk from marginal bad AI Alignment research is less than from marginal policy advocacy or intervention, so my questions tend to be a bit more optimistically framed, or I can contribute more of the individual model pieces.

Thank you Habryka. Great to get your views.

This was a conversation with me, and sadly strikes me as a strawman of the questions asked (I could recount the conversation more in-full, though that feels a bit privacy violating). ... I often ask the same types of question with regards to AI Alignment research

Apologies if I misrepresented this in some way (if helpful, and if you have a recording or notes, I am happy for things I said and you said to be made public). I have not had this kind of questioning on any other funding applications and it felt very strange to me. I said in an email to Caleb (EA Funds) recently that "it would surprise me a lot if people applying for grants to do, say, AI safety technical work got an hour of [this type]". So perhaps count me as surprised. If this kind of questioning is just an idiosyncratic Habryka tool for getting to grips with applicants of different types then I am happy with it. Will edit the post.

 

Of course on the LTFF I do not take it as a given that any specific approach to improving the long term future will work

I guess it depends how specific you are being. Obviously I don’t think it should be taken as given that "specific think tank plan x" would be good, but I do think it is reasonable for a fund to take it as given that, at a high level, "policy work" would be good. And if the LTFF does not think this then why does the LTFF actively reach out to policy people to get them to apply?

(I recognise there may be a difference here between you Habryka and the rest of LTFF, as you say you are more sceptical than others)

 

I personally am probably the person on the LTFF most skeptical of policy work, especially of work that aims to do policy-analysis embedded within government institutions, or that looks like it's just a generic "let's try to get smarter people into government" approach. 

Now it is my turn to claim a strawman.  I have never applied to the LTFF with a plan anything close to a "let's try to get smarter people into government" approach. Nor were any of the 5 applications to the LTFF I am aware of anything like this approach. 

FWIW, I think this kind of questioning is fairly Habryka-specific and not really standard for our policy applicants; I think in many cases I wouldn’t expect that it would lead to productive discussions (and in fact could be counterproductive, in that it might put off potential allies who we might want to work with later).

I make the calls on who is the primary evaluator for which grants; as Habryka said, I think he is probably most skeptical of policy work among people on the LTFF, and hasn’t been the primary evaluator for almost any (maybe none?) of the policy-related grants we’ve had. In your case, I thought it was unusually likely that a discussion between you and Habryka would be productive and helpful for my evaluation of the grant (though I was interested primarily in different but related questions, not “whether policy work as a whole is competitive with other grants”), because I generally expect people more embedded in the community (and in the case above, you (Sam) in particular, which I really appreciate), to be more open to pretty frank discussions about the effectiveness of particular plans, lines of work, etc.

FWIW I think if this is just how Habryka works then that is totally fine from my point of view. If it helps him make good decisions then great. 

(From the unusualness of the questioning approach and the focus on "why policy" I took it to be a sign that the LTFF was very sceptical of policy change as an approach compared to other approaches, but I may have been mistaken in making this assumption based on this evidence.)

I guess it depends how specific you are being. Obviously I don’t think it should be taken as given that "specific think tank plan x" would be good, but I do think it is reasonable for a fund to take it as given that, at a high level, "policy work" would be good. And if the LTFF does not think this then why does the LTFF actively reach out to policy people to get them to apply?

I don't think we should take it as a given! I view figuring out questions like this as most of our job, so of course I don't want us to have an institutional commitment to a certain answer in this domain.

And if the LTFF does not think this then why does the LTFF actively reach out to policy people to get them to apply?

In order to believe that something could potentially be furthered by someone, or that it has potential, I don't think I have to take it as a given that work in that general area "would be good". 

I also think it's important to notice that the LTFF page only lists "policy research" and "advocacy", and doesn't explicitly list "policy advocacy" or "policy work" more broadly (see Asya's clarification below). I don't think we currently actively solicit a lot of policy work for the LTFF, though maybe other fund managers who are more optimistic about that type of work have done more soliciting. 

And separately, the page of course reflects something much closer to the overall funds view (probably with a slant towards Asya, since she is head of the LTFF), and this is generally true for our outreach, and I think it's good and valuable to have people with a diversity of views on the LTFF (and for people who are more skeptical of certain work to talk to the relevant grantees).

Now it is my turn to claim a strawman.  I have never applied to the LTFF with a plan anything close to a "let's try to get smarter people into government" approach. Nor were any of the 5 applications to the LTFF I am aware of anything like this approach.  

Sorry! Seems like this is just me communicating badly. I did not intend to imply (though I can now clearly see how one might read it as such) that your work in-particular falls into this category. I was trying to give some general reasons for why I am skeptical of a lot of policy work (I think only some of these reasons apply to your work). I apologize for the bad communication here.

I also think it's important to notice that the LTFF page only lists "policy research" and "advocacy", and doesn't explicitly list "policy advocacy" or "policy work" more broadly 

The page says the LTFF is looking to fund projects on "reducing existential risks through technical research, policy analysis, advocacy, and/or demonstration projects", which a casual reader could take to include policy advocacy. If the aim of the page is to avoid giving the impression that policy advocacy is something the LTFF actively looks to fund, then I think it could do a better job.

I don't think we currently actively solicit a lot of policy work for the LTFF,

Maybe this has stopped now. The last posts I saw were from Dec 2021, from an EA Funds staff member who has now left (here). The posts said things like: "we'd be excited to consider grants related to policy and politics. We fund all kinds of projects" (here). It is plausible to me that things like that were giving people the wrong impression of the LTFF's willingness to fund certain projects.

 

– – 

I don't think we should take it as a given! I view figuring out questions like this as most of our job, ... 

Fair enough. That seems like a reasonable approach too and I hope it is going well and you are learning a lot!!

 

– – 

Sorry! Seems like this is just me communicating badly

No worries. Sorry too for any ways I have misrepresented our past interactions.  Keep being wonderful <3 

Thanks for posting this comment, I thought it gave really useful perspective.

"I don't think we've had really any policy successes with regards to the Long Term Future"

This strikes me as an odd statement. If you're talking about the LTF fund, or EA long-termism, it doesn't seem like much policy work has been funded.

If you're talking more broadly, wouldn't policy wins like decreasing the amount of lead being emitted into the atmosphere (which has negative effects on IQ and health generally) be a big policy win for the long term future?

This strikes me as an odd statement. If you're talking about the LTF fund, or EA long-termism, it doesn't seem like much policy work has been funded.

I think this is false, e.g. a reasonable subset of Open Phil's Transformative AI risks grantmaking is on policy. 

This strikes me as an odd statement. If you're talking about the LTF fund, or EA long-termism, it doesn't seem like much policy work has been funded.

Huh, why do you think that? CSET was Open Phil's largest grant to date, and I know of at least another $20MM+ in policy projects that have been funded. 

Sadly, I think a lot of policy grants are announced less publicly, because publicity is usually harmful for policy projects or positions (which I think is at least some evidence of them being at least somewhat adversarial/non-cooperative, which is one of the reasons why I have a somewhat higher prior against policy projects). Approximately all policy applications to the LTFF end up requesting that we do not publish a public writeup on them, so we often refer them to private funders if we think they are a good idea. 

I guess I was just wrong, I hadn't looked into it much!

"I don't think we've had really any policy successes with regards to the Long Term Future"

 

Biased view incoming...:

I think the LTFF's only (public) historical grant for policy advocacy, to the APPG for Future Generations, has led to better policy in the UK, in particular on risk management. For discussions on this see impact reports here and here, independent reviews here and here, and criticism here.

Additionally I think CLTR has been doing impactful long-term focused policy work in the UK.

If you're talking more broadly, wouldn't policy wins like decreasing the amount of lead being emitted into the atmosphere (which has negative effects on IQ and health generally) be a big policy win for the long term future?

Yeah, I think this is definitely a candidate for a great intervention, though I think importantly it wasn't the result of someone entering the policy space with a longtermist mindset. 

If someone had a concrete policy they wanted to push for (or a plan for discovering policies) of that magnitude, then I would likely be excited about funding it, though I would still be somewhat worried about how likely it would be to differentially accelerate development of dangerous technologies vs. increase humanity's ability to navigate rapid technological change (since most risk to the future is anthropogenic, I am generally skeptical of interventions that just speed up technological progress across the board), but my sense is abating lead poisoning looks better than most other things on this dimension.

An offshoot of lead emission in the atmosphere might be the work being done at LEEP (Lead Exposure Elimination Project) https://forum.effectivealtruism.org/posts/ktN29JneoQCYktqih/seven-more-learnings-from-leep

(I work for the LTFF)

Other people on the LTFF and the rest of the funding ecosystem seem to be more optimistic here, though one thing I've found from discussing my cruxes here for many hundreds of hours with others is that people's models of the policy space drastically differ, and most people are pessimistic about most types of policy work (though then often optimistic about a specific type of policy work). 

Surely something akin to this critique can also be leveled at e.g. alignment research.

Oh, sorry, I didn't intend this at all as a critique. I intended this as a way to communicate that I don't think I am that alone in thinking that most policy projects are pretty unlikely to be helpful.

Sorry "critique" was poor choice of words on my part. I just meant "most LT plans will fail, and most LT plans that at least some people you respect like will on an inside view certainly fail" is  just the default for trying to reason well on the frontier of LT stuff. But I'm worried that the framing will sound like you meant it narrowly for policy. Also, I'm worried your implied bar for funding policy is higher than what LTFF people (including yourself) actually use.

Hmm, yeah, I think we are both using subpar phrasing here. I think this is true for both policy and AI Alignment, but for example less true for biorisk, where my sense is there is a lot more people agreeing that certain interventions would definitely help (with some disagreement on the magnitude of the help, but much less than for AI Alignment and policy).

I agree about biosecurity, sure. Although, I actually think we're much less conceptually confused about biosecurity policy than we are about AI policy. For example, pushing for a reasonable subset of the Apollo report seems reasonable to me.

Yeah, I think being less conceptually confused is definitely part of it. 

Hey, Sam – first, thanks for taking the time to write this post, and running it by us. I’m a big fan of public criticism, and I think people are often extra-wary of criticizing funders publicly, relative to other actors of the space.

Some clarifications on what we have and haven’t funded:

  • I want to make a distinction between “grants that work on policy research” and “grants that interact with policymakers”.
    • I think our bar for projects that involve the latter is much higher than for projects that are just doing the former.
  • I think we regularly fund “grants that work on policy research” – e.g., we’ve funded the Centre for Governance of AI, and regularly fund individuals who are doing PhDs or otherwise working on AI governance research.
  • I think we’ve funded a very small number of grants that involve interactions with policymakers – I can think of three such grants in the last year, two of which were for new projects. (In one case, the grantee has requested that we not report the grant publicly).

Responding to the rest of the post:

  • I think it’s roughly correct that I have a pretty high bar for funding projects that interact with policymakers, and I endorse this policy. (I don’t want to speak for the Long-Term Future Fund as a whole, because it acts more like a collection of fund managers than a single entity, but I suspect many others on the fund also have a high bar, and that my opinion in particular has had a big influence on our past decisions.)
  • Some other things in your post that I think are roughly true:
    • Previous experience in policy has been an important factor in my evaluations of these grants, and all else equal I think I am much more likely to fund applicants who are more senior (though I think the “20 years experience” bar is too high).
    • There have been cases where we haven’t funded projects (more broadly than in policy) because an individual has given us information about or impressions of them that led us to think the project would be riskier or less impactful than we initially believed, and we haven’t shared the identity or information with the applicant to preserve the privacy of the individual.
    • We have a higher bar for funding organizations than other projects, because they are more likely to stick around even if we decide they’re not worth funding in the future.
  • When evaluating the more borderline grants in this space, I often ask and rely heavily on the advice of others working in the policy space, weighted by how much I trust their judgment. I think this is basically a reasonable algorithm to follow, given that (a) they have a lot of context that I don’t, and (b) I think the downside risks of poorly-executed policy projects have spillover effects to other policy projects, which means that others in policy are genuine stakeholders in these decisions.
    • That being said, I think there’s a surprising amount of disagreement in what projects others in policy think are good, so I think the particular choice of advisors here makes a big difference.
  • I do think projects interacting with policymakers have substantial room for downside, including:
    • Pushing policies that are harmful
    • Making key issues partisan
    • Creating an impression (among policymakers or the broader world) that people who care about the long-term future are offputting, unrealistic, incompetent, or otherwise undesirable to work with
    • “Taking up the space” such that future actors who want to make long-term future-focused asks are encouraged or expected to work through or coordinate with the existing project
  • I suspect we also differ in our views of the upsides of some of this work– a lot of the projects we’ve rejected have wanted to do AI-focused policy work, and I tend to think that we don’t have very good concrete asks for policymakers in this space.

Thank you Abergal, I hope my critique is helpful. I mean it to be constructive.

I don’t think I disagree with anything at all that you wrote here!! So glad we are mostly on the same page.

(In fact you suggest "we also differ in our views of the upsides of some of this work" and I am not sure that is the case. I am fairly sceptical of much of it, especially the more AI-focused stuff.)

I still expect the main disagreements are on:

  1. Managing downside risks. I worry that if we as a community don’t put time and effort into understanding how to mitigate downside risk well, we will make mistakes. Mostly I worry that we are pushing away anyone who could do direct (phase 2) type work, but also that we make some projects higher risk by denying funding, and that if you have a range of experts with veto power and extremely different views then perhaps between them every possible idea is vetoed.
  2. Transparency. I have always been more in favour of transparency than others. I think it now being public that the LTFF has "a higher bar for funding organizations" and a "much higher bar for projects that involve interacting with policymakers" [paraphrased] is helpful. Also, I know giving feedback is a perk rather than an expected action of a funder, and that the LTFF is better at it than most, but I would aim for transparency by default around expert advisers etc.

If it is helpful happy to discuss either of these points further.

 

Also super great to hear that you have made three grants in the last year to projects that interact with policymakers!! Even if this is "very small" compared to other grants it is more than previous years. I look forward to hearing what some of them are if there is a future write-up. :-)

I strongly agree with Sam on the first point regarding downside risks. My view, based on a range of separate but similar interactions with EA funders, is that they tend to overrate the risks of accidental harm [1] from policy projects, and especially so for more entrepreneurial, early-stage efforts.

To back this up a bit, let's take a closer look at the risk factors Asya cited in the comment above. 

  • Pushing policies that are harmful. In any institutional context where policy decisions matter, there is a huge ecosystem of existing players, ranging from industry lobbyists to funders to media outlets to think tanks to agency staff to policymakers themselves, who are also trying to influence the outcomes of legislation/regulation/etc. in their preferred direction. As a result, making policies actually become reality is inherently quite hard and almost impossible to make happen without achieving buy-in from a diverse range of stakeholders. While that process can be frustrating and often results in watering down really good ideas to something less inspiring, it is actually quite good for mitigating the downside risks from bad policies! It's understandable to think of such a volatile mix of influences as scary and something to be avoided, but we should also consider the possibility that it is a productive way to stress-test ideas coming out of EA/longtermist communities by exposing them to audiences with different interests and perspectives. After all, these interests at least in part do reflect the landscape of competing motivations and goals in the public more generally, and thus are often relevant for whether a policy idea will be successful or not.
  • Making key issues partisan. My view is that this is much more likely to happen by way of involvement in electoral politics than traditional policy-advocacy work. Importantly, though, we just had a high-profile test of this idea in the form of Carrick Flynn's bid for Congress. By the logic of EA grantmakers worried about partisan politicization, my sense is that the Flynn campaign is one of the riskiest things this community has ever taken on (and remember, we only saw the primary -- if he had won and run in the general, many Republican politicians' and campaign strategists' first exposure to EA and longtermism would have been by way of seeing a Democrat supported by two of the largest Democratic donors running on EA themes in a competitive race against one of their own.) And yet as it turned out, it did not result in longtermism being politicized basically at all. So while the jury is still out, perhaps a reasonable working hypothesis based on what we've seen thus far is that "try to do good and help people" is just not a very polarizing POV for most people, and therefore we should stress out about it a little less.
  • Creating an impression (among policymakers or the broader world) that people who care about the long-term future are offputting, unrealistic, incompetent, or otherwise undesirable to work with. I think this one is pretty easily avoided. If you have someone leading a policy initiative who is any of those things, they probably aren't going to make much progress and their work thus won't cause much harm (other than wasting the grantmaker's money). Furthermore, the increasing media coverage of longtermism and the fact that longtermism has credible allies in society (multiple billionaires, an increasing number of public intellectuals, etc.) both significantly mitigate against the concern expressed here, as the former factors are much more likely to influence a broad set of policymakers' opinions and actions.
  • “Taking up the space” such that future actors who want to make long-term future-focused asks are encouraged or expected to work through or coordinate with the existing project. This seems to be more of a general concern about grantmaking to early-stage organizations and doesn't strike me as unique to the policy space at all. If anything, it seems to rest on a questionable premise that there is only one channel for communicating with policymakers and only one organization or individual can occupy that channel at a time. As I stated earlier, policymakers already have huge ecosystems of people trying to influence policy outcomes; another entrant into the mix isn't going to take up much space at all. But also, policymakers themselves are part of a huge bureaucratic apparatus and there are many, many potential levers and points of access that can't all possibly be covered by a single organization. I do agree that coordination is important and desirable, but we shouldn't let that in itself be a barrier to policy entrepreneurship, IMHO.

To be clear, I do think these risks are all real and worth thinking about! But to my reasonably well-informed understanding of at least three EA grantmakers' processes, most of these projects are not judged by way of a sober risk analysis that clearly articulates specific threat models, assigns probabilities to each, and weighs the resulting estimates of harm against a similarly detailed model of the potential benefits. Instead, the risks are assessed on a holistic and qualitative basis, with the result that many things that seem potentially risky are not invested in even if the upside of them working out could really be quite valuable. Furthermore, the risks of not acting are almost never assessed -- if you aren't trying to get the policymaker's attention tomorrow, who's going to get their ear instead, and how likely might it be that it's someone you'd really prefer they didn't listen to?

While there are always going to be applications that are not worth funding in any grantmaking process, I think when it comes to policy and related work we are too ready to let perfect be the enemy of the good.

  1. ^

    Important to note that the observations here are most relevant to policymaking in Western democracies; the considerations in other contexts are very different.

“even if the upside of them working out could really be quite valuable” is the part I disagree with most in your comment. (Again, speaking just for myself), I don’t think any of the projects I remember us rejecting seemed like they had a huge amount of upside; my overall calculus was something like “this doesn’t seem like it has big upside (because the policy asks don’t seem all that good), and also has some downside (because of person/project-specific factors)”. It would be nice if we did quantified risk analysis for all of our grant applications, but ultimately we have limited time, and I think it makes sense to focus attention on cases where it does seem like the upside is unusually high.

On potential risk factors:

  • I agree that (1) and (2) above are very unlikely for most grants (and are correlated with being unusually successful at getting things implemented).
  • I feel less in agreement about (3)-- my sense is that people who want to interact with policymakers will often succeed at taking up the attention of someone in the space, and the people interacting with them form impressions of them based on those interactions, whether or not they make progress on pushing that policy through.
  • I think (4) indeed isn’t specific to the policy space, but is a real downside that I’ve observed affecting other EA projects– I don’t expect the main factor to be that there’s only one channel for interacting with policymakers, but rather that other long-term-focused actors will perceive the space to be taken, or will feel some sense of obligation to work with existing projects / awkwardness around not doing so.

Caveating a lot of the above: as I said before, my views on specific grants have been informed heavily by others I’ve consulted, rather than coming purely from some inside view.

Thanks for the response!

I don’t think any of the projects I remember us rejecting seemed like they had a huge amount of upside

That's fair, and I should also be clear that I'm less familiar with LTFF's grantmaking than some others in the EA universe.

It would be nice if we did quantified risk analysis for all of our grant applications, but ultimately we have limited time, and I think it makes sense to focus attention on cases where it does seem like the upside is unusually high.

Oh, I totally agree that the kind of risk analysis I mentioned is not costless, and for EA Funds in particular it seems like too much to expect. My main point is that in the absence of it, it's not necessarily an optimal strategy to substitute an extreme version of the precautionary principle instead.

Overall, I agree that judging policy/institution-focused projects primarily based on upside makes sense.

and that if you have a range of experts with veto power and extremely different views then perhaps between them every possible idea is vetoed

Just to be clear, the LTFF generally has a philosophy of funding things if at least one fund member is excited about them, and a high bar for a fund member to veto a grant that someone else is championing, so I don't think "every possible idea getting vetoed" is a very likely failure mode of the LTFF. 

The idea that was suggested to me by an EA policy person was not that other fund members veto but that external experts veto.

The hypothetical story is that a fund manager who is not an expert in policy, and who is struggling to evaluate a policy grant they worry is high-risk, might invite a bunch of external experts to give a view and have a veto; given the risk, if any of those experts veto, they might choose not to fund it.

This then goes wrong if different experts veto for very different reasons (more likely if "there’s a surprising amount of disagreement in what projects [experts] think are good"). In the worst case it could be that almost every policy grant gets vetoed by one expert or another and none get funded. This could significantly raise the bar, and it might take a while for a fund manager to notice.

Honestly, I have no idea if this actually happens or ever works like this. If this is not how things work then great! I am mostly trying to bring out into the open the various tools that people can use to evaluate riskier projects, and to flag their potential uses and downsides for discussion.

Oh, hmm, I think we rarely request feedback from more than one or two experts, so I would be somewhat surprised if this has a large effect. But yeah, definitely in biorisk and policy, I think if the expert we ping tends to have a negative take, we probably don't fund it (and now that you say it, it does feel a bit more like we are asking more experts on policy grants than other grants, so there might be some of the effect that you are describing going on). 

If these experts regularly have a large impact on these decisions, that's an argument for transparency about them. This is a factor that could of course be outweighed by other considerations (ability to give frank advice, confidentiality, etc). Perhaps might be worth asking them how they'd feel about being named (with no pressure attached, obviously).

Also, can one volunteer as an expert? I would - and I imagine others (just on this post, perhaps Ian and Sam?) would too.

or funders could, you know, always hire more fund managers who have policy experience!

I don't think hiring for this is easy. You need aligned people with good judgement and significant policy experience, whose counterfactual impact elsewhere is equal to or worse than working for the LTFF. Plus, ideally, good communication ability and a bit of creativity too.

Sure, but if there are 4 people out there with CS backgrounds who fit the bill there are probably a few without who do too.

The other thing is the idea that “policy” is this general thing seems a little off to me. Someone who knows a thing or two about Congress may not have context or network to evaluate something aimed at some corner of the executive branch, to say nothing of evaluating a policy proposal oriented towards another country.

The other thing is the idea that “policy” is this general thing seems a little off to me

Hmm I think this makes the problem harder, not easier.

Was this always true? I (perhaps arrogantly) thought that there was a push for greater grantmaker discretion after my comment here (significantly before I joined LTFF), though it was unclear if my comment had any causal influence.

I think we always had a culture of it, but I do think your comment had a causal influence on us embedding that more into the decision-making process.

Hi Abergal,

I hope you don’t mind me returning to this post. I wanted to follow up on a few things but it has been a super busy month and I have not really had the time.

Essentially I think we did not manage to engage with each other's cruxes at all. I said I expect the crux is "connected to EA funders' approach to minimising risks." You did not discuss this at all in your reply (except to agree that there are downsides to policy work). But then you suggest that maybe a crux is "our views of the upsides of some of this work". And then I replied to you but did not discuss that point at all!!

So overall it felt a bit like we just talked past each other and didn’t really engage fully with the other's points of view. So I felt I should just come back and respond to you about the upsides rather than leave this as it was.

– – 

Some views I have (at least from a UK perspective, cannot comment on US):

  • DISAGREEMENT: I think there is no shortage of policy proposals to push for. You link to Luke's post and I reply directly to that here. There is the x-risk database of 250+ policy proposals here!! There is work on policy ideas in Future Proof here. There are more clear-cut policies on general risk management and biosecurity, but there are good ideas on AI too, like: government being cautious about AI use and development by the military (see p42 of Future Proof), or ensuring government has incentives to hire competent, risk-aware staff. I never found that I was short of low-downside things to advocate for on AI at any point in my 2 years working on policy. So maybe we disagree there. Very happy to discuss further if you want.
  • AGREEMENT: I think policy may still be low impact. AI policy (in the UK) in particular has a very minimal chance of reducing AI x-risks, a topic that is not currently in the policy Overton window. I do expect bio and general risk policy is more likely to be directly useful, but even then the effects on x-risks are likely to be small, at least for a while. (That said, I think most current endeavours to reduce AI x-risks or other x-risks have a low probability of success; I put most weight on technical research but even that is highly uncertain.) I expect we agree there.
  • POSSIBLE DISAGREEMENT: I think AI is not the only risk. I didn’t bring up AI at all in my post, and none of the examples I know of that applied to the LTFF were AI-focused. Yet you and Habryka bring up AI multiple times in your responses. I think AI risks are ~3x more likely than bio and unknown-unknown risks (depending on the timeframe), and I think bio and unknown-unknown risks are ~6x easier to address through policy work (in the UK). Maybe this is a crux. If the LTFF thinks AI is 1000x more pressing than other risks then maybe this is why you do not value policy work. Could discuss this more if helpful. (If this is true it would be great if the LTFF was public about this.)
  • NEUTRAL: I think policy change is pretty tractable and cheap. The APPG for Future Generations seems to consistently drive multiple policy changes, with one every 9 months / £35k spent (see here). I don’t think you implied you had any views on this.

– – 
Anyway I hope that is useful and maybe gets a bit more at some of the cruxes. Would be keen to hear views on if you think any of this is correct, even if you don’t have time to respond in depth to any of these points.

Thank you so much for engaging and listening to my feedback – good luck with future grantmaking!!!

Thank you for sharing, Abergal!

I was just wondering if you could share more concrete examples of "taking up the space" risks. We're facing some choices around this in Australia at the moment and I want to make sure we've considered all downsides of uniting under a shared vision. Are the risks of "taking up the space" mainly:

  1. Less agile - multiple small organizations may be able to work faster
  2. Centralized risk - if one organization among multiple small organizations faces an issue (e.g. brand damage) this is less likely to affect the other organizations
  3. Less diversity of thought - there's value in taking different approaches to problems, and having multiple small organizations means we're at less risk of groupthink or quashing diversity of thought

I'd be keen to know if there are others we may not have considered.

All four current fund managers at LTFF have degrees in computer science, and none have experience in policy. Similarly, neither of OpenPhil's two staff members on AI Governance have experience working in government or policy organizations. These grantmakers do incredible work, but this seems like a real blind spot. If there are ways that policy can improve the long-term future, I would expect that grantmakers with policy expertise would be best positioned to find them.

EDIT: See below for the new LTFF grantmaker with exactly this kind of experience :)

All four current fund managers at LTFF have degrees in computer science, and none have experience in policy

This is not up to date, sorry! I think it should be updated in the next few days.

Ah, no worries. Are there any new grantmakers with policy backgrounds?

Rebecca Kagan is currently working as a fund manager for us (sorry for the not-up-to-date webpage).

That's really cool! Seems like exactly the kind of person you'd want for policy grantmaking, with previous experience in federal agencies, think tanks, and campaigns. Thanks for sharing. 

Yep great news!

Is Rebecca still a fund manager, or is the LTFF page out of sync?

(I work for the LTFF)

I think there are three implicit assumptions here that I might disagree with:

  1. Undergrad degrees are important determinants of what someone's background is, in the context of EA work (as opposed to e.g. what they spent their day-to-day thinking about or who they talk to)
  2. EA grantmakers with policy backgrounds are more rosy about policy people and interventions to improve the long-term future than EA grantmakers with CS backgrounds.
  3. Being more rosy than the existing LTFF members about policy interventions is in fact good for improving the long-term future.

Re: 2, I don't think we have strong reasons to think this is true. Anecdotally, I have more of a forecasting background than other LTFF fund managers, and I think I am not more optimistic about forecasting grants than the rest of LTFF.

There's a version of your comment that I do agree with, which is that all else equal it's great for fund managers to have a diversity of contacts and experiences, especially in areas where we get good applicants. To that end, I'm a bit worried that none of the current fund managers have done much object-level work in AI alignment, or to a lesser extent technical biosecurity*.

*In recent months I ended up being the primary investigator for most bio grants. I don't feel experienced enough to be very happy about my judgements, but I think I know enough high-level context and have enough contacts that it's mostly been going okay. This problem is ameliorated somewhat by having only so many biosecurity applications in the first place. Please consider pinging me, Asya, or Caleb if you have feedback on any of the grants I investigated.

I basically agree with this. On 1, undergrad degrees aren’t a great proxy, but the people listed on the LTFF site in particular are all career engineers. On 2, your description sounds like the correct general case, but in a case where non-policy people are questioning the effectiveness of any policy work on the grounds that policy is ineffective, I would expect people who’d worked on it to usually have a brighter view, given that they’ve chosen to work on it. 3 is of course up for debate and is the main question.

As someone with a fair amount of context on longtermist AI policy-related grantmaking that is and isn't happening, I'll just pop in here briefly to say that I broadly disagree with the original post and broadly agree with [abergal's reply](https://forum.effectivealtruism.org/posts/Xfon9oxyMFv47kFnc/some-concerns-about-policy-work-funding-and-the-long-term?commentId=TEHjaMd9srQtuc2W9).

Hey Luke. Great to hear from you. Also thank you for your push back on an earlier draft where I was getting a lot of stuff wrong and leaping to silly conclusions; it was super helpful. FWIW I don’t know how much any of this applies to OpenPhil.

Just to pin down what it is you agree/disagree with:

For what it is worth, I also broadly agree with abergal's reply. The tl;dr of both the original post and abergal's comment is basically the same: hey, it [looks like from the outside / is the case that] the LTFF is applying a much higher bar to direct policy work than to other work.

I guess the original post also said: hey, we might be leaving a bunch of value on the table with this approach, here are a few reasons why, and that is a thing to think about. Abergal's reply did not directly address this, although I guess it implied that Abergal is happy with the current status quo and/or doesn't see value in exploring this topic. So maybe that is the thing you are saying you agree with.

 

 

Note to anyone still following this: I have now written up a long list of longtermist policy projects that should be funded, which gives some idea of how big the space of opportunities is: List of donation opportunities (focus: non-US longtermist policy work)