weeatquince

I think 90% of the answer to this is risk aversion from funders, especially the LTFF and OpenPhil, see here. As a result, many things have struggled for funding, see here.

We should acknowledge that doing good policy research often involves actually talking to and networking with policy people. It involves running think tanks and publishing policy reports, not just running academic institutions and publishing papers. You cannot do this kind of research well in a vacuum. 

That fact, combined with funders who were (and maybe still are) somewhat against funding people to network with policy makers in any way (except for people they knew extremely well), has led to (and maybe is still leading to) very limited policy research and development happening.

 

I am sure others could justify this risk-averse approach, and there are certainly benefits to being risk averse. However, in my view this was a mistake (and is maybe an ongoing mistake). I think it was driven by the fact that funders were/are: A] not policy people, so did/do not understand the space and were/are hesitant to make grants; B] heavily US-centric, so did/do not understand the non-US policy space; and C] heavily capacity constrained, so did/do not have time to correct for A or B.

 

– – 

(P.S. I would also note that I am very cautious about saying there is "a lack of concrete policy suggestions", or at least about using the phrase without being clear what is meant by it. This phrase is used as a reason for not funding policy engagement and for saying we should spend a few more years doing high-level academic work before ever engaging with policy makers. I think this is just wrong. We have more than enough policy suggestions to get started, and we will never get very good policy design unless we get started and interact with the policy world.)

Thank you Asya for all the time and effort you have put in here and the way you have managed the fund. I've interacted with the LTFF a number of times and you have always been wonderful: incredibly helpful and sensible.

Thanks Linch. Agree feedback is time consuming and often not a top priority compared to other goals.

The short summary reasons in this post for why grants are not made are great and very interesting to see.

I was wondering: do unsuccessful grant applicants tend to receive this feedback (of the paragraph-summary kind in this post), or do they just get told "sorry, no funding"?

I wonder if this could help the situation. If applicants have this feedback, and other grantmakers know that applicants get feedback, they can ask for it. I've definitely been asked "where else did you apply and what happened?" and been able to say "I applied for x grant and got feedback xyz, of which I agree with this bit but not that bit". (Or maybe that doesn't help, for some of the reasons in your "against sharing reasons for rejection" section.)

(Also, FWIW, if there is a private behind-the-scenes grantmaker feedback channel, I'm not sure I would be comfortable with the idea of grantmakers sharing information with each other that they weren't also willing to share with the applicants.)

1.
I really like this list. Lots of the ideas look very sensible.
I also really, really value that you are doing prioritisation exercises across ideas and not just throwing out ideas that sound nice without any evidence of background research (like FTX, and others, did). Great work!
 

– – 
2. 
Quick question about the research: does the process consider cost-effectiveness as a key factor? For each of the ideas, do you feel you have a sense of why it has not happened already?
 

– – 

3.
Some feedback on the idea here I know most about: policy field building (although admittedly from a UK rather than US perspective). I found the idea strong and was happy to see it on the list, but I found the description of it unconvincing. I am not sure there is much point getting people to take jobs in government without giving them direction, strategic clarity, things to do, or levers to pull to drive change. Policy success needs an ecosystem: some people in technocratic roles, some in government, some in external think-tank-style policy research, some in advocacy and lobby groups, etc. If this idea is only about directing people into government, I am much less excited by it than by a broader conception of field building that includes institution building and lobbying work.

Also keen on this.

Specifically, I would be interested in someone carrying out an independent impact report for the APPG for Future Generations and could likely offer some funding for this.

> why is Tetlock-style judgmental forecasting so popular within EA, but not that popular outside of it?

The replies so far seem to suggest that groups outside of EA (journalists, governments, etc.) are doing a smaller quantity of forecasting (broadly defined) than EAs tend to.

This is likely correct, but it is also the case that groups outside of EA (journalists, governments, etc.) are doing different types of forecasting than EAs tend to. There is less "Tetlock-style judgmental" forecasting and more use of other tools such as horizon scanning, scenario planning, trend mapping, and so on.

(E.g. see the UK government Futures Toolkit, although note that the UK government also has the more Tetlock-style Cosmic Bazaar.)

So it also seems relevant to ask: why does EA focus so heavily on "Tetlock-style judgmental forecasting", rather than other forecasting techniques, relative to other groups?

I would be curious to hear people's answers to this half of the question too. Will put my best guess below.

– – 

My sense is that (relative to other futures tools) EA overrates "Tetlock-style judgmental" forecasting a lot and that the world underrates it a bit. 

I think "Tetlock-style" forecasting is the most evidence based, easy to test and measure the value of, futures technique. This appeals to EAs who want everything to be measurable. Although it leads to it being somewhat undervalued by non-EAs who undervalue measurability.

I think the other techniques have been slowly developed over decades to be useful to decision makers. This appeals to decision makers, who value being able to make good decisions and having useful tools. However, these techniques are significantly undervalued by EA folk, who tend to have less experience and a "reinvent the wheel" approach to good decision making, to the extent that they often don't even notice that other styles of forecasting and futures work exist!

Hi John.

Thank you for the feedback and comments.

On deforestation. Just to be clear, the result of our prioritisation exercise was our top recommendations (ideas 1-2) on subscription models for new antibiotics and stopping dangerous dual-use research. Ideas 4-7 (including the deforestation one) did well in our early prioritisation, but ultimately we did not recommend them. I have made a minor edit to the post to try to make this clearer.

The stopping deforestation report idea was originally focused on limiting the human-animal interface to prevent zoonotic pandemics (which did well in our prioritisation). Then, in the report, we prioritise between the ways one might go about stopping zoonoses. The summary is:

| Approach | Scale of issue | Impact of approach | Tractability | Neglected | Avoids risk of increasing pandemic risk | Externalities for other cause areas (e.g. climate, animal welfare) | Overall sense of how promising |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Reduce deforestation | High | High | Moderate | Moderate | High | Unclear for animals; positive for climate | High |
| Wild animal supply chain and trade regulation | High | Moderate | Low | Moderate | High | Neutral to slightly positive | Moderate |
| Education and/or regulation of biosecurity on farms | Moderate | Moderate | Low | High | High | Neutral to slightly positive | Moderate |
| Vaccination of animals | Low | Low | Moderate | High | High | Neutral to slightly positive | Low |
| Better detection at high-risk human-animal interface | Moderate | Low | Low-Moderate | Low | Moderate | Slightly negative | Low |



Unfortunately the full report is not quite ready for publication; hopefully it will be available soon.

 

Our $ per tCO2 estimates were from two sources:

Sorry that we missed your estimate.

 

We didn't look into gene synthesis risks, so we might have missed something there, although potentially a charity working on reducing dual-use research could play a role in limiting those risks.

I would have to check this with Akhil, the lead author, but my understanding is that this CEA compares a case where the PASTEUR Act passes with a business-as-usual case where very few (but not zero) new antibiotics are developed.

I agree this is probably overly optimistic, as we can and probably should assume that someone is likely to do something about antibiotic resistance in the next few decades. Good spot!
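For what it's worth, here is a minimal sketch in Python of how one might discount an estimate for that kind of counterfactual. The numbers are entirely illustrative (none of them are from our actual CEA):

```python
def counterfactual_adjusted_benefit(gross_benefit: float, p_anyway: float) -> float:
    """Discount a policy's estimated benefit by the probability that a
    similar outcome would have happened anyway under business as usual."""
    return gross_benefit * (1.0 - p_anyway)

# Illustrative numbers only (not figures from the actual CEA):
gross = 100.0  # benefit if the PASTEUR Act passes, in arbitrary units
for p_anyway in (0.0, 0.3, 0.6):
    adjusted = counterfactual_adjusted_benefit(gross, p_anyway)
    print(f"P(someone acts anyway) = {p_anyway:.0%} -> adjusted benefit = {adjusted:.1f}")
```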

And thank you for the great questions and for looking over things in such detail.

Hi Ben, happy to justify this. I was responsible for the alternate estimate of 10%-17.5%.

– –

These numbers are consistent with our other estimates of policy change. Other estimates were easier (but not easy) to justify, as they were in better-evidenced areas, and tended to range between 5% and 40% (see 2022 ideas here). The areas with the best data were road safety policy, where we looked at 84 case studies and found a 48% chance of policy success, and food fortification policy, where we looked at 62 case studies (in the Annex) and found a 47% chance of success. We scaled these numbers downwards, but they are not low.

Other EA estimates are also in this ballpark – e.g. Nuno's estimates on OpenPhil criminal justice reform give a 7-50% and a 1-10% chance of policy change success. Personal experience also suggests reasonably high numbers – CE has one long-running policy charity (LEEP), and it seems to have been pretty successful, driving policy change within 6 months. My own experience was also fairly successful. I think this allows us to put relatively high priors on policy change of a 5 to 20 percentage point increase.
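To show roughly how base rates like these feed into a prior, here is a sketch. The success counts are back-calculated from the ~48% and ~47% figures above, and the downward scaling factor is an assumption of mine for illustration, not the adjustment we actually used:

```python
# Case-study counts from the reviews cited above; success counts are
# back-calculated from the reported ~48% and ~47% success rates.
case_studies = {
    "road safety policy": (84, 40),
    "food fortification policy": (62, 29),
}

SCALING = 0.5  # assumed downward adjustment; not the factor we actually used

for area, (n, successes) in case_studies.items():
    base_rate = successes / n
    print(f"{area}: base rate {base_rate:.0%}, scaled prior ~{base_rate * SCALING:.0%}")
```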

That said, my 7.5 percentage point increase here was guesswork. I wish I had more time to look into it, but it was driven by intuitions based on case studies and experience that are mostly not US-based. I would be open to hearing from experienced US policy advocates that it is too high (or too low).

– – 

On the articles you link to:

The informational lobbying post estimates that lobbying raises the chance of policy change from "very low" to 2.5%. I think this is consistent with an increase from 10% to 17.5%.

  • My view on policy (based on working in the field) is that the most impact per $ comes when advocating for a policy issue that might just happen anyway – e.g. advocating for something where the political door is ajar and it just needs a nudge. I think moving the needle from 15% to 40% is about as easy as moving the needle from 1% to 5% (a worked sketch of this comparison is below).
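Here is that comparison spelled out as a quick sketch, using the three probability moves mentioned in this thread and showing both the percentage-point lift and the odds ratio for each:

```python
def lift(p_before: float, p_after: float) -> tuple[float, float]:
    """Return the percentage-point increase and the odds ratio for a
    change in the probability of policy success."""
    odds = lambda p: p / (1.0 - p)
    return p_after - p_before, odds(p_after) / odds(p_before)

# (baseline, after-advocacy) probabilities discussed above
for before, after in [(0.01, 0.05), (0.10, 0.175), (0.15, 0.40)]:
    points, odds_ratio = lift(before, after)
    print(f"{before:.1%} -> {after:.1%}: +{points:.1%} points, odds ratio {odds_ratio:.1f}")
```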

I am unfamiliar with the LSE study and will have to have a look at it. Maybe it will lead me to be more pessimistic.

– – 

Note 1: I work for CE. Note 2: I think policy change in the animal welfare space is a bit different, so assume I am not talking about animal welfare work in any of the above.
 
