Thanks for doing this!
As a climate person trying to take a balanced perspective on this, I find that the framing of climate here does not come across as very balanced. @John G. Halstead might have more detailed comments, but it seems that examples are selectively chosen in one direction (motivating the severity of the risk).
I am extremely pro alternative proteins (see e.g. here), but I think we still need to be more honest about the climate impacts of agriculture, both in terms of epistemic hygiene and in terms of argumentative strategy. I don't think we need to exaggerate the case for APs (the case is good already!), and by exaggerating some claims we make the whole thing less believable.
At the beginning of the interview, animal agriculture is discussed as a huge, huge contributor to climate change, a major driver, without any numbers being presented.
The exact number depends on the choice of global warming potential (essentially: what timeframe of warming impact do we care about?). I discuss this in a bit more detail here, but on typical metrics the impact of animal agriculture is something like ~15%, see e.g. here from OWID. There are arguments for both more optimistic and more pessimistic numbers (the latter caring less about short-term warming than the underlying OWID data), but I think ~15% is a reasonable prior before weighing them all in detail:
This is quite significant, but listening to the interview and to similar messaging, I would be surprised that it is "only" 15%. I think it would be more honest and more robust to say something like "alternative proteins are a promising strategy for an otherwise hard-to-decarbonize sector" (which is very exciting; few hard-to-decarbonize sectors have such promising technological solutions!), but not to suggest that it is anywhere close to the importance of transforming our energy system (~75% vs. ~15%, a 5x difference).
Thanks so much! I know the problem of late answers :)
I think even for something that seems quite certain at the intervention level (if you think that is true for a malaria vaccine), one still needs to account for funding and activity additionality. This adds uncertainty and, relatively speaking, lowers the estimate compared to GD, where the large size of the funding gap ensures that funding and activity additionality are near 1 (i.e. no discount).
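To make the discounting explicit, here is a minimal sketch of how funding and activity additionality multiply into an adjusted estimate. All numbers are illustrative assumptions, not real estimates for any charity:

```python
def adjusted_effect(raw_effect, funding_additionality, activity_additionality):
    """Discount a raw cost-effectiveness estimate (in multiples of GD)
    by the probability that the funding is truly additional and that
    the funded activity would not have happened anyway."""
    return raw_effect * funding_additionality * activity_additionality

# Illustrative numbers only: a seemingly 8x-GD intervention with
# 60% funding additionality and 70% activity additionality...
leveraged = adjusted_effect(8.0, 0.6, 0.7)      # 3.36x GD after discounting
# ...versus GD itself, whose large funding gap keeps both factors near 1.
givedirectly = adjusted_effect(1.0, 1.0, 1.0)   # 1.0x GD, no discount
```

The point is simply that multiplicative discounts compound, so apparently large headline multiples shrink quickly once additionality is below 1.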
Given that Open Philanthropy seems to believe that typical GiveWell recommendations are dominated by more leveraged ones (e.g. using advocacy, induced technological change), at least for risk-neutral donors, I am a bit confused by the anchoring on GiveWell charities.
Even if GD were closer to AMF than GiveWell thinks, this would not put GD close to the best thing one can do to improve human welfare, unless one applies a very narrow frame (risk aversion, restricting to what is highly scalable via existing charities right now).
Or, put a bit differently:
Thanks for doing this and kudos for publishing results that are in tension with your (occasional) employer.
Interesting to see a clear statement by OP on the expected dominance of advocacy and other leveraged interventions over traditional direct delivery work.
(Full disclosure: I sometimes work out of the same coworking space as Justus and Vegard and we occasionally have team lunches.
Given that they were a potential grantee for some time (and indeed became a grantee for a small grant in 2023), I've avoided further socializing beyond those office contexts. They also don't know I am writing this.)
This is an exciting broadening of work!
I haven't always agreed with the underlying theory of change of the climate work, but I've consistently experienced the team of Future Matters as quite thoughtful about policy change and social movements and cultivating an expertise that is quite rare in EA and seems underprovided.
I think the idea of an energy descent is extremely far outside the expert consensus on the topic, as Robin discusses at length in his replies to that post.
This is nothing we need to worry about.
Thanks for the good exchange -- that all makes sense.
I am unsure whether we actually disagree on learning rates for SMRs. We are just in the process of building a comparative tool to clarify our expectations of the returns of different innovation advocacy bets, and, IIRC, SMRs sit in the middle range there, based on work like Malhotra and Schmidt (2020?, from memory) on how design complexity and customization shape expectable learning rates.
We'll publish this later in the fall and then we'll see whether we disagree :).