
Introduction

2022 was a year of growth for Giving Green. We doubled our team, added three new top climate nonprofit recommendations, and developed a comprehensive corporate climate strategy for businesses looking to take their climate action to the next level.

But as excited as we are about the positive impact we believe this growth will unlock, we are constantly on the lookout for ways we can improve our research. At the start of 2023, we took a step back to identify ways we can do better. Three review meetings, four spreadsheets, and 19,000 words later, we have a few ideas about ways we want to improve.[1] See below for three examples.

1. Early-stage prioritization

In 2022, we published an overview of the steps we take and the criteria we use to prioritize our research. However, we think that overview was overly simplistic and did not give enough insight into which topics we actually investigated.[2] For example, we think our deep dive on nuclear power made a reasonable case for why support efforts could be highly cost-effective, but it did not explain why we thought nuclear was more promising than similar geothermal efforts, or whether we had looked into geothermal at all.

To help address this, we are creating a public-facing dashboard that will share more detail on how we prioritize topics and what we have considered. This embodies our values of transparency and collaboration, and we are hopeful it will help others better understand and engage with us on our process.[3]

2. Cost-effectiveness analyses

We often use cost-effectiveness analyses (CEAs) as one input into our overall assessment of cost-effectiveness. However, many of the opportunities we view as most promising also have highly uncertain inputs.[4] Because of this, many of our CEAs primarily served as a way to (a) identify the parameters that affect how much a donation might reduce climate change and (b) assess whether it is plausible that a donation could be highly cost-effective.[5] For example, our Good Food Institute CEA helped us delineate two pathways by which the Good Food Institute might accelerate alternative protein innovation, and estimate that a donation could plausibly fall within the range of cost-effectiveness we would consider for a top recommendation.
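To make that kind of plausibility check concrete, here is a minimal, hypothetical sketch in Python. All of the numbers below are made-up placeholders, not values from our actual GFI or CATF analyses; it simply shows how uncertain inputs combine into a cost per tonne of CO2e avoided and how we compare the result against our rough benchmark and order-of-magnitude heuristic (see footnote 5).

```python
# A rough, hypothetical sketch of the kind of back-of-envelope check a CEA
# supports. All figures below are illustrative placeholders, not values from
# our GFI or CATF analyses.

def cost_per_tonne(donation_usd, prob_of_success, tonnes_avoided_if_success):
    """Expected cost (USD) per tonne of CO2e avoided, given uncertain inputs."""
    expected_tonnes_avoided = prob_of_success * tonnes_avoided_if_success
    return donation_usd / expected_tonnes_avoided

BENCHMARK_USD_PER_TONNE = 1.0   # rough CATF benchmark (see footnote 5)
PLAUSIBILITY_FACTOR = 10        # "within an order of magnitude" heuristic

# Two hypothetical scenarios for a single pathway, spanning the uncertainty
# in the inputs.
scenarios = {
    "pessimistic": cost_per_tonne(1_000_000, 0.01, 2_000_000),    # -> $50/tCO2e
    "optimistic": cost_per_tonne(1_000_000, 0.10, 20_000_000),    # -> $0.50/tCO2e
}

for name, cost in scenarios.items():
    plausible = cost <= BENCHMARK_USD_PER_TONNE * PLAUSIBILITY_FACTOR
    print(f"{name}: ${cost:.2f}/tCO2e -> plausibly top-recommendation range: {plausible}")
```

A real CEA involves many more parameters and scenarios than this sketch, but the underlying question is similar: which inputs drive the estimate, and could the result plausibly land within an order of magnitude of the benchmark?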

One of our core values is truth-seeking.[7] A CEA is one of many tools in our toolbox, but we want to see whether we can make it more useful. We are speaking with academics, researchers, and other organizations about ways to reframe our CEAs and/or increase the accuracy of their inputs. We also plan to revise how we communicate about when and how we use CEAs, to help readers better understand what we can (and cannot) learn from them.

3. External feedback

We are a small team that relies heavily on the expertise of others to guide our focus and critique our work. In the second half of 2022 alone, we had around 110 calls with various climate researchers, foundations, and organizations.[8] However, we were not always methodical about when we sought feedback, from whom we sought it, and how we weighted it relative to other inputs.

Though we think it is important to remain flexible, we are drafting guidelines to help increase the consistency of our approach to feedback.[9] We also plan to introduce a more formal external review step for our flagship research products.[10] As part of our commitment to our value of humility, we are especially keen to ensure we receive a diversity of feedback and proactively engage with stakeholders who may have different or contrary views to our own.[11]

It is all uphill from here

Identifying issues is much simpler than crafting solutions, but we are excited for what lies ahead and look forward to improving our research to maximize our impact. If you have any questions or comments, we are always open to feedback. Otherwise, stay tuned for more!

  1. ^

    The actual meeting count is probably higher, but we count here only research meetings in which we specifically focused on reviewing our 2022 research process and products. The spreadsheets comprised compilations of content-specific/general issues and content-specific/general improvement ideas. The 19,000-word count is based on the content of the four spreadsheets.

  2. ^

    For example, our six-step “funnel” process describes the most formal and methodical way in which we initially seek to identify promising funding opportunities. In practice, we also use other approaches to add opportunities to our pipeline, such as speaking with climate philanthropists or reviewing new academic publications.

  3. ^

    See About Giving Green, “Our values” section.

  4. ^

    We think this is most likely the case for two main reasons: (1) many climate funders explicitly or implicitly value certainty in their giving decisions, so this means less-certain funding opportunities are relatively underfunded; and (2) we think some of the most promising pathways to scale (e.g., policy influence and technology innovation) are also inherently difficult to assess due to their long and complicated causal paths.

  5. ^

    We use rough benchmarks as a way to compare the cost-effectiveness of different giving opportunities. As a loose benchmark for our top recommendations, we use Clean Air Task Force (CATF), a climate policy organization we currently include as one of our top recommendations. We think it may cost CATF around $1 per ton of CO2 equivalent greenhouse gas avoided/removed, and that it serves as a useful benchmark due to its relatively affordable and calculable effects. However, this cost-effectiveness estimate includes subjective guess parameters and should not be taken literally. Instead, we use this benchmark to assess whether a giving opportunity could plausibly be within the range of cost-effectiveness we would consider for a top recommendation. As a heuristic, we consider an opportunity if its estimated cost-effectiveness is within an order of magnitude of $1/tCO2e (i.e., less than $10/tCO2e). For additional information, see our CATF report.

  6. ^

    Pathways: see [published] Good Food Institute (GFI) CEA, 2022-09-14, rows 10-14. Top recommendation cost-effectiveness: see footnote 5 for how we use Clean Air Task Force (CATF) as a rough benchmark, and why we consider an opportunity plausibly within range if its estimated cost-effectiveness is less than $10/tCO2e.

  7. ^

    See About Giving Green, “Our values” section.

  8. ^

    Estimate based on counting internal date-stamped call note files.

  9. ^

    These guidelines include suggestions for when to seek external feedback, who to ask for external feedback, and the types of questions/feedback we should expect to value from different inputs.

  10. ^

    For example, we may have a professor specializing in grid technology review a deep dive report on long-term energy storage. We may also formalize the ways in which we ask Giving Green advisors for input.

  11. ^

    See About Giving Green, “Our values” section.


Comments (2)



Thank you for the update. Improvement is always interesting, especially about cost-effectiveness analyses. :)

Agreed, Felix! Thanks for reading our updates.
