
Fortify Health appears to have received a new, $1-million GiveWell Incubation Grant in June 2019 (distinct from the ~$300,000 grant it received in June 2018). If I understand correctly, CEA’s Global Health and Development Fund has been the default funding source for GiveWell Incubation Grants since at least around that time. Was it the source of Fortify Health’s grant? (Fortify Health's blog post suggests that it was.) If so, is there any particular reason that isn’t reflected on the EA Funds website?




Hi HStencil, Catherine from GiveWell here—you're right that the grant was made from the EA Fund for Global Health and Development. Our page publishing process can take a long time, so we haven't yet published our write-up on the grant on GiveWell.org, but we're planning to in the future. We expect that information to be shared on the EA Fund page once it is published.

Thanks, HStencil, for flagging this. As Catherine said, the process of publishing reports can take some time, which is why there's been a delay adding these grants to the EA Funds website. However, in the interests of transparency, I've added placeholder payout reports for both the Fortify Health grant and another recent grant to One for the World, which is also awaiting its full report. We'll update these reports as soon as GiveWell has completed their publication process.

Thanks for responding, Catherine and Sam (and also for posting those payout reports on EA Funds). I understand that the process of releasing a comprehensive write-up on each grant may take some time, but to me, it seems better, as a matter of policy, to let donors know about the existence of new grants within, say, 30 days of their being made than not to disclose them at all for months. I understand there are pitfalls to releasing the information that a grant was made without also explaining the process and justification behind it, but at least as I understand the relevant considerations, the benefits of doing so outweigh the harms.

On a related note, do you know whether the Global Health and Development Fund's balance figure on EA Funds has been accurate since June, even though these grants were not included on the website?

4
SamDeere
Thanks – yeah, I agree, and we should have let donors know about this sooner. The Payout Reports shouldn't affect the Fund Balance, as that number is calculated directly from our accounting system. That said, this means it's subject to some of the vagaries of bookkeeping, so we ask donors to treat it as an estimate. At the moment we're waiting on a (routine) accounting correction that should be posted once our most recent audit results are finalised, which unfortunately means that the current figure is somewhat inaccurate. The Payout Report total, on the other hand, would have been inaccurate, as that's calculated by summing the figures from the published payout reports.

Okay, that makes sense – thanks for explaining.

One other thing: Any chance you or Catherine have an estimate of when we can expect a full write-up on the One for the World grant to be published? I'm curious mostly because it seems like a slightly atypical use of the Global Health and Development Fund (perhaps a better fit for the Meta Fund, from which One for the World received a grant earlier in the year).

1
CatherineHollander
Thanks, HStencil – I've passed your feedback on the timing of information sharing to the team for consideration. We hope to publish the One for the World grant write-up soon, but are not sure of the precise timing.

I'm glad to share some quick context for why this grant was made through the Global Health and Development Fund. The scope of the fund, as indicated in the "Fund scope" section here (https://app.effectivealtruism.org/funds/global-development), is to support activities whose ultimate purpose is to serve people living in the poorest regions of the world, including by raising additional funds for charities operating in those regions. The One for the World (OFTW) grant fits into this category. (We recently updated that page to make the fund scope clearer.)

One additional process piece that may be helpful to have in mind: each EA Fund manager has discretion over their own pool of funds, and sources and considers grants independently. It's possible there are grants, like OFTW, that fit into the scope of more than one fund. Part of the discussion around grantmaking is understanding other funding the group expects to receive, so we don't believe there's an issue if a group is supported by multiple EA Funds.
9
CatherineHollander
Hi HStencil, we just published the grant write-up. It is available here: https://www.givewell.org/about/impact/one-for-the-world/october-2019-grant

Hi Catherine,

Thank you for your thoughtful responses and for getting the grant write-up online. After a busy holiday season, I just had a chance to go through it, and I appreciate the rationale provided therein.

I also noticed the update you mentioned to the Global Health and Development Fund's webpage back in early December. While I'm grateful for the improved clarity with regard to the Fund's current scope, my memory is that the previous webpage included language specifically indicating that the Fund would only be used to support direct work in global health and development, not movement-building work (e.g. in the section discussing why potential donors might not want to give to the fund). As a donor to the Fund with a strong preference for supporting direct work over movement-building work, I can say this language was part of the reason I decided to support the Fund some time ago. While I am confident that this was not anyone's intent, an outside observer might well infer that the webpage's description of the Fund's scope was updated in the wake of the One for the World grant as a means of shielding that grant from the scrutiny of donors who ha... (read more)

6
SamDeere
Hi HStencil, Thanks for your thoughtful comments here.

Late last year I was working on updating and formalising the scope of each of the EA Funds, and in discussions with Elie and others at GiveWell, we updated the wording of the scope to explicitly include projects that were more indirectly serving the mission of the Fund:

A previous version of the page had the following wording on it:

The previous text wasn't intended to rule out donations to global-health-focused metacharities; rather, it was predicated on the assumption that Elie would be most likely to recommend charities doing direct work, and that donors who were looking for a larger multiplier on their global health donations might want to consider other options. Because we previously didn't have a formal policy ruling grants to meta/indirect projects in or out, our internal assessment was that such grants would be in scope (hence the approval of the grant).

However, I can see that this was pretty unclear, and that the text could easily be read as suggesting that the Fund would never make such grants, which could have set donor expectations that were different from our original intention. We should have noticed this discrepancy and taken it into account by deferring approval of any 'meta' grants until after we'd published the more formalised Fund scope – we didn't, and I want to apologise for that.

If you (or any other donors) would like a refund on donations made to the Fund because you feel you were misinformed about the Fund's scope, please email funds[at]effectivealtruism[dot]org.
3
HStencil
Hi Sam, thank you so much for explaining all of that — it's all good to know. I certainly wouldn't ask you to refund any of my donations (though I do appreciate the offer).

There's just one more thing I'd like to flag. Recently, I noticed the "Scope and Limitations" page on the EA Funds website for the first time, which says it exists in part "to set clear expectations" for donors. The section dedicated to describing the scope of the Global Development Fund reads, "The Global Health and Development Fund makes grants that aim to improve the health or economic empowerment of people around the world as effectively as possible," giving the following as examples of "expected recipients":

It seems to me that the One for the World grant falls outside the scope of those expected recipients. I understand that the expected recipients list is intended to be non-binding and that "if a Fund's management team decides that a grant fulfils the Scope/Limitations, and the spirit of the Expected Recipients section, they may recommend the grant." However, if it's reasonably likely that the Global Development Fund will make more grants to movement-building organizations down the road, do you think the expected recipients list should be updated to reflect that?

Finally, the webpage says, "Where a grant is determined to be ambiguous with respect to scope . . ., approval may require additional scrutiny." If I understand correctly, you now agree that the One for the World grant was "ambiguous with respect to scope," but on account of your earlier understanding of the Fund's scope, you did not feel that way at the time of the grant. Accordingly, I assume that the One for the World grant did not receive additional scrutiny. Is that correct?

Thanks again for engaging with me here. I'm grateful for the thought.
2
CatherineHollander
Hi HStencil, Thank you for sharing these concerns. We're sorry that this grant came as a surprise, and that you would prefer that it hadn't been made via this EA Fund.

Some context on the fund may be helpful in explaining the decision to make this grant. The Centre for Effective Altruism set the original scope of the fund and asked Elie to serve as the manager recommending grants from the fund. Elie thought that a grant to One for the World might be better in expectation than GiveWell's top charities (the broad mandate for the fund), and staff at the Centre for Effective Altruism communicated to Elie that One for the World was within the scope of the fund. Elie elected to make the grant on that understanding.

However, we at GiveWell didn't confirm the language on the now-previous version of the fund page, which we believe said: "You might choose not to support the fund if you think donations to organizations working in Effective Altruism Movement Building will produce more money for highly effective global poverty charities than the money they receive." If we had done that, we would have had more questions about whether the grant was in the scope of the fund; failure to do so was an oversight by us and CEA.

Elie appreciates hearing from EA Fund donors about their preferences for allocating funding and would appreciate other donors communicating with him about their interests.
7
HStencil
Thank you for that explanation. I'm glad to hear that the language of the Fund's previous description would have raised questions at GiveWell about whether the One for the World grant was within the Fund's scope, had it been on the relevant individuals' radar at the time. In light of the fact that CEA told Elie the grant was within the Fund's scope, it's understandable that the GiveWell team did not pore over the Fund description to double-check CEA's judgment. While I'm curious about how CEA understood the scope of the fund internally at the time (e.g. is it their view that the scope has changed?), I'm glad that we are all on the same page about it now.

I'm also curious about when the GiveWell/CEA teams realized that the old EA Funds webpage's description of the Fund's scope might reasonably be read to exclude the One for the World grant. Was that realization the reason why the fund descriptions were updated back in late November/early December?

Additionally, I noticed you didn't comment on the issue of One for the World presenting itself as fully independent of GiveWell when in fact it is highly reliant upon GiveWell for funding. I understand that you, of course, can't speak for One for the World, but all the same, I think it's important for this to be addressed. With that in mind, would GiveWell support One for the World in taking steps to clarify the nature of its relationship with GiveWell on its website?
1
CatherineHollander
Hi HStencil,

"I'm also curious about when the GiveWell/CEA teams realized that the old EA Funds webpage's description of the Fund's scope might reasonably be read to exclude the One for the World grant."

We realized this when prompted by your comments here.

"With that in mind, would GiveWell support One for the World in taking steps to clarify the nature of its relationship with GiveWell on its website?"

We have shared this feedback with One for the World and understand they plan to update their site accordingly.
1
HStencil
Hi Catherine, thanks so much for clarifying that and for passing my feedback on to One for the World. I am thrilled to see that they have now added a new page to their website explaining the nature of their relationship with GiveWell in detail. To my eye, the page does a great job of providing donors with all of the information they might want to have and would be a good model for other organizations confronting similar issues.
4
Jack Lewars
Hi HStencil,

Thanks for your time in raising the points above. To introduce myself, I am the new Executive Director of One for the World. I think you make some very important points, and we have taken action to address several of them. I'm pleased that the thread seems largely to have been resolved positively. However, to respond directly on our own behalf:

* We take your point that we could be seen as a marketing investment by GiveWell. I think it slightly understates/misstates our work to suggest that we are a publicity effort or advertising campaign, but I don't think this is material to your point. Our founders did indeed decide to fundraise for GiveWell's recommended charities independently when they set up in 2014, although as part of a wider group of charities. We then fully aligned with GiveWell in April 2019 (see blog post here). GiveWell requested that we switch our portfolio to align with their recommendations as part of recommending a grant; we were enthusiastic about making this switch.
* While I agree with your points in the main, I think it's important to note a couple of things. First, while GiveWell does provide ~75% of our funding at present, we are working to diversify our funding, to make sure we can take a balanced view of GiveWell's work (and as a general risk-management strategy). While we consider GiveWell's research first class at present, as you do, we agree that we need to be able to review this relationship regularly and have backup plans in case we no longer feel comfortable raising only for GiveWell charities or accepting their funding. It's important to say that GiveWell have in fact encouraged us in this effort, by only granting us 75% of our operating costs for the 2020-21 financial year. They have made it clear to us that a key indicator of success is raising the deficit from elsewhere and that the second year of funding could be withheld if we are unable to do this.
* Second, we have tried to be transparent about our financial re
3
HStencil
Hi Jack, thank you so much for your thorough response to my concerns. I have seen the additions to your website, and I think they’re great. I should also note that I think One for the World is doing laudable and important work. I did not intend to suggest otherwise. As you say, I believe you “could be seen” as a publicity effort for GiveWell, but I certainly do not believe that characterization accurately captures the full scope of your activities or of your role in the broader EA ecosystem. On a similar note, I apologize for missing the acknowledgements of your financial relationship with GiveWell in the blog posts you mentioned and in your 2018 annual report. I admit I simply was not looking that hard for disclosures – I just browsed what I took to be the main pages of your website. I am thrilled to see that these pages now feature a similar (or greater) level of transparency. Finally, I am glad to hear that you are engaged in efforts to reduce your reliance on GiveWell for funding and that GiveWell is supporting you in those efforts. That strikes me as an excellent best practice. Thanks again for your response, for the changes, and for all of the great work you’re doing at One for the World.
3
Ben Pace
You included a full-stop at the end of the link, so it goes to a broken page ;)
2
Aaron Gertler 🔸
Fixed.
2
AnonymousEAForumAccount
Thanks for explaining how this works, Sam. I've got a few follow-up questions about fund balances. The animal and meta fund pages both show new grant reports with November payout dates – are these grants reflected in the fund balances? Both funds have end-of-November fund balances that are in the same ballpark as their November grants, suggesting they might not be updated. This makes sense. Generally speaking, how accurate should we expect those estimates to be? Is it possible to say something along the lines of "we expect the fund balance estimates to be accurate plus or minus 10% and generally not off by more than $100,000"?