Executive Summary
SoGive ran a pilot grants program and ended up granting a total of £223k to 6 projects (see the section “Our grantee payouts” for details). We were pleased to be able to give high quality feedback to all rejected applicants (which appears to be a gap in the grants market), and were also able to help some projects make tweaks such that we could make a positive decision to fund them. We also noted that despite explicitly encouraging biosecurity projects we only received one application in this field. We tracked our initial impressions of applications and found that the initial video call with applicants was highly discriminating and helped unearth lots of decision-relevant information, but that a second video call didn’t change our evaluations very much. Given that we added value to the grants market by providing feedback to all applicants, helped some candidates tweak their project proposals, and identified 6 promising applications to direct funding towards, we would run SoGive Grants again next year, but with a lighter-touch approach, assuming that the donors we work with agree to this or new ones opt to contribute to the pool of funding. This report also includes thoughts on how we might improve the questions asked in the application form (which currently mirrors the application form used by EA Funds).
Introduction
Back in April, SoGive launched our first ever applied-for granting program. That program has now wrapped up, and this post sets out our experiences and lessons learned. For those of you not familiar with SoGive, we’re an EA-aligned research organisation and think tank.
This post will cover:
- Summary of the SoGive Grants program
- Advice to grant applicants
- Reflections on our evaluation process and criteria
- Advice for people considering running their own grants program
- Our grantee payouts
We’d like to say a huge thank you to all of the SoGive team who helped with this project, and also to the external advisors who offered their time and expertise. Also, as discussed in this report, we referred back to a lot of publicly posted EA material (typically from the EA Forum). To those individuals and organisations who take the time to write up their views and considerations online: it is incredibly helpful and it affects real-world decisions - thank you.
If any potential donors reading this want their funding to contribute to the funding pool for the next round of SoGive grants, then please get in touch (isobel@sogive.org).
1. Summary of the SoGive Grants program
- Why run a grants program?
- Even at the start of 2022, when funding conditions were more favourable, we believed that another funding body would be valuable. This reflected our view that more vetting capacity is valuable. We also thought Joey made a valuable contribution in this post.
- Part of our work at SoGive involves advising high net worth individuals, which means we regularly scope out opportunities for high impact giving; we decided to formalise and open up this process in order to find the highest impact donation opportunities. Prior to SoGive Grants, we had tended to guide funds towards organisations that Founders Pledge, Open Phil, and GiveWell might also recommend, along with interventions that SoGive had specifically researched or investigated for our donors. Especially in the case of following Open Phil’s grants, we had doubts that this was the highest impact donation advice we could offer, since Open Phil makes no guarantee that a grantee still has room for more funding (after receiving a grant from Open Phil).
- We also noticed a gap in the market for a granting program that provided high quality feedback (or, for some applicants, any feedback at all) to applicants. This seems like an important piece of information that is missing, and could plausibly help lots of EAs make better plans.
- We've also heard it said more than once that it makes sense for smaller organisations to fund smaller scale things and then the bigger organisations can pick them up as they grow. We didn’t run the program to test this hypothesis, but given the proliferation of regranting programs seen in 2022 it seems like consensus might be starting to form around this view.
- How were the grants funded?
- Grants were funded by a few private donors. Most of the donors that SoGive works with are people earning to give working in finance roles (not primarily crypto-related) and this is consistent with sources of funding for this round of SoGive grants. Most of the money came from one person who works for a major well-known investment bank.
- Who was eligible?
- You can see our application form here, but we were open to any projects that looked high-impact from an EA lens (apart from AI safety research, as we perceived there to be a strong supply of funding for AI alignment projects, whether early stage or more established, and we don’t believe this is our area of comparative advantage).
- We particularly encouraged applicants focusing on:
- Biosecurity/pandemic risk, especially those applications which cover “alternative” (i.e. not technical) ways of reducing pandemic risk; technical biosecurity (e.g. funding biologists to work on biosecurity) is also covered by other funders (e.g. the Open Philanthropy Biosecurity Scholarships).
- Climate change, especially in ways that involve keeping fossil fuels in the ground.
- Research or policy work that enables the world to improve, preferably dramatically; research and policy work which appears effective through a longtermist lens is more likely to be viewed positively, although we may also consider neartermist work in this vein if there is a strong reason to believe that the work is neglected and high impact.
- Who applied?
- The 26 applications we received can be classed under the rough categories below.
- One interesting update was the lack of biosecurity projects we saw, despite explicitly encouraging them in applications.
| Category | Applications |
| --- | --- |
| Meta/EA community building | 8 |
| Public policy/improving governance | 8 |
| Hard to categorise | 3 |
| Existential risk (multiple causes) | 3 |
| Climate change | 2 |
| Biosecurity | 1 |
| Nuclear weapons | 1 |
| Total | 26 |
- How did the program run?
- Application form: We started with a relatively light touch grant application form (similar to the EAIF form to reduce the burden on applicants).
- Video call 1: After some initial filtering we then conducted video calls with the most promising applicants.
- This involved asking questions about the history of the project and its current status, and then some more in-depth questioning on their theory of change, the status of the problem-area field and other efforts to tackle the same problem, their perceived likelihood of success, worst-case scenarios, counterfactuals (for both the project’s trajectory and the applicant’s time), and the amount of money asked for (and which parts of the project they would prioritise). If we ran grants again, we would also ask applicants to steelman the best case scenario for their application in the video call.
- SoGive meeting 1: Then we had a SoGive team meeting to discuss the applicants and key cruxes etc. This allowed the wider team to share their intuitions and knowledge of the proposed interventions and dig into cruxes which would help determine whether projects were worth funding or not.
- Video call 2: From there we conducted further research before another round of video calls to give applicants the chance to address particular concerns and discuss more collaboratively how the projects could be tweaked to be more successful.
- SoGive meeting 2: Then we had another SoGive wide team meeting to discuss applications again, and make final recommendations.
- Data on how applications progressed through the process is listed below. For a more detailed evaluation of this process please see Appendix A. The most decision-relevant element of the evaluation process was the post-call-1 multi-variable ratings, and we found that the second video call added relatively little value. As such, if we ran the grants again we would probably only conduct the initial video call with applicants.
- 26 initial applications (£1.98m)
- 18 applications (£1.46m) took part in a round 1 video call.
- 7 applications (£594k) progressed to round 2 and took part in another video call.
- 6 applications (£265k-£495k) met our bar for funding (SoGive Gold Standard).[1]
- How did you decide who to award money to?
- See the section “Reflections on our evaluation process” below but in general it was a relatively large team effort to ensure breadth of opinions and worldviews.
2. Advice to grant applicants
- Theories of change
- A surprising number of applicants proposed relatively promising projects on topics of great importance and/or neglectedness, but didn’t include proper theories of change - for example, how their highly academic research might eventually result in behaviour or policy change.
- We didn’t expect projects to be executing on all the steps within the theory of change, and we were relatively open to funding things that had longer timelines on their theories of change, or projects that were only influencing the inputs-to-outputs phase and not the outputs-to-outcomes phase. However, some acknowledgement of the potential pitfalls and required actions (even if carried out by someone else) on the path to impact would have been reassuring. It made us nervous when applicants didn’t seem to be conscious of where in the theory of change their project sat, as it suggested to us they hadn’t considered their strategy sufficiently.
- To make their application (and project) even stronger, applicants should, after considering their theory of change, identify their allies in their plan for impact and begin building relationships with them. Another useful exercise, once applicants have identified areas where their project has weaknesses, is to identify advisors who they could bring in to help with those aspects.
- General lack of detail.
- Simple things like stating expected FTE hours for each step of a project, and/or anticipated deadlines, would reassure us that an applicant is capable of executing on the project.
- Beware surprising and suspicious convergence. It felt confusing when it just so happened that a person with a background in A discovered that using the methodology dominant in field A was the answer to all of our problems in field B etc. It would be better to be honest and explain that you’re starting using tools from field A because you already have expertise and network in it, rather than leaving grantmakers doing endless research trying to understand how methodology A came out as the most impactful approach to the problem at hand.
- Downside risk
- There’s a school of grantmaking thought which holds that, because the expected impact of longtermist projects is very hard to estimate (and confidence intervals on any estimate are very large), the best you can do is to fund projects run by people who execute very well, and where it is believed that downside risks can be effectively managed. People with these views tend to be longtermists who place a high value on the far future, so if you can make human extinction e.g. 10^-6 % to 10^-5 % less likely, that (under certain assumptions) would be hugely impactful[2]. They also believe that it's very hard to fully understand how your actions are going to affect the far future. Taken together, this means that as long as work is going to have any impact on the far future, it's usually the sign of the expected value that matters more than its magnitude (because if it succeeds in having the right sign, it's more or less bound to exceed the SoGive Gold Standard); a rough worked version of the underlying arithmetic follows this list.
- Too few applicants had spent serious time considering these downside risks, or if they had, failed to explicitly address these in their applications.
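To make the footnoted figures concrete, here is a worked version of that arithmetic. It is only a sketch, and it assumes the LTFF estimate cited in footnote 2 (a 0.01% to 0.1% reduction in existential risk per billion dollars) is in the right ballpark:

```latex
% Rough arithmetic behind the 10^{-6}% to 10^{-5}% figure in footnote 2.
% Assumed input: LTFF marginal cost-effectiveness of a 0.01%-0.1%
% x-risk reduction per $1bn.
\[
\frac{\$100\text{k}}{\$1\text{bn}} = 10^{-4}
\qquad\Rightarrow\qquad
\text{required risk reduction} \approx 10^{-4} \times (0.01\% \text{ to } 0.1\%) = 10^{-6}\% \text{ to } 10^{-5}\%
\]
```

In other words, under these assumptions a $100k grant only needs to achieve a very small absolute reduction in existential risk to clear a longtermist funding bar, which is why the sign of the effect tends to dominate the discussion.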
- Are you the right fit?
- There were a few projects that contained promising ideas but the project lead wasn’t quite the right fit for a variety of reasons (not enough time to commit, wrong skillset to ensure success of the project, lack of necessary network in the problem field, etc).
- Ask yourself seriously: would you be open to taking on new team members, and have you considered which alternative roles on the project might fit you better?
3. Reflections on our evaluation process and criteria
- We used a relatively heavy touch process and spent a good chunk of time researching and evaluating grants we didn’t initially think were super promising. We tracked our perceptions of grants over time, to see how much our initial impressions changed upon further research and conversation with grant applicants, and in general found they didn’t shift too much with further research.
- Please see Appendix A for a more detailed evaluation of our evaluation process.
- We used the SoGive Gold Standard as the bar for whether or not we’d recommend funding a grant. It’s explained in more detail here, but essentially the bar is £5000 or less to save a life.
- We used an ITN+ framework to evaluate grants, with the plus including considerations around delivery of the project and information value of the project.
- Each project was given a score on each of the considerations, using a grading table for consistency and with each consideration weighted to account for its perceived importance. Although we totalled these scores at the end, we mainly used them as an input to a holistic assessment, rather than as the deciding factor. At each round in the process, we took the scores we had assigned in the previous round and updated them. Scores were not used as a 'blind' tool, but rather as a useful input in a discussion about which projects to take forward.
- When thinking about the perceived risks in the delivery of the proposed projects we considered how binary or stepped success is, and you can see an extract from our scoring system here:
- “Delivery = This is your confidence level in how likely you think the applicant is to achieve their proposed outputs, and deliver on what they promise to. Here you should take into account:
- The perceived competence of the applicants (e.g. track record/general impression)
- How well thought out the project proposal, success metrics and timelines etc are
- How stepped or linear success is, for example will delivering on 80% of the project proposals mean 80% of the expected value is achieved, or is it a binary impact and by failing to deliver on any fronts the project will only have 20% impact (or even 0% impact)”
- The range of possible scores went from 0 to 8, and the average delivery score for projects that didn't get funded was 4.2 (actual scores ranging from 2 to 8), compared to an average of 5.39 for those projects which did get funded (actual scores ranging from 3 to 8).
- When rating projects for delivery we mainly relied on evidence of the applicants' success in other domains, our intuitions about how well thought through their theory of change was, and evidence on the success/failure of other similar projects (and what could be learnt from them). Evaluating how binary success might be for each project was reasonably easy, although it required some understanding of specific domains. For example, we had to think about how important credentials are in field X when promoting certain ideas, and whether or not the only way to influence actions in field Y is through specific policy decisions, rather than just raising awareness of an issue.
- In terms of decision relevance: explicitly thinking about delivery in our discussions didn’t seem to change our opinions much, as we’d generally incorporated thinking about likelihood of success and what different outcomes might look like into our overall perception of the projects. However, being able to see that we did fund projects where we had a lower expectation of successful delivery may provide some confidence that we weren’t too risk averse as funders.
- When thinking about information value we considered the information value to both us and the wider EA community. This assumes that there are feedback loops, but even if there aren’t there will be some gain from the learning around implementation techniques (e.g. it will likely help to map out the problem space further and highlight the feasibility of certain projects, even if we can’t measure the success of the project very clearly). You can see an extract from our scoring system here:
- “How useful this information would be for other contexts
- For example, generating new data on the question of how many people are interested in EA-style thinking (through something like the SoGive core hypothesis[3]) is useful beyond just charitable giving, but also for EA community building efforts (so we know how broad the pool of potential EA-interested people really is).
- Gathering information about the cost-effectiveness of radio messaging in country X is plausibly useful for a variety of global health and wellbeing interventions in country X (and also adds data to the question of how likely is radio messaging to be cost effective in another country e.g. country Y).
- Feedback loops, e.g. how long the path to impact is (e.g. fail fast type considerations) and how well-defined the concept of impact is
- Interventions aimed at targeting career pathways (e.g. 80k or Effective Thesis) have longer feedback loops and many confounding variables (although these can be somewhat overcome using high quality surveying and tracking).”
- The range of possible scores went from 0 to 6, and the average information value score for projects that didn't get funded was 3.2 (actual scores ranging from 0 to 6) compared to an average of 3.62 for those projects which did get funded (actual scores ranging from 2 to 5).
- When rating projects for information value, we considered both factors mentioned above and also considered how integral measurement and communication of results seemed to be within the project's plans.
- In terms of decision relevance: explicitly thinking about information value in our discussions was useful. However, we were aiming to evaluate the cost-effectiveness of each intervention as it stood, not its potential cost-effectiveness conditional on other interventions utilising the information gain (as this would require greater knowledge of the likely behaviour of a whole group of people/organisations working in a particular sub-field), so we didn’t factor information value very heavily into our decision-making. This is also reflected in the fact that information value was given a smaller weighting than other considerations.
- SoGive has essentially been employing worldview diversification for some years now, in that our research and grantmaking have been focused on a mixture of things, including longtermist and global health and wellbeing (aka “neartermist”) worldviews. One of our learnings from this round was that we had not clearly defined in advance how this applied to our grant round, and this made some of our decisions harder. For example, our worldview diversification approach could have meant spreading our grants across different worldviews. Alternatively, we might have decided that since part of the motivation was to respond to the need for more longtermist grantmaking opportunities in particular, our grantmaking should be biased (wholly?) in a longtermist direction. In the end we decided not to actively introduce this bias; however, our decision-making at the time was harder because we hadn’t settled this in advance.
- We referred back to useful discussions of other EAs’ benchmarks for dollars spent per reduction in existential catastrophe (see the question and comments here for example), although this inevitably meant that any vaguely plausible x-risk-affecting project dramatically exceeded our SoGive Gold Standard benchmark.
- Scrutinising theories of change across the board will help rule out more projects, but won’t solve the longtermist vs neartermist problem, although we might expect ideas like delivery confidence (binary vs. step change) and information value to weigh more heavily in favour of neartermist projects on average.
Would we run SoGive Grants again?
- This view is still tentative; at the time of writing we have not yet liaised with the donors we work with.
- Our tentative view is that we would run this again next year, but take a slightly lighter-touch approach, given that the extra time spent assessing applications did not seem to add much extra value.
4. Advice for people considering running their own grants program
- Questions we wish we’d asked in the application form. We chose to have an application form which was identical to that of EA Funds. This was a deliberate choice – we wanted to make life easier for applicants. However the following questions came up in every call/conversation, so we could have saved ourselves some time if these questions had been asked from the outset:
- Asking for an explicit theory of change
- Asking about the metrics projects would use to judge whether they had succeeded or should be shut down - we found questions along these lines quite illuminating
- Asking about downside risk
- Giving high quality feedback seems like a good and useful thing to do; it helps talented individuals update and re-route on the parts of their projects that are the weakest. A fair few applicants thanked us for providing feedback and noted that funders rarely provide any feedback at all.
- We committed early on to trying to provide as many applicants as possible with honest and high quality feedback as to why we weren’t funding them, and indeed did provide feedback to all applicants. We understand why most grantmakers don't do this. It’s quite challenging to give honest feedback without offending and it takes a lot of emotional energy (EA is a small community and nobody wants to make enemies). It might also create the expectation of a continued conversation around the funding decision that funders might not have the time or energy to engage in.
- One way to reduce the load on funders might be to decide to send relatively honest feedback but not engage in further discussion, e.g. make a blanket rule of not responding to the email after the feedback. This is what I (Isobel) ended up doing and I felt it reduced the pressure on me.
- As a funder you can take a more “active” or “passive” approach. An active approach means that you are more actively involved in supporting or advising the organisation that you fund, e.g. through informal mentoring or a formal position on the board. An active approach can be valuable. For example, it can enable you to fund projects which you believe to fall slightly below the bar for funding, if you believe that extra support could help to bring the project above the bar. In deciding how active or passive to be, you should consider:
- How much time do you have, and how much time do you expect to have in the future? What’s the counterfactual use of your time?
- Do you really have the skills to genuinely improve a project through your advice? It can be very tempting to believe that you do, however there are several biases which could be driving this. In particular, as a funder you will be in a position where it’s hard for people to tell you that you’re not adding value, so you could persist in being unhelpful for a long time without knowing it.
- For these reasons we ended up mostly taking a passive approach, which was unfortunate, as several applications had enormous merit but still fell short of the bar for funding.
- That said, we were able to take a somewhat more active approach in some cases. This was typically for projects where SoGive could advise strategically, or in domains where we have expertise (e.g. charity evaluation and research methods).
- You might also want to add some considerations around the scalability of projects: if you have a very large amount of funding to disburse, you might favour projects that can scale well (as you will save greatly on evaluation costs in the future). For SoGive this wasn’t really a consideration.
- Our applications included more projects than we anticipated which were essentially “tooling up”. By “tooling up” we mean projects that improve the tools available to everyone, for example projects that allow you to build better social movements or improve productivity. Arguably most of the benefits of these projects derive from their ability to help people who are doing high impact work to make the world better. Indeed it’s possible for such tools to be used by those whose work is harmful.
- When thinking about whether or not it was only value-aligned actors benefiting, we tended to be relatively lenient on this consideration in cases where the tooling up work was fairly "direct" or "bilateral" (e.g. providing a service such as consulting) -- given that for-profit companies often have to work hard to chase sales for such services, we expect that such applicants would have been unlikely to get clients accidentally. This leniency was also based on the view that such applicants were likely to be able to help other people from within the EA community, and not “accidentally” gain clients from elsewhere. Implicit in this is an assumption that other people within the EA community are people whom we would be happy to see helped. In light of recent revelations relating to FTX, we have not yet reviewed this assumption. In cases where the “bilaterality” assumption didn’t apply, it became a more material consideration.
- For an existing detailed description of this problem read the “Innovation Station” section of Zvi’s post on their experience of being a recommender for the Survival and Flourishing Fund.
- One of the considerations in the SoGive criteria is the question: "Is philanthropy the right way to fund this work" -- grantors would do well to consider this question. Arguments in favour of leniency include the fact that the EA ecosystem for impact investing is not currently well developed (despite some attempts to remedy this). Counterarguments include the fact that part of the reason why this ecosystem doesn't exist could be that philanthropy has crowded it out.
5. Our grantee payouts
Below are listed the 6 grantees we recommended (listed with their permission and in alphabetical order).
Doebem
- This is a Brazilian effective giving platform; you can find their website here. We recommended they be given £35,000 to continue the professionalisation and scaling up of their work. We are cautiously excited about their giving platform, which if successful has the potential for a substantial multiplier (for example, see Effektiv Spenden - a German organisation doing similar work, who have reported a multiplier of 11-91). In our view, there were execution risks around their strategy as a nascent organisation, and we also felt that their research and evaluation criteria had gaps. As such, we decided to take on a more consultative role in helping them improve and expand their local charity analysis and research. The support and advice they will receive, both from ourselves and from other groups in the EA community, was sufficient to give us confidence that they were worth funding.
Effective Institutions Project
- This is a global working group that seeks out and incubates high-impact strategies to improve institutional decision-making around the world; you can find their website here. We recommended they be given £62,000 for their EIP Innovation Fund (their regranting program) aimed at discovering and supporting excellent initiatives to increase global institutional effectiveness. The rationale for this grant was primarily about the network building benefits it would provide, allowing EIP to better search for high impact opportunities in the institutional decision making space. While we believe that this grant will help them to build a strong network, we encourage EIP to communicate more details about their strategy for identifying opportunities and criteria for regranting the funds.
Founders Pledge
- Founders Pledge is a global nonprofit empowering entrepreneurs to do the most good possible with their charitable giving. They equip their community members with everything needed to maximise their impact, from evidence-led research and advice on the world's most pressing problems to a comprehensive infrastructure for global grant-making, alongside opportunities to learn and connect. We recommended they be awarded £93,000 to hire an additional climate researcher based on their existing work being high quality (see here and here). Furthermore, we have had significant interactions with Johannes Ackva (who leads this work) and the FP team over the years, and this helped us to gain confidence in the quality of their research. Our confidence was enhanced by hearing evidence from organisations recommended by Founders Pledge that their research had directed substantial amounts of money towards them.
Jack Davies
- Jack is a researcher; we recommended he be given £30,000 to run a research project embedded in GCRI (Global Catastrophic Risk Institute) trying to improve and expand upon a methodology for scanning for unknown and neglected existential risks. We felt that Jack has the relatively rare mix of qualities necessary to push through more entrepreneurial style academia, as well as being a competent researcher. We believed that the concept was the most exciting component of this application; we judge the field of existential risks to be nascent enough that a concerted effort to explore unknown and neglected existential risks could yield significant and valuable insights. The original application was for this to be set up as a new organisation; we judged, and Jack agreed, that proceeding with this work in the context of an existing organisation would be more effective, allowing Jack to anchor his work alongside colleagues.
Paul Ingram
- We recommended they be given £21,000. This enables them to run, and disseminate the results of, a nuclear polling project investigating how knowledge of nuclear winter affects public support for nuclear armament. The window of opportunity seems highly relevant given the conflict in Ukraine, and the current state of knowledge amongst many policymakers around nuclear winter seems fairly poor, so this is a chance to exploit some lower-hanging fruit.
- We also helped make this project more fundable by working with the applicants to bring the costs down, using some of SoGive’s in-house market research expertise.
Social Change Lab
- Social Change Lab conducts research into different types of social movements, their effectiveness, and what makes them successful; you can find their website here. We recommended they be given £18,400 to cover 2x FTE for 2 months, with the possibility of more funding depending on the quality of output from some of the research they’re finalising.
- There’s already a large sum of money spent on social movements in farmed animal welfare and climate change, and we expect some substantial amount to be spent on longtermist activities (e.g. lobbying) in the coming years due to concerns about various existential risks. Given the neglectedness of this kind of research within EA spaces, this work seems valuable if it can improve the allocation of funds around political lobbying, influencing policy, and disseminating ideas to the public.
Closing comments/suggestions for further work
- If any potential donors reading this want their funding to contribute to the funding pool for the next round of SoGive grants, then please get in touch (isobel@sogive.org).
- We came away thinking that there is definitely space for more granting programs within the EA ecosystem, especially those which can provide candidates with high quality feedback. (EDIT: this sentence was written before recent revelations about FTX)
- Although there may be challenges to founding new incubation programmes (e.g. in terms of finding the right talent, funding, and focus), it still might be worth exploring further. The evaluation of the year-long Longtermist Entrepreneurship (LE) Project digs into both the potential pitfalls and the need for this kind of work, and during this process the benefits of nurturing and growing new ideas - rather than shunning green shoots - became apparent to us.
We’d love to talk privately if you’d like to discuss the more logistical details of running your own granting program. Or if you’d like to contribute to the funding pool for the next round of SoGive grants, then please get in touch (isobel@sogive.org).
Appendix A: Our rating system and a short evaluation
In this appendix we evaluate our evaluation process. As stated previously, we went relatively heavy touch when examining grant applications. This was because we thought there may be some cases where a highly impactful project might not be well communicated in its application, or where we might get materially valuable extra information about (e.g.) management quality from video calls; we weren’t sure at which point we would hit diminishing returns from investing time to investigate grants. We tracked our perceptions of grants over time (see below), to see how much our initial impressions changed upon further research and conversation with grant applicants, and in general found they didn’t shift too much with further research. This will also prove useful if we run granting again, to see how future rounds match up in terms of assessed potential/quality.
N.B. The sample size is very small (26 applicants), so one should be careful not to over-rely on the obtained results/insights.
Skim Rating
After we initially read the applications, everyone who was reviewing a specific grant was asked to rate the application from 0 to 3 (0 = don’t fund, it’s not even worth doing any further evaluation; 1 = unlikely to fund, but maybe there could be promising aspects; 2 = it’s possible we would fund with more information; 3 = extremely strong application).
- All grants which received an average rating lower than 1.5 in the initial skim neither progressed to call 2 nor received funding.
Multi-variable rating (post call 1)
After we conducted our first video call with the applicants, everyone who was in the call was asked to rate the application on:
- Overall rating from 0 to 10
- Importance (0 to 16)
- Tractability (0 to 8)
- Neglectedness (-2 to 12)
- Delivery (0 to 8)
- Delivery is your confidence level in how likely you think the applicant is to achieve their proposed outputs and deliver on what they promise to. See section 3 in the main text for a further explanation of delivery.
- Information Value (0 to 6)
- Information value covers how useful this information would be for other contexts and potential feedback loops. See section 3 in the main text for a further explanation of information value.
The graphs below show the sum of the above scores; the highest possible score would be 58.
- All grants with an average score lower than 33 neither progressed to call 2 nor received funding.
Overall rating (post call 1)
- The graphs below look only at the overall rating given after call 1, which was a single rating from 0 to 10.
- On evaluating our rating system, we noticed that all grants with an overall score lower than 5 neither progressed to call 2 nor received funding, and all grants with an overall score higher than 7.5 both progressed to call 2 and received funding. This is useful information if in future rounds we wanted to introduce a hard rule around how applications progress through our granting round (or just to measure the relative quality of various granting rounds); a rough illustration of these cut-offs is sketched below.
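As an illustration only, the sketch below re-implements the staged cut-offs described above (skim rating, then the post-call-1 multi-variable total, then the overall rating) as a simple Python function. The field names and the example applicant are hypothetical, the exact weightings mentioned in section 3 are not reproduced (so the unweighted total here won’t exactly match the maximum quoted above), and the thresholds (1.5, 33, 7.5) are patterns we observed in this round rather than hard rules we actually enforced - scores were always just one input to a holistic team discussion.

```python
from dataclasses import dataclass

# Thresholds observed in this grant round (descriptive, not rules we enforced):
# - average skim rating < 1.5: never progressed to call 2 or received funding
# - average multi-variable total < 33: never progressed to call 2 or received funding
# - overall rating (post call 1) > 7.5: always progressed and received funding
SKIM_CUTOFF = 1.5
MULTI_VARIABLE_CUTOFF = 33
OVERALL_FUNDED_CUTOFF = 7.5

@dataclass
class Application:
    name: str
    skim: float               # 0-3, averaged across reviewers
    overall: float            # 0-10, post call 1
    importance: float         # 0-16
    tractability: float       # 0-8
    neglectedness: float      # -2 to 12
    delivery: float           # 0-8
    information_value: float  # 0-6

    def multi_variable_total(self) -> float:
        # Unweighted sum of the post-call-1 ratings listed above.
        return (self.overall + self.importance + self.tractability
                + self.neglectedness + self.delivery + self.information_value)

def classify(app: Application) -> str:
    """Return how an application would have fared under the observed cut-offs."""
    if app.skim < SKIM_CUTOFF:
        return "screened out at skim stage"
    if app.multi_variable_total() < MULTI_VARIABLE_CUTOFF:
        return "did not progress past call 1"
    if app.overall > OVERALL_FUNDED_CUTOFF:
        return "progressed to call 2 and funded"
    return "borderline: progressed on a holistic judgement call"

# Hypothetical example applicant, for illustration only.
example = Application("Example project", skim=2.0, overall=7.0, importance=12,
                      tractability=6, neglectedness=8, delivery=5,
                      information_value=4)
print(classify(example))  # -> "borderline: progressed on a holistic judgement call"
```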
Mean ratings
- The table below contains the mean scores for each of the 3 metrics mentioned above.
| Metric | Progressed to call 2: Yes | Progressed to call 2: No | Received funding: Yes | Received funding: No |
| --- | --- | --- | --- | --- |
| Skim rating | 1.86 | 1.48 | 1.83 | 1.52 |
| Multi-variable rating | 36.86 | 27.57 | 36.92 | 28.35 |
| Overall rating (post call 1) | 6.57 | 4.71 | 6.42 | 4.96 |
How useful was each stage of the process?
The chart above tracks applicants' progress through our evaluation system. It suggests that both the initial skim and the first video call provided lots of discriminating information, whereas the second video call offered sharply diminishing returns in terms of gaining more decision-relevant information. As such, if we run SoGive Grants again, we might not run a second video call round.
Interpretation of results
- The reason we evaluated our scoring system was that we weren’t sure how informative our attempts at rating the applications on the extra dimensions (ITN+) listed above were (e.g. whether or not the signal-to-noise ratio was sufficiently high for it to be worth rating in such granular detail).
- However, it seems that the more granular multi-variable rating was the best predictor of the funding decision (regression analysis confirms this); a toy version of this kind of regression check is sketched after this list. Amongst the 6 grants selected for funding, 5 were among the 6 best grants according to this metric (those with an average multi-variable rating higher than 33). The overall rating was a much noisier metric, since it did not predict the outcome well for grants scored between 5 and 7.
- Given the diminishing marginal returns observed from the second video call, if we run SoGive Grants again, we might not run a second video call round.
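For anyone wanting to run a similar check on their own grant round, here is a minimal sketch of the kind of regression analysis referred to above: fitting a logistic regression of the funding decision on each rating and comparing how well each one separates funded from unfunded applications. The numbers below are invented for illustration; our actual scores and analysis are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative, made-up data: one row per application,
# columns = [skim rating, multi-variable total, overall rating post call 1].
X = np.array([
    [1.0, 22, 3.5], [1.3, 25, 4.0], [1.5, 28, 5.0], [1.8, 30, 5.5],
    [2.0, 31, 6.0], [1.7, 29, 5.0], [2.2, 32, 7.0], [2.3, 30, 6.0],
    [2.0, 36, 6.5], [2.5, 38, 7.5], [2.8, 40, 8.0], [2.6, 37, 7.0],
])
funded = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1])  # 1 = funded

# Fit a separate univariate logistic regression per metric and compare
# how well each one separates funded from unfunded applications.
for i, label in enumerate(["skim rating", "multi-variable rating", "overall rating"]):
    model = LogisticRegression().fit(X[:, [i]], funded)
    accuracy = model.score(X[:, [i]], funded)
    print(f"{label}: in-sample accuracy = {accuracy:.2f}")
```

With only 26 applications in our actual round, in-sample accuracy of this kind is a rough indicator at best, which is why we treat the result as suggestive rather than conclusive.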
- ^
It’s explained in more detail here, but essentially the bar is £5000 or less to save a life.
- ^
Based on the guesses provided by Linchuan Zhang here, the marginal cost-effectiveness of the LTFF is a 0.01% to 0.1% reduction in existential risk per billion dollars. If this is true, a $100k project would meet the bar if it decreased existential risk by at least 10^-6 % to 10^-5 %.
- ^
SoGive’s core hypothesis refers to a previous strategy of SoGive around selling the idea of effective giving to the general public and seeing how much interest we got. It entailed directing people to our website which has lots of UK charities reviewed on it and tracking whether or not this analysis changes their donation intentions and patterns, with the plan being to conduct more thorough analysis of which parts of the website and analysis foster the greatest change in behaviour.
I really appreciate that you not only gave feedback to your applicants, but also included common pitfalls in this article!
Wow thanks so much for this effort - as someone who runs a small charity, it's so encouraging to see smallish EA aligned organisations getting a look in for some funding and going through this great process. I have a couple of comments :).
1. As someone working in a global health charity, I often find it strange how little weighting delivery is given in effective altruism in general. There are a million good ideas that could have great impact; what matters more is whether the intervention will happen or not. It almost feels like delivery could be a multiplier for the other scores rather than a smaller score on its own, or at least it could have a higher weighting maybe? Does the fidelity of all the other scores not depend in a sense on the project actually playing out as planned?
2. I also have questions about how well importance, tractability and neglectedness translate as measures for rating an intervention, when I think they emerged in effective altruism for rating a problem. Were the judges using these criteria to rate the problem being addressed or the solution itself? For example, on neglectedness, some of the grantees (the nuclear winter one, the existential risk one) might be the only people doing that exact thing to contribute to the issue (say a score of 10/10), while the problems themselves might be neglected but less so (e.g. 7/10).
3. (Selfish question!) Do you know of other EA organisations or grantees doing anything vaguely similar - smaller grants to smaller organisations? Is there any online database or list on the forum of EA aligned donor orgs?
Thanks so, so much - I found your whole process and system very interesting and informative - must be the most transparent grantmaker of all time ;). Was very encouraging
1. This is a good point, I hope that we weighted heavily enough on delivery but it's not certain. I imagine that sometime next year when we review the progress and impact of grantees this will be something we consider more thoroughly, and will adjust accordingly.
2. Yep - I should have been more specific, the I and N were applied to the problem area as a whole and the T was applied to the proposed intervention. In hindsight, maybe we could have weighted this more heavily in favour of the actual intervention being assessed. This was in part exacerbated by us taking a sort of worldview diversification approach and not having a specific cause area focus. I imagine more tailored funders avoid this problem as they pick a cause area they deem to be important ahead of time and then are only evaluating on the merit of the intervention, whereas we had to incorporate assessments of both the problem area and the proposed project.
3. Hmm - unfortunately not really in the global health space. The Effective Thesis database here has some sources of funds I hadn't heard of, and the funding opportunities tag might be useful, but they tend to be more longtermist focused. If you message me with details of your project then I'd be happy to think about people I could connect you with.
Post summary (feel free to suggest edits!):
SoGive is an EA-aligned research organization and think tank. In 2022, they ran a pilot grants program, granting £223k to 6 projects (out of 26 initial applicants).
The funds were sourced from private donors, mainly people earning to give. If you’d like to donate, contact isobel@sogive.org.
They advise future grant applicants to lay out their theory of change (even if their project is only one small part of it), reflect on how they came to their topic and whether they're the right fit, and consider downside risk.
They give a detailed review of their evaluation process, which was heavy touch and included a standardized bar to meet, an ITN+ framework, delivery risks (e.g. does achieving 80% of the project deliver 80% of the good?), and the information value of the project. They tentatively plan to run it again in 2023, with a lighter touch evaluation process (extra time didn't add much value).
They also give reflections and advice for others starting grant programs, and are happy to discuss this with anyone.
(If you'd like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)