One of the things that surprised me when developing potential interventions for the Shrimp Welfare Project was how difficult it was to find a clear pathway from A (wanting to help shrimps cost-effectively) to Z (having a demonstrably impactful intervention scaling up).
As a result of this, I spent a lot of time reading up on and learning about Programme Development Methodologies - trying to Frankenstein something together that I thought would make sense for us. I want to share this work in case it’s useful to other NGOs (in particular for NGOs in contexts similar to ours). I hope it’s useful to you 🙂
A quick note - though this Roadmap is written so as to outline the steps from A to B to C, etc., in reality some of these steps may run in parallel (or at least overlap in some ways). For example, one member of your team may be conducting an Evidence Review while others are preparing to do some on-the-ground Needs Assessment work. In particular, throughout the Roadmap you’ll likely continue to revisit and reflect on your Theory of Change, as well as update and refine your Impact Monitoring.
I also wrote some "Appendix posts" to complement this one that some people may find useful:
- Programme Development Methodologies - Summaries of all the existing methodologies I used as a reference when putting this Roadmap together
- Decision-Making Tools - These can be referred to throughout the Roadmap when a decision needs to be made
- Creative-Thinking Tools - Useful tools to help you brainstorm
1. Key Decisions - The most important decisions we’ll make, and how to make them
There are some useful Decision-Making and Creative-Thinking tools that it makes sense to learn early on, as they can be used throughout the roadmap - not least to help you with some of the most important decisions you’ll make on the path to impact, such as defining your values and choosing your career area based on personal fit. If starting a nonprofit is the best fit for you, then we can move on to deciding on a charity idea, finding a co-founder and selecting your (initial) implementation country. We can then begin to more concretely understand our path to impact through an (initial) Theory of Change.
2. Theory of Change - How and why the program will work
Now that you’ve made your key decisions, you can move on to defining your potential program. You’ll do this by first outlining some Project Objectives, before creating your first (likely somewhat broad at this point) Theory of Change diagram - sketching out the Activities you will undertake, which lead to your project’s Outputs, resulting in the overall positive Outcomes of your project, which ultimately achieve Impact. This will provide some clarity around the impact you intend to make, as well as the pathway to that impact. It’s likely, though, that there are some gaps in your knowledge and some uncertainties and assumptions in your Theory of Change - which brings us to an Evidence Review.
3. Evidence Review - A critical review of secondary sources related to our program
We now have an idea of how we intend to make an impact. It’s time to review any evidence that could help us more concretely answer some of our uncertainties and clarify our assumptions. To do this, we’ll undertake a literature review to help us answer these questions (i.e. What interventions have been tested or studied in relation to this program? Under what conditions were they implemented? Were they effective? Why or why not?). Still, however, we will need to get a better understanding of our chosen context, so we’ll go on to undertake a Needs Assessment.
4. Needs Assessment - Seeking out perspectives from the populations we aim to help
We now have a good overview of the relevant evidence in our field, but it’s time to get a deeper understanding of the context we aim to work in. This will involve gathering inputs and perspectives from the community, and more specifically, helping us (again) answer any uncertainties and clarify any assumptions within our Theory of Change. This will involve detailing what we want to learn, and how to gather data to support this. After your Needs Assessment is complete, you’ll be moving on to developing your intervention.
5. Intervention Development - Developing our ideas for theoretically impactful interventions
We now know about as much as we could hope to before actually trying stuff out in the field. So it’s time to start developing our actual intervention ideas. We’ll begin by brainstorming all the theoretically impactful interventions there could be (based on our knowledge so far). This can be broken down into four stages - 1) defining and understanding the problem and its causes, 2) clarifying which factors are malleable and have the greatest scope for change, 3) identifying how to bring about change: the change mechanism, and 4) identifying how to deliver the change mechanism(s). After this, we should have a bunch of (ideally 10+) ideas that could theoretically be impactful. We’ll then shortlist our top 3-5 ideas and test them in the field.
6. Viability & Impact Testing - Testing the efficacy and effectiveness of our most promising theoretical interventions
It’s now time to develop our Implementation Plan(s), outlining how we intend to do small-scale tests (and refinements) on our shortlist of theoretically impactful interventions, in order to ensure they are actually viable in the context we’re working in. Depending on our interventions, we may have a number of potential deliverables that accompany the plan. This will also likely be the point where you start working on your Impact Monitoring systems in earnest. When we’re ready to go, we’ll start doing our small-scale tests on the most promising interventions, by conducting our Viability Testing (these are small scale, and designed to be adjusted as they go). After this, we’ll be doing our Impact Testing in order to test which of our most promising (and viable!) intervention(s) are the most impactful.
7. Growth - Investing in rigorous evaluation or implementation of the intervention
Congratulations! You’ve completed your “Explore” phase. You now have an impactful intervention, and can either invest in a rigorous evaluation of it (i.e. an RCT or Quasi-Experimental Design) or grow your organisation around it. You’ll likely be spending this time building up your organisational capacity and relying on your Impact Monitoring systems to continuously improve your internal learning (and external accountability).
8. Scale - Expanding our proven intervention to scale
Now that you’ve had the time to rigorously evaluate (or demonstrate the effectiveness of) your intervention, you might reach the point where it’s time to scale. In which case you’ll need to detail your scale strategy, then develop your scale model, before finally, implementing your scaled program.
1. Key Decisions
The most important decisions we’ll make, and how to make them.
At the outset of your path to impact, you will have a number of key decisions to make, such as what your broad charity idea might be, or how to pick an intervention country. This stage is intended to present some guidance around how you might end up answering some of these key decisions.
Before we start though, I wrote an Appendix post with decision-making tools that should help throughout the roadmap. It’ll likely be useful to refer to throughout the roadmap (which is why it’s a separate post), but I’d recommend reading through it first if you can.
[Links: I'll try and share useful further reading at the end of each sub-section. For this first section, most of these will link to resources by Charity Entrepreneurship. In particular, these Key Decisions (and Decision-Making Tools) are discussed in depth in the CE Handbook - How to Launch a High-Impact Nonprofit. This also just seems like an opportune place to recommend applying to the Incubation Program if you're interested in this kind of stuff :)]
1.1. Values
You may already have a good sense of this, but I found working through this and trying to concretely articulate my values to be a worthwhile process. Being able to articulate your values will help with the rest of the decisions in this section.
[Links: Good resources on this can be found within the Know Your Values article on the Good Enough website.]
1.2. Career Area
The first of your big key decisions is going to be your career path. Is founding a charity the best way to go? Or do your personal fit and values suggest a different career path is optimal? Even if you’re pretty set on the idea of developing an idea and seeing it through to fruition (as is the intention of this Roadmap), Non-profit Entrepreneurship might not be the best fit for you (or your idea) - instead, For-Profit Entrepreneurship or Social Entrepreneurship (a double bottom line) might be a better choice. Additionally, there are alternatives, such as working at an existing Effective Altruism organisation, or in Research, Communications, or Policy, or Earning-to-Give, or developing Career Capital.
[Links: Your best bet here is probably going to be Career Planning Tools, such as those by 80,000 Hours, Animal Advocacy Careers, and Probably Good. Additionally, a good video on this is this talk and Q&A by Charity Entrepreneurship: “Impactful opportunities around and adjacent to charity entrepreneurship”]
1.3. Charity Idea
I think this next step would also make sense if your Career Area wasn’t to found a charity (or a new intervention), but I’m not certain of this, and the language of the rest of this roadmap is written with the assumption that you would be starting a charity.
If you don’t have this already, you’ll likely need to go through a process to determine your impactful idea (there are also "pre-vetted" cause areas, such as those identified by Charity Entrepreneurship or CEARCH). Peter Wildeford breaks down how this decision can be made for the question of “What Charity to Start?” using the Multi-Factor Decision-Making tool:
1. Come up with a well-defined goal
2. Brainstorm many plausible solutions to achieve that goal
3. Create criteria through which you will evaluate those solutions
4. Create custom weights for the criteria
5. Quickly use intuition to prioritise the solutions on the criteria so far
6. Come up with research questions that would help you determine how well each solution fits the criteria
7. Use the research questions to do shallow research into the top ideas
8. Use research to re-rate and rank the solutions
9. Pick the top ideas worth testing and do deeper research or MVP testing, as is applicable
10. Repeat steps 8 and 9 until sufficiently confident in a decision
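The weighting-and-rating core of the steps above (a Weighted Factor Model) can be sketched in a few lines of Python. The criteria, weights, and ratings below are invented placeholders for illustration, not recommendations:

```python
# Minimal Weighted Factor Model sketch. Criteria, weights, and
# intuition scores are made-up examples, not real recommendations.
criteria_weights = {"scale": 0.4, "tractability": 0.35, "neglectedness": 0.25}

# Intuition-based 1-10 ratings for each candidate idea.
ideas = {
    "Idea A": {"scale": 8, "tractability": 5, "neglectedness": 9},
    "Idea B": {"scale": 6, "tractability": 8, "neglectedness": 4},
    "Idea C": {"scale": 7, "tractability": 6, "neglectedness": 7},
}

def weighted_score(ratings, weights):
    """Sum of rating x weight across all criteria."""
    return sum(ratings[c] * w for c, w in weights.items())

ranked = sorted(ideas, key=lambda i: weighted_score(ideas[i], criteria_weights),
                reverse=True)
for idea in ranked:
    print(f"{idea}: {weighted_score(ideas[idea], criteria_weights):.2f}")
```

Re-rating after shallow research (step 8) is then just a matter of updating the numbers and re-running the ranking.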
[Links: Another good resource I’d recommend that discusses this concept is “How to do Research that Matters” by Karolina Sarek.]
1.4. Co-founder
Two co-founders is often the sweet spot. You likely want to find someone who has complementary skills and psychology to yours, as well as similar values. Additionally, you’ll want to find someone you have a strong personal fit with, so that collaborating with them produces better work than either of you could have produced independently.
The best way to know if you work well with someone is to work with them. So ideally you’ll want to work on some discrete projects with different people, to test your fit with them. Finding a co-founder might be tough, but the Charity Entrepreneurship Incubation Program is designed to help you test your fit with a number of different potential co-founders - and the basic concepts should apply outside the context of an Incubation Program like this (I also promise I’ll stop talking about CE and the Incubation Program so much after this first section, but they really are the experts at this first stage of the Roadmap).
1.5. Country Selection
We’re now at the final step of our Key Decisions - it’s time to select (or narrow down) your implementation country. This is not set-in-stone, though bear in mind that once a location is selected, it can be difficult to change, and location-specific knowledge and skills that you invest in building may not be directly transferable to other contexts.
The process of determining a country is both practically decision-relevant (it gives you a place to start) and helps you think through the factors to consider when selecting an implementation country (such as the scale, neglectedness, and tractability of the problem in that country - what proxies will you use to assess these, and how will you answer those questions?).
2. Theory of Change
How and why the program will work.
A Theory of Change (ToC) outlines the logic behind a program and explains how it is expected to bring about impact. It demonstrates how the program's activities and use of resources will lead to changes in behaviour and improvements in the lives of people (or, in our case, shrimps). This theory is an important foundation for any development program.
The Theory of Change should be referred to (and if needed, updated) often. It should function as a living document. Ideally, it should be returned to after every subsequent (major) stage in this roadmap (i.e. Evidence Review, Needs Assessment, Intervention Development etc.) as at every stage you’ll be learning more and more, and those insights can and should be reflected in your Theory of Change.
[Links: New Philanthropy Capital's "Theory of Change in Ten Steps" is a deep dive into this topic.]
2.1. Project Objectives
Project Objectives detail a project’s intended results, which contribute to improving A, B, C, for X, Y, Z. Developing them allows you to orient yourself (and the team) to ensure you’re on track, motivate your team, and demonstrate your work (to external stakeholders and funders), and it aids your Impact Monitoring down the line. They become the foundation for your project, and will ultimately likely form the “Impact” section of your Theory of Change diagram, which you can then work back from to get a more grounded understanding of your project’s Activities.
As with the Theory of Change diagram (in the next section), the Project Objectives will likely not remain stable over time, and should be regularly revisited and updated if required. To quote the Social Impact Navigator: "Addressing project objectives is not simply a one-time task limited to the planning phase. Rather, this task should be regularly revisited. After all, the needs of the target group or the project context may change over time...".
2.2. Theory of Change diagram
The Theory of Change diagram links (visually) how we expect to achieve Impact through our Activities (based on theory and evidence, such as: academic evidence, lived or professional experience, and sound reasoning).
There are a number of different ways to do Theories of Change, and they can be as complex or as simple as you need them to be. Often it makes sense to produce a simple, clear Theory of Change, accompanied by a Narrative, to share externally (i.e. with funders), in addition to a more complex, less-polished but detailed Theory of Change, in order to tease out assumptions and get internal synthesis among the team.
Although Theories of Change are often read left to right, it often makes sense to create them right to left. As in “for Y to change, we need X as a precondition”, which is why it often makes sense to start with defining your Project Objectives.
You’re not finished when you’ve drawn out your diagram - there’s still one last step, which is to identify the assumptions that your Theory of Change rests on. It likely makes sense to try and track the links between every box and node in the process, as well as to really articulate them in detail. For example, you might label every step in your Theory with a number, which can then be referenced in a table/spreadsheet, allowing you to write more detail and clarify points about each step without cluttering up the diagram.
Break down every assumption to be as concise as it can be, such as breaking down ‘Users will pay for this model’, into ‘Users are interested in the model’ and ‘Users will have the means to pay for it’.
[Links: IDinsight's Impact Measurement Guide has a really useful Theory of Change section (and case study). Additionally, Step 10 of New Philanthropy Capital's "Theory of Change in Ten Steps" covers how to identify assumptions in more detail.]
3. Evidence Review
A critical review of secondary sources related to our program.
An evidence review is a comprehensive examination of existing research on a specific program. It helps to understand what is known about the issues the program aims to address and its design. An evidence review can help answer questions such as: What interventions have been tested or studied in relation to this program? Under what conditions were they implemented? Were they effective? Why or why not? Conducting an evidence review can provide valuable insights into the program and inform its development and implementation.
A literature review is a systematic review of the relevant sources to look for evidence related to your program. The primary benefit of this is to have decision-relevant data for developing your intervention, but in many cases it makes sense to also write up and publish the literature review so that it can be shared with interested parties.
It’s worth noting that in some cases, you may be able to find a high-quality literature review compiled by someone else. If you’re confident in someone else’s work, this step can be skipped.
- Annotated Bibliography - The first step is to scour the literature for evidence that might be decision-relevant, compiling these references into a bibliography in a single place. During this process, it helps to add key takeaways (or quick write-ups) for each source.
- Weighting Evidence - Then you need to weight the quality, generalisability and usefulness of the evidence you’ve gathered, likely using a Weighted Factor Model:
- Quality - Assessing the quality of this evidence, using criteria such as:
- Risk of bias
- Consistency of effect
- Publication bias
- Generalisability - Is it possible to apply evidence from another context to this program? J-PAL recommends a four-step framework to understand this:
- What is the disaggregated theory behind the program?
- Do the local conditions hold for that theory to apply?
- How strong is the evidence for the required general behavioural change?
- What is the evidence that the implementation process can be carried out well?
- Usefulness - Finally, try to assess whether the evidence enables you to understand:
- The scope of the problem
- The major constraints to solving the problem
- The success of previous attempts to solve the problem
- The links and assumptions in your Theory of Change
- Write-up - Finally, you can write up your findings from previous studies (highlighting the relevance of the evidence to your program, as well as its rigour), along with any remaining evidence gaps (particularly links or assumptions in your Theory of Change).
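As a rough sketch of how the Annotated Bibliography and Weighting Evidence steps can fit together in practice, here is a hypothetical Python structure. The field names, example sources, and equal weighting are all assumptions for illustration, not part of any cited methodology:

```python
from dataclasses import dataclass

@dataclass
class Source:
    """One annotated-bibliography entry with 1-5 evidence ratings."""
    citation: str
    takeaway: str
    quality: int           # e.g. risk of bias, consistency of effect
    generalisability: int  # e.g. via the J-PAL four-step check
    usefulness: int        # decision-relevance to our Theory of Change

    def weight(self) -> float:
        # Equal weights here; adjust to your own Weighted Factor Model.
        return (self.quality + self.generalisability + self.usefulness) / 3

# Invented example entries, not real studies.
bibliography = [
    Source("Doe et al. 2019", "Intervention worked in context X", 4, 2, 5),
    Source("Roe 2021", "Null result under weak implementation", 3, 4, 3),
]

# Surface the strongest evidence first for the write-up.
for s in sorted(bibliography, key=lambda s: s.weight(), reverse=True):
    print(f"{s.citation}: {s.weight():.2f} - {s.takeaway}")
```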
[Links: The Evidence Review section of the IDinsight Guide informed most of the details of this sub-section.]
4. Needs Assessment
Seeking out perspectives from the populations we aim to help.
To be effective, a program must address the needs of the community it serves. Gathering input and perspectives from the community where the program will be implemented allows for a thorough understanding of the problem the program aims to solve. A needs assessment is a tool that helps identify the challenges facing the target population and the broader economic, political, and social context in which the program will operate. Incorporating a needs assessment into program planning can help ensure that the program is responsive to the needs of the community.
Effectively you’re trying to get answers to some key questions in the context you’re trying to work in:
- What do we want to learn? - What questions do we still not have answered? (At this point it’s worth just thinking about the questions you actually want to answer, you might end up asking the actual questions differently in the field, but the first step is to understand what you actually want to learn).
- What assumptions are in our Theory of Change? - The heavy lifting here should already have been done during the development of the Theory of Change, but it’s worth revisiting (and potentially updating) here.
- How well do our assumptions need to hold? - Which of these assumptions are the most important? Which ones are unimportant? It might make sense to prioritise with a matrix grid at this point, assessing your assumptions against your certainty and risk for each.
- What data do we need? - There’s much more detail on this in the Impact Monitoring Appendix. Depending on how robust your data needs to be at this stage, it may make sense to skip ahead to that, to get some tips on collecting high-quality data.
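A minimal sketch of the certainty/risk matrix idea from the list above, with invented assumptions, ratings, and bucket thresholds:

```python
# Hypothetical sketch of prioritising Theory of Change assumptions on a
# certainty/risk matrix; the assumptions and ratings are invented examples.
assumptions = [
    # (assumption, certainty 1-5, risk-if-wrong 1-5)
    ("Farmers will adopt the new practice", 2, 5),
    ("Equipment is available locally", 4, 3),
    ("Welfare improves if water quality improves", 3, 5),
]

def bucket(certainty, risk):
    """Place an assumption in a priority bucket (thresholds are arbitrary)."""
    if certainty <= 2 and risk >= 4:
        return "test first"       # uncertain and high-stakes
    if risk >= 4:
        return "monitor closely"  # fairly sure, but costly if wrong
    return "accept for now"

# Least certain, highest risk first.
for text, certainty, risk in sorted(assumptions, key=lambda a: (a[1], -a[2])):
    print(f"[{bucket(certainty, risk)}] {text}")
```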
The Needs Assessment process will then follow these broad steps:
- Collect data - Likely in a systematised way, such as through surveys, focus group discussions, and in-depth interviews
- Analyse the data - With quantitative data, you’re likely calculating summary statistics and comparing them to your assumptions; qualitative data can be more time-consuming and varied
- Take Action - Such as writing up your findings (likely in a Scoping Report or similar), updating your Theory of Change, or adding any new areas of research for your Evidence Review
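For the quantitative branch of the analysis step, comparing a summary statistic against a Theory of Change assumption might look like this (the survey values and threshold are made up for illustration):

```python
# Illustrative sketch: does a survey's summary statistic support one of
# our Theory of Change assumptions? All values are invented examples.
from statistics import mean

# e.g. farmer-reported willingness to adopt, on a 1-5 scale
survey_responses = [4, 3, 5, 2, 4, 4, 3, 5]
assumed_minimum = 3.0  # assumption: average willingness is at least 3

observed = mean(survey_responses)
holds = observed >= assumed_minimum
print(f"Observed mean {observed:.2f} vs assumed >= {assumed_minimum}: "
      f"{'assumption holds' if holds else 'revisit Theory of Change'}")
```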
[Links: This step of the process largely follows the Needs Assessment section of the IDinsight Guide.]
5. Intervention Development
Developing our ideas for theoretically impactful interventions.
We now know about as much as we could hope to before actually trying stuff out in the field. So it’s time to start developing our actual intervention ideas. We’ll begin by brainstorming all the theoretically impactful interventions there could be (based on our knowledge so far). This can be broken down into four stages - 1) defining and understanding the problem and its causes, 2) clarifying which factors are malleable and have the greatest scope for change, 3) identifying how to bring about change, and 4) identifying how to deliver the change mechanism(s).
5.1. Problem Definition
Define and understand the problem and its causes.
Based on the previous stages of the roadmap, we’re confident that we’ve identified there is a problem that requires an intervention. The next step is to clarify the problem with stakeholders and existing research evidence.
- Definitions - Clear and detailed terms to avoid ambiguity and confusion. Now that you’ve gone through the Evidence Review and Needs Assessment stages, you’ve presumably learned a whole lot about a very niche topic. It’s time to catalogue your definitions somewhere to add clarity to the remaining steps in the intervention development.
- Causal Pathways - What shapes and perpetuates the problem? Having defined the problem, we need to understand the immediate (proximal) and underlying (distal) influences that give rise to it. It is only by understanding what shapes and perpetuates the problem (the causal pathways) that we can identify ways to intervene.
- Current Standards - What interventions/policies exist, and why are they inadequate? This may look very different depending on your cause area. For us, this meant compiling all the Standards we could find, such as the standards of certification schemes, and any standards in place at the government, corporate, or producer level.
[Links: Googling “Causal Pathways” or "Problem Tree" is useful for finding examples, such as this. But the take-home is that drawing out the pathway is really useful at this stage.]
5.2. Scope for Change
Clarify which factors are malleable and have the greatest scope for change.
Most interventions take place within systems and exert their influence by changing relationships, displacing existing activities and redistributing and transforming resources within. Some stakeholders may only change their behaviour if other stakeholders do so too. Interventions that address complex problems through multilevel actions are more likely to maximise synergy and long-term success.
Which of the factors that shape the Problem have the greatest scope to be changed? This could be at any point along the Causal Pathway. It probably makes sense to work through and analyse all of the Causal Factors (i.e. “steps”) identified in the Causal Pathway - what is the importance of the factor? Is there evidence on willingness and ability to change? etc.
5.3. Change Mechanisms
Identify how to bring about change: the "change mechanism".
Once the most promising (and modifiable) Causal Factors have been identified, how do we achieve this change? What are the Change Mechanisms we can use (i.e. the processes that trigger change for individuals, groups or communities)?
- Populations - Which populations benefit from our intervention? In the case of animal organisations, this likely includes both the human and non-human animals. Try to be specific if possible, including things like geography, tractability, economic factors etc.
- Change Mechanisms - Change mechanisms are the key actions or processes that lead to change. Again, try to be as specific as possible to tease out important nuances - which causal factors would this affect? By how much? How much would it cost? Is it Scaleable? etc.
- Behaviour Change Strategies - Try to identify strategies that work with your specific Causal Factors, Populations, and Change Mechanisms. There is a really useful framework we can use to help us - SparkWave’s Conditions for Change tool. Effectively this tool breaks down the conditions for change into 10 steps, which themselves are split into three key stages:
- Making a decision to adopt the new behaviour (steps 1-3)
- Performing a number of actions that comprise the new behaviour (steps 4-9)
- Ensuring the continuation of the relevant conditions for success as time passes (step 10)
[Links: Behaviour Change Strategies - SparkWave’s Conditions for Change tool]
5.4. Intervention Design
Identify how to deliver the change mechanism(s).
After identifying the potential change mechanisms, we need to figure out how best to deliver them, which is likely to be target-group and context-specific.
- Intervention Longlist - What are the possible interventions we could test? At this point, the previous three sections should all come together to form a longlist of potential interventions. You should have a good sense of the Problem(s) you want to solve (as well as the Causal Pathways that perpetuate them), the extent to which the Causal Factors are malleable, and what the potential Change Mechanisms are (along with Behaviour Change Strategies for your target Population(s)). From this longlist, you can then use a Weighted Factor Model to shortlist the most promising intervention ideas.
- Intervention Shortlist - Which of our possible interventions are we considering testing? Any ideas that score highly on your Intervention Longlist should end up on your Intervention Shortlist (this will also likely be somewhat of an ongoing, open process - people should feel free to work through the Intervention Development process and add new intervention ideas to your longlist, and if they score well, they should be able to do a deeper dive through the shortlist tool). Effectively, you’ll probably want to do something similar to Charity Entrepreneurship’s “Shallow Dive” stage of their research process - exploring three key areas of the intervention idea (“Evidence Base”, “Cost-Effectiveness Analysis”, “Paths to Failure”), and summarising your findings.
- Program Theories - What does the theory of change look like for this intervention? Program Theories are just Theory of Change diagrams. I just find it useful to separate the idea of an overall “broad” Theory of Change from these more “in-the-weeds”, detailed Programme Theories, as you’re now able to fill in much finer detail, with a specific focus on the intervention you want to test (and on the assumptions that need to hold, or similar Crucial Considerations).
6. Viability & Impact Testing
Testing the viability, and then the potential impact of our most promising theoretical interventions.
Essentially everything up until this point has been part of a somewhat “theoretical” phase of work (all leading up to generating a list of all possible interventions that could be impactful, and whittling these down to the ones which are the most promising - something like 10+ ideas, ideally more).
Now we can move on to the “viability” and “impact” phases:
- With Viability Testing, you’re trying to test the most promising of these theoretical interventions in the real-world, to get an understanding of the actual viability of these interventions in the context you’re working in, and learn as much as you can as you do it (something like 3-5+ ideas).
- With Impact Testing, the interventions that have demonstrated themselves to be both theoretically promising and viable in your context now need to be tested to assess their actual impact, with the aim of then scaling up the most cost-effective intervention (something like 1-3 ideas).
[Links: Viability & Impact Testing is effectively the “Lean Startup” phase of your intervention, so I’d recommend reading the book - the goal is to test your hypothesis, iterate, and achieve validated learning.]
6.1. Impact Monitoring
How will we Monitor, Evaluate, and continuously Learn from our interventions?
We can now use our Theory of Change / Programme Theories to identify key questions for our Impact Monitoring:
- Activities - Are you working on what you said you’d work on?
- Outputs - What tangible outputs did you produce?
- Outcomes - Did your outputs achieve their intended short-term result?
- Impact - Did your outcomes improve the world?
- Cost-Effectiveness - How much good did your impact produce per dollar?
Essentially, we can use our Theory of Change diagram to understand what data we should collect in order to assess our impact. As a result, there are two key outputs we’ll create at this stage: a Monitoring & Evaluation system, and our Cost-Effectiveness Analysis (both of which are discussed in more detail in the Decision-Making Tools Appendix post). You’ll likely begin with a simple version of each, and reflect and iterate on them after each subsequent stage of the roadmap.
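The Cost-Effectiveness question in the list above can be sketched as a back-of-the-envelope calculation. Every figure and variable name below is an invented placeholder, and a real Cost-Effectiveness Analysis would be considerably more careful:

```python
# Back-of-the-envelope cost-effectiveness sketch; all figures are
# invented placeholders, not real programme data.
total_cost_usd = 50_000              # programme spend in the period
farms_enrolled = 40                  # tangible output
shrimps_per_farm = 100_000           # assumed outcome reach per farm
welfare_improvement = 0.10           # assumed fraction of suffering averted

shrimps_reached = farms_enrolled * shrimps_per_farm
impact = shrimps_reached * welfare_improvement   # "welfare-adjusted" shrimps
cost_per_unit = total_cost_usd / impact

print(f"Shrimps reached: {shrimps_reached:,}")
print(f"Cost per welfare-adjusted shrimp: ${cost_per_unit:.4f}")
```

The point of even a crude version like this is that it forces every link of the Theory of Change (Outputs, Outcomes, Impact) to carry an explicit number that Impact Monitoring can later check.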
As a quick aside - your Impact Monitoring systems (much like the Theory of Change) should be referred to (and if needed, updated) often. They should function as living documents. Ideally, they should be returned to after every subsequent (major) stage in this roadmap. From here on out, your Impact Monitoring system(s) are going to be your trusty compass, and should become more detailed (or streamlined) depending on your intervention(s)’ needs.
6.2. Implementation Plan(s)
What is our plan for the interventions we want to test?
Whether you’re testing an intervention as part of Viability Testing, or as part of Impact Testing, you’re likely going to need an implementation plan. The implementation plan clarifies the conditions and resources necessary for successful implementation and the related risks and assumptions.
The key difference to bear in mind when writing the implementation plan for a given intervention is that for Viability Testing the primary aim is the learning value (does this intervention work in this context? Are there any tweaks we can make along the way to improve it?), whereas the primary aim of Impact Testing is to assess the (cost-)effectiveness of our intervention(s). So the plans might vary somewhat depending on that context. For example, the plan for an intervention being tested for Viability will likely be less detailed, and you’ll likely want to build more slack into it, to allow for things to go wrong or for changing the way you’re doing things along the way. You’ll also likely spend less on data collection, or prioritise a few proxy data points during this phase, rather than taking a more focused look at evidence of effectiveness.
The following sub-sections are potential deliverables that could be part of your implementation plan. It’s largely based on the implementation plan we created, and yours might look quite different (depending on the intervention(s) you’re testing). The point is that the implementation plan isn’t the only document you’ll create at this stage; rather, it will be your guide in helping you determine the deliverables that will be required during your testing phase.
- Intervention Delivery SOP - The Standard Operating Procedure (SOP) outlining the delivery of the intervention.
- Data Requirements - At this point you’re just trying to get an understanding of what data you need - the “What” of the data (i.e. to get the depth and breadth “just right” in order to make a decision or understand a crucial consideration).
- Data Collection SOP - The “Who, When, and How” of the data. Your SOPs should make clear:
  - The breakdown of the data required for each form
  - The number of people required for each aspect of each form
  - The equipment required for each form
  - The frequency of filling in each form
  - The expected time to complete each form
  - Supplementary information that’s useful for the data collector to have (e.g. detailed, clear instructions on how to take a water quality reading)
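One (purely hypothetical) way to make sure none of the points above are left implicit is to capture them as a structured record per form. The field names and example values below are illustrative, not a prescribed schema:

```python
# A small record per data-collection form, covering the "Who, When, and How".
# All field names and example values are illustrative.
from dataclasses import dataclass

@dataclass
class FormSOP:
    name: str
    data_collected: list      # the breakdown of data on this form
    people_required: int      # people needed to fill it in
    equipment: list           # equipment required for this form
    frequency: str            # how often the form is filled in
    expected_minutes: int     # expected time to complete
    instructions: str = ""    # supplementary guidance for the collector

water_quality_form = FormSOP(
    name="Water quality reading",
    data_collected=["dissolved oxygen (mg/L)", "temperature (°C)", "pH"],
    people_required=1,
    equipment=["DO meter", "thermometer", "pH strips"],
    frequency="twice daily (dawn and dusk)",
    expected_minutes=10,
    instructions="Calibrate the DO meter before the first reading of the day.",
)
```

The same information works just as well as a row in a spreadsheet; the point is that every form has an explicit answer to each question before data collection begins.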
- Purchasing equipment - An output could just be the purchase of necessary equipment, in which case you can likely just link to the budget or receipt etc. (or a document outlining your procurement process, such as why you decided to buy a specific item).
- Hiring staff - Additionally, you may need to plan for an increase in staff in order to carry out your intervention, and this should be planned in (both in terms of time spent recruiting and onboarding, and staff salary).
- Data storage - You should have a clear outline of how the data will be stored. If you’re collecting data electronically, this may be as simple as making sure the data is compiled and backed up to the cloud; if it’s on paper, be clear about who is responsible for gathering and processing the data (i.e. transcribing it into a spreadsheet), ideally with a clear RACI (Responsible, Accountable, Consulted, Informed) in place.
- Data analysis - Who is responsible for analysing the data? What are the expectations (you’re probably trying to answer specific questions in the data, as outlined during your CART analysis, rather than doing a general “exploration” of the data)? By when?
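To illustrate the difference between answering a specific question and general exploration, here is a minimal sketch. The file name, column names, and phases are all hypothetical - the point is that the analysis is scoped to one pre-agreed question:

```python
# A sketch of "answer a specific question" analysis, not general exploration:
# did average dissolved oxygen differ before vs. after the intervention?
# The CSV layout ('phase' and 'dissolved_oxygen' columns) is hypothetical.
import csv
from statistics import mean

def avg_do_by_phase(path):
    """Return mean dissolved oxygen (mg/L) for each phase ('before'/'after')."""
    readings = {"before": [], "after": []}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            readings[row["phase"]].append(float(row["dissolved_oxygen"]))
    return {phase: mean(vals) for phase, vals in readings.items() if vals}

# Usage (assuming the hypothetical CSV exists):
# results = avg_do_by_phase("water_quality.csv")
# print(results)
```

Writing the question down as a function like this (or as a named spreadsheet formula) makes the expectation concrete: the analyst knows exactly what they owe, and by when.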
- Scaling Down SOP - You might also include an SOP for scaling down, in case the intervention is determined to be less effective than hoped and you want to avoid causing unnecessary harm by failing to transition smoothly away from active implementation.
- Report - It probably makes sense to write up your learnings from your viability tests in a report (even if it ultimately remains internal). This both helps to formalise the results (and ensure internal cohesion on what they mean) and allows them to be shared with interested parties to inform other NGOs.
6.3. Viability Testing
Tests and refinements on a small scale.
It may turn out that some of our interventions are not viable (in this context), in which case they should be removed at this stage. As a result, we should prioritise tests that can shed light on multiple major crucial considerations. Additionally, we should aim to anticipate possible unintended effects of the intervention and minimise any that might be harmful.
The point of this step isn’t to unwaveringly stick to the implementation plan; it’s highly likely that small tweaks can be made to your plan along the way, in which case, go ahead and make them. The point of this step is to learn and iterate - we’re going to be doing a robust Pilot next as part of the Impact Testing phase. This is not your pilot, so don’t be afraid to make changes to your intervention at this stage - in fact, that’s the whole point of this stage!
6.4. Impact Testing
Collect sufficient evidence of effectiveness to justify rigorous evaluation/implementation.
Depending on the nature of the results of your Viability Testing, this step could look very different. For example, if you were testing an intervention that:
- Seems promising - Then you might just need a few tweaks: reusing your original Implementation Plan and updating it somewhat based on your learnings (such as tweaking the Programme Theory, increasing the number of beneficiaries, or expanding the Impact Monitoring). In this case, the work you did on Viability Testing for this intervention might naturally evolve into your larger Impact Testing phase.
- Needs to be tweaked significantly - but in such a way that you’re confident the new intervention is viable. For example, you may have been testing a couple of different intervention ideas that weren’t particularly viable on their own, but based on some late-stage learnings, you have strong reason to believe that a new intervention combining the two (or aspects of the two) will be impactful. In such a case, it probably makes sense to create a new Implementation Plan here, with a fresh Programme Theory, SOPs, etc.
This is the phase most people refer to as the Pilot. We now know that the intervention we are testing is viable in the context we’re working in. The goal here is to understand the impact of the intervention, in particular its (cost)-effectiveness. So there should be a stronger focus on a robust plan, which likely will not need to be tweaked/changed (much) once the Pilot has begun. Essentially, the big difference here is that you’re expecting the intervention you’re Piloting to be your organisation’s intervention (or one of them). So you want to replicate, as closely as possible, what this intervention will look like as your future Business-As-Usual (of course, you can, and should, still update your intervention based on new evidence during and even after the Pilot, but the intention is to have already unearthed all the learnings you can, so you shouldn’t have to). You’re likely testing your intervention on a much bigger scale, and the Implementation Plan you create should reflect that. You’re also likely trying to calculate an explicit cost-effectiveness figure for your intervention that captures nuance, so your implementation plan should reflect that too.
It’s quite possible you’ll just be doing a single Pilot here, Impact Testing an intervention you’re pretty confident in. But it’s also possible you have a few interventions that seem promising, in which case you’ll be doing 2-3 pilots (potentially at the same time). In fact, that’s potentially even preferred, as the goal at this point is to figure out what the most impactful intervention is, in order to begin implementation in earnest. Depending on the results of your Pilot(s), you may either decide to move on to implementing the intervention (if successful), move back to the Viability Testing phase (if there is still more learning/tweaking to be done), or even shut down the project altogether. If you’re testing multiple interventions, you may simply find one is the most cost-effective and then scale that one up (in which case, you’re hopefully following a “Scaling Down SOP” for the interventions that will no longer continue).
Investing in rigorous evaluation or implementation of the intervention.
You’ve determined which intervention is the most impactful; it’s time to roll it out. At the beginning of your Exploit phase (I’m not a huge fan of this terminology, but Explore/Exploit does succinctly capture a useful idea), you’ll likely be in a stage of slow, sustainable growth: finding beneficiaries, rolling out your intervention, and increasing your organisational capacity (i.e. staff/equipment/fundraising) along with your growth.
Alternatively, you may determine that the next step is to invest in a Randomised Control Trial (RCT), or a Quasi-Experimental Design (QED). I’m hoping to add more information on these in a future update of the Roadmap.
At this point, your key tools are likely your Theory of Change and Impact Monitoring systems. We essentially want to update our Theory of Change to reflect our new understanding of the intervention after all that testing. From there, we want to make sure that our Impact Monitoring systems are in place, robust, and accurately reflective of our detailed Theory of Change. Then we let the ship take its course, learning and improving as we go.
And for some projects, this might be enough: slow and steady growth until we reach a sort of “impact equilibrium”, likely because your cause area is constrained in some way (e.g. talent- or funding-constrained), or even just because you work on an issue that is neglected and tractable, but not particularly scalable. However, for those organisations working on a potentially scalable solution (for example, by scaling up operations in one country, or attempting to explore replication in other countries), it’s time to think about our scale strategy.
[Links: “Managing to Change the World”, along with the resources and templates they provide on their website - especially if a big part of your Growth phase is building organisational capacity and resilience. Additionally, the “Improving Social Impact” section of the Social Impact Navigator is useful here, in particular the checklist for becoming a Learning Organisation.]
Expanding our proven intervention to scale.
Now that you’ve had the time to rigorously evaluate (or demonstrate the effectiveness of) your intervention, you might reach the point where it’s time to scale. Spring Impact’s “Scaling Impact Toolkit” outlines three key phases for scale: Strategy, Model, and Implementation.
They have released a bunch of tools for the Strategy phase, including the Scale Readiness Diagnostic, Problem Definition, Intended Impact, Design Your Core, Identify Risky Assumptions, Test Risky Assumptions, Sweet Spot for Scale, Measure what Matters, and Doer and Payer at Scale. Their tools for the Model and Implementation phases are still in development.
As these tools are still in development, I'm going to leave this section fairly brief for now, but hope to add to it in a future update of the Roadmap.
[Links: The Scaling sections here follow the path laid out in Spring Impact’s open-source “Scaling Impact Toolkit”.]