
Epistemic status

Medium confidence. Based on five interviews with practitioners and advisors. Likely reflects common bottlenecks in early-stage Indian animal welfare (AW) organisations more than the full sector.

Why this matters for EA

  • Without monitoring, evaluation and learning (MEL), NGOs can’t learn and donors can’t identify the most impactful organisations.
  • India is leverage-heavy: its animal agriculture sector is vast and growing, so even modest improvements in MEL practices could affect millions of animals.
  • Lessons extend beyond India: lightweight, human-in-the-loop AI tools could support MEL in other low-resource settings too.

TL;DR

  • Some Indian AW NGOs track activities (workshops, animals treated) for donors but don't always connect them to outcomes or learning.
  • Theories of Change (ToC) are often implicit, indicators fuzzy, qualitative data underused.
  • Five interviews show cautious optimism: people are interested in AI tools if they’re simple, contextual, and human-in-the-loop.
  • Most promising modules: quick ToC drafts, indicator suggestions, phone-friendly surveys, 2-page donor reports, and tools to surface themes from qualitative data.
  • Prototype tool: I built a small Lovable prototype that bundles these ideas (ToC, indicators, surveys, reports, qualitative analysis). You can try it here. There’s still plenty of room for improvement, and I’d love your feedback!

Recommendations: Start small, build light modules, design for low-resource contexts, measure usefulness by time saved.

Acknowledgments

This project was made possible by the time, candor, and practical wisdom of practitioners working on animal welfare and MEL in India. I’m grateful to the interviewees who shared their experience and constraints so openly: Jamie, Nicoll, and Koushik from The Mission Motor, as well as other actors in the space such as Chetan (donor-side / EA network) and Thomas Billington (Fish Welfare Initiative; The Mission Motor). Any quotes are used with permission.

Many thanks to the Electric Sheep Future Kind fellowship and my brilliant mentor @Max Taylor for thoughtful mentorship and concrete suggestions on scoping and recommendations. I also appreciate the public resources that informed this work, including reports from Open Paws, Vegan Hacktivists, and NPC.

Finally, I’m grateful to Aditya SK (Animal Ethics and Electric Sheep), who provided early reads and pushback on drafts.

All errors are mine.

Executive summary

Some animal welfare (AW) organisations in India are in the early stages of developing their monitoring, evaluation, and learning (MEL) practices. They often collect activity-level data to satisfy donor requirements (e.g. number of workshops hosted) but have yet to develop the tools, frameworks, and capacity to use that data for internal learning or strategic decisions. This gap is particularly stark in small organisations, which have the most room to advance their MEL practices, formalise their theory of change (ToC), and build data analysis capacity.

This project explores how lightweight, accessible AI tools could help AW organisations improve early-stage MEL functions. The focus is on practical use cases: helping organisations articulate their ToC, generate outcome indicators, and build survey or reporting templates, without requiring advanced data science capacity. The report draws on qualitative interviews with MEL support providers and practitioners working with AW orgs in India. Findings suggest strong interest in AI-assisted MEL, but also point to risks if tools are not contextualised, modular, and designed with human oversight in mind.

Recommendations at a glance

1. Start small and practical.

  • Rather than building one large, complicated MEL system, focus on a set of simple tools that organisations can pick up and combine as needed.
    • For example:
      • A helper to draft a one-page “map” of how their work leads to change (Theory of Change); a rough code sketch of this module follows the list.
      • A tool that suggests a few sensible ways to measure progress toward each goal.
      • A short survey maker that works on phones or paper, with skip-logic built in.
      • A quick report writer that produces a clear two-page update for donors or staff.
      • A basic helper that pulls out themes and quotes from interviews or open-ended responses.
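
As a concrete illustration of the first module above, here is a minimal Python sketch of how a Theory of Change helper might turn a few plain-language answers into a prompt for whichever language model an organisation already uses. The function name, template wording, and example programme are my own illustrative assumptions, not part of the prototype.

```python
# Minimal sketch: turn a few plain-language answers into a prompt asking an LLM
# to draft a one-page Theory of Change. The draft is a starting point for the
# team to review and edit; nothing here calls a specific model or API.

TOC_PROMPT_TEMPLATE = """You are helping a small animal welfare organisation in India
draft a one-page Theory of Change. Keep it simple and honest about uncertainty.

Programme: {programme}
Main activities: {activities}
Who changes as a result: {actors}
Long-term goal: {goal}

Produce: (1) 3-5 outcomes linking activities to the goal, (2) the key assumptions
behind each link, and (3) one question the team should check against real data.
"""

def build_toc_prompt(programme: str, activities: str, actors: str, goal: str) -> str:
    """Fill the template with the organisation's own words."""
    return TOC_PROMPT_TEMPLATE.format(
        programme=programme, activities=activities, actors=actors, goal=goal
    )

if __name__ == "__main__":
    prompt = build_toc_prompt(
        programme="Farmer training on fish welfare",
        activities="Workshops, follow-up visits, WhatsApp reminders",
        actors="Smallholder fish farmers",
        goal="Reduced suffering for farmed fish",
    )
    print(prompt)  # paste into any LLM, then review the draft as a team
```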


2. Always keep people in charge.

AI should support decision-making, not replace it. Tools should explain why they are making a suggestion, always ask the user to confirm before saving or sharing, and include small reflection prompts like: “What evidence would show this isn’t working?” or “Can your team realistically track this?”.
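
As a rough sketch of what this could look like in practice, the snippet below shows a confirmation loop: each suggested indicator is shown with its rationale and a reflection prompt, and nothing is kept unless the user explicitly accepts it. The `suggest_indicators` function is a placeholder for whichever model a tool actually uses; the example indicators are invented.

```python
# Sketch of a human-in-the-loop confirmation step: suggestions carry a rationale,
# and only indicators explicitly accepted by the user are kept.

def suggest_indicators(outcome: str) -> list[dict]:
    """Placeholder for an LLM call; returns suggestions with short rationales."""
    return [
        {"indicator": f"% of participants reporting a practice change for: {outcome}",
         "rationale": "Directly tied to the outcome and collectable by phone survey."},
        {"indicator": f"Number of follow-up visits completed for: {outcome}",
         "rationale": "Cheap to track and shows whether the programme ran as planned."},
    ]

def review_suggestions(outcome: str) -> list[dict]:
    accepted = []
    for s in suggest_indicators(outcome):
        print(f"\nSuggested indicator: {s['indicator']}")
        print(f"Why: {s['rationale']}")
        print("Reflection: Can your team realistically track this? "
              "What evidence would show this isn't working?")
        if input("Keep this indicator? [y/N] ").strip().lower() == "y":
            accepted.append(s)
    return accepted  # only human-confirmed indicators move forward

if __name__ == "__main__":
    kept = review_suggestions("Farmers adopt lower stocking densities")
    print(f"\n{len(kept)} indicator(s) confirmed by a human reviewer.")
```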

3. Make tools light and affordable.

Design for real-world conditions: mobile-first, offline-friendly, auto-save, and no-login demos. Collect as little personal data as possible, include built-in consent text, and let users export results in one click to Google Docs, Sheets, or CSV. Surveys should also be easy to run via WhatsApp or SMS.
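
To show how little machinery this needs, here is a minimal sketch of a phone-friendly survey defined as plain data, with simple skip-logic and a one-click CSV export that opens cleanly in Google Sheets. The question IDs and wording are invented for illustration; a real survey would reuse the consent text mentioned above.

```python
# Sketch: a short survey defined as data, with basic skip-logic and CSV export.
# Question IDs and wording are illustrative only.
import csv

SURVEY = [
    {"id": "attended", "text": "Did you attend the workshop? (yes/no)", "type": "yes_no"},
    # Only asked if the respondent answered "yes" above:
    {"id": "practice_change", "text": "Have you changed any practice since? (yes/no)",
     "type": "yes_no", "ask_if": ("attended", "yes")},
    {"id": "which_practice", "text": "Which practice did you change?",
     "type": "text", "ask_if": ("practice_change", "yes")},
]

def run_survey() -> dict:
    answers = {}
    for q in SURVEY:
        dep = q.get("ask_if")
        if dep and answers.get(dep[0]) != dep[1]:
            continue  # skip-logic: question not relevant for this respondent
        answers[q["id"]] = input(q["text"] + " ").strip().lower()
    return answers

def export_csv(rows: list[dict], path: str = "responses.csv") -> None:
    """One-click export that opens in Google Sheets, Excel, or any CSV reader."""
    fieldnames = [q["id"] for q in SURVEY]
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(rows)

if __name__ == "__main__":
    export_csv([run_survey()])
```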

4. Use locally relevant examples.

Provide ready-made templates that reflect the realities of Indian animal welfare work: common species, typical program goals, and example survey questions. Make these resources available in English and Hindi, open-license, and adaptable so organizations can easily reuse and adjust them.

5. Measure whether tools are actually useful.

  • Define success by how much time and effort they save, not by how many features they offer.
    • For instance:
      • Draft a Theory of Change in under an hour.
      • Pick indicators in under five minutes per outcome.
      • Build a survey in under ten minutes.
      • Draft a report in under fifteen minutes.
      • Get at least 70% survey completion with less than 5% errors.

Introduction

To improve the future of animals, the animal welfare movement has a great opportunity: learning to harness the full potential of technology. From basic websites to advanced tools like AI, tech can help organisations work more efficiently, reach more people, and measure what’s working. Yet many groups, especially smaller ones, lack the resources and skills to analyse the data they already collect. The Tech and Data in the Movement report shows this clearly: data is gathered, but often left unused.

India illustrates the problem clearly. Its animal agriculture sector is vast and growing, so even modest improvements could affect millions of animals. But many Indian NGOs are young and under-resourced, with little MEL capacity. This creates a gap: donors can’t properly assess impact, and NGOs can’t prove it. By focusing on India, this project tests whether lightweight, AI-assisted frameworks can make reporting clearer, more useful for NGOs and more funder-friendly, with lessons that may extend beyond this context.

The project explores how accessible AI tools, like large language models, could support core MEL functions such as building Theories of Change, choosing indicators, and generating donor reports. It also looks at how AI might make qualitative data, like interviews and field reports, more actionable for learning. Smaller organisations often say they would benefit from free, simple tools for data analysis and reporting. The aim here is to identify where MEL challenges are greatest and test whether AI can provide practical support: saving time, improving reporting, and helping groups make better decisions.

A recent global survey by Open Paws shows that the challenge is not limited to India. Nearly half of animal advocacy organisations said they rarely or never use AI. When they do, it is mostly for simple creative tasks like writing posts or editing content, while more complex uses such as grant writing, automation, or data work are uncommon. The survey highlights a clear need for training, funding, and low-barrier opportunities for smaller groups to experiment with AI.

Problem

Thomas Manandhar Richardson’s post explores how data science skills could contribute to the animal welfare movement. It suggests that data scientists could contribute in multiple ways, but are often missing from animal welfare organisations due to funding and talent bottlenecks. Though AI tools won’t fully replace data scientists’ expertise, they could help bridge the gap in data collection and reporting in these organisations.

This project aims to contribute to the resource bank of AI tools for animal advocacy, with a focus on MEL practices. A range of tools already exists, such as AI prompts for animal advocates or Open Paws’ library of tools. These help address several key aspects of AW organisations’ work, such as editing or automating routine tasks. But to date there is no tool specifically tailored to help animal organisations address their MEL pain points.

Therefore, the question this report asks is as follows:

What type of AI tool could animal welfare organisations based in India benefit from to improve their monitoring, evaluation and learning work?

This report presents the insights gathered from interviews with stakeholders across the animal welfare ecosystem, followed by practical recommendations on how AI tools might be designed to address real-world MEL bottlenecks.

Methodology

This report draws on five semi-structured interviews conducted with stakeholders involved in monitoring, evaluation, and learning in the animal welfare sector in India. Interviewees included MEL advisors, practitioners and support organisations such as Mission Motor. Each interview was guided by a set of open-ended prompts tailored to the participant’s role and expertise. The goal was to surface recurring pain points, areas of opportunity for AI tools, and contextual constraints relevant to MEL practices in this space. Interviews were transcribed and analysed qualitatively to identify themes, convergences, and points of divergence.

Key findings

1. Monitoring & Evaluation is challenging and primarily donor-driven

  • Small AW organisations in India have limited experience with formal MEL systems.
  • Data collection tends to be activity-focused (e.g. number of trainings, animals treated), not outcome-oriented (e.g. reduction in suffering).
  • Organisations often collect data for external accountability rather than internal learning or strategic improvement.
  • There is a mismatch between what donors ask for (impact data) and what early-stage programs are realistically able to deliver (process and early outcomes).
  • MEL is frequently seen as a burden: a report-writing exercise rather than a learning process.
  • As Nicoll from Mission Motor puts it, most organisations track what they do and who they reach, but rarely ask why change happens. This gap makes it hard to learn from interventions or adapt them. Data is usually collected after the fact, making attribution tricky. Even when behaviour change is tracked, it often relies on self-reports, which are prone to bias. More rigorous methods like A/B testing are possible but rarely used, as they require larger sample sizes or more technical capacity than most orgs have.
  • As one MEL advisor puts it, “People are thinking about measuring impact instead of whether their processes are working.” Many organisations enter MEL from a method-first mindset (e.g., using a survey), rather than asking what question they’re trying to answer. This leads to confusion, inconsistent data practices, and missed opportunities for learning.

2. ToC is central but remains fuzzy

  • Organisations often have implicit theories of change (e.g. “education reduces harm”), but these are not always articulated in structured formats.
  • The Mission Motor’s MEL support programs begin with 2–3 dedicated sessions just to build a Theory of Change.
  • Theories of Change are essential to guide indicator selection.
  • However, these documents are rarely linked to ongoing monitoring practices, leading to disjointed MEL systems.

3. Lacking capacity to analyse data 

  • Even when data is collected, there is limited capacity to analyse or act on it, due to time constraints, lack of training, and unclear indicator definitions.
  • Many organisations lack dedicated MEL staff, and generalist teams often manage reporting with limited formal training, leading to minimal, reactive analysis driven by donor deadlines.
  • This limits organisations’ ability to track progress toward outcomes or adapt their strategies.
  • As Nicoll from Mission Motor noted, data analysis is still largely done manually: “At the moment, the way we work is to analyse the data still by hand.” She emphasised that MEL tools must strike a balance, “light enough so that it doesn’t overburden your staff… but robust enough to give you at least some confidence in the answers.”
  • These constraints suggest a role for lightweight AI tools that can support basic analysis without requiring advanced skills or infrastructure.

4. Qualitative data is underused but AI could unlock it

  • Field-level MEL advisors noted that qualitative data such as interviews, feedback, and field notes holds untapped potential for AW orgs, especially for understanding how and why change occurs. However, it is rarely analysed systematically.
  • These perspectives suggest a strong case for LLM-based tools that assist with coding, sentiment analysis, or identifying recurring themes in open-ended responses. This could enable deeper learning without requiring high technical capacity.
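
As a minimal sketch of that idea (and only a sketch), the snippet below bundles open-ended responses into a single prompt asking an LLM for 3–5 themes with verbatim quotes, leaving the actual model call to whatever tool an organisation already uses. The prompt wording and sample responses are my own assumptions, and any output would still need a human check that the quotes are real.

```python
# Sketch: build a prompt that asks an LLM to surface recurring themes with
# supporting quotes from open-ended responses. A human verifies the output.

THEME_PROMPT = """Below are open-ended responses from an animal welfare programme
in India. Identify 3-5 recurring themes. For each theme, give a short name, a
one-sentence summary, and 1-2 verbatim quotes that illustrate it.
Do not invent quotes; only use the text provided.

Responses:
{responses}
"""

def build_theme_prompt(responses: list[str]) -> str:
    numbered = "\n".join(f"{i + 1}. {r}" for i, r in enumerate(responses))
    return THEME_PROMPT.format(responses=numbered)

if __name__ == "__main__":
    sample = [
        "The workshop was useful but too long; I missed half of it for farm work.",
        "I now separate sick birds earlier, the checklist helped.",
        "Transport to the training was expensive.",
    ]
    print(build_theme_prompt(sample))  # paste into an LLM, then check quotes by hand
```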

5. Technology use is highly variable

  • The adoption of digital tools depends heavily on whether someone within the organisation is personally comfortable with tech.
  • This person-dependency also makes MEL systems fragile: if the tech-enthusiast leaves, the system might collapse.
  • Some Indian organisations remain tech-averse, preferring manual systems or doing minimal digital reporting.
    “It could be a barrier or be a facilitator. But given that aversion to tech that I've seen, I'm inclined to say that is a barrier. That is going to be a barrier, not an opportunity” - Koushik, Mission Motor
  • Cost remains a barrier; even seemingly small fees can become prohibitive. As Koushik explained, “$20 a month, given the funding situation in India, where it’s very difficult to get in money from outside, that might become a factor.”
  • As Nicoll from Mission Motor explained, digital readiness varies widely, even basic familiarity with tools can’t be assumed: “We’ve worked with small organisations that had not worked with Google Sheets before.” She emphasised that digital barriers go beyond software: “Is staff trained in the use of whatever digital tools we have, whether it's a laptop or a tablet or a mobile phone, particularly for LMIC countries?”
  • She also added, “From there to AI is still a bit of a jump.” This underscores the need for gradual, low-friction design choices when introducing digital tools in under-resourced settings.

6. AI tools can help but require expert oversight

  • Stakeholders expressed optimism about AI’s potential to accelerate MEL-related tasks such as:
    • Drafting Theories of Change
    • Generating sample indicators
    • Designing surveys
    • Writing donor reports
  • However, they cautioned that LLMs must be used as scaffolding tools in combination with skilled facilitators.
  • The quality of AI-generated MEL content depends heavily on the prompt, context, and human interpretation of results.
  • Without proper oversight, AI may produce plausible-sounding but misaligned or incoherent outputs.

Recommendations (by audience)

Epistemic status: medium confidence. Based on five qualitative interviews. Small sample size: findings likely capture common early-stage MEL bottlenecks in India but won’t apply everywhere. AI assisted with transcript cleanup, clustering, and copy-editing.

1. For AI tool builders

  • Animal welfare orgs don’t need a big “all-in-one MEL system.” They need a few small, easy tools that talk to each other.
  • Build simple modules:
    • A helper to make a one-page map of how a project creates change (Theory of Change).
    • A tool that suggests a few sensible indicators for each goal.
    • A short survey builder (≤10 questions, mobile/paper-friendly).
    • A quick 2-page report writer with charts and highlights (a rough sketch follows this list).
    • A basic analyser that pulls themes and quotes from interviews or notes.
  • Keep people in charge: Always explain why a suggestion was made and ask the user to confirm. Add reflection prompts like: “Can your team realistically track this?”
  • Design for low-resource contexts: Mobile first, offline drafts, demo without login, auto-save, built-in consent text.
  • Offer local examples: Provide ready-to-use templates relevant to Indian AW orgs, in English and Hindi, open-license and adaptable.
  • Make integration easy: One-click export to Google Docs/Sheets, surveys that work on WhatsApp or SMS, and simple dashboards.
  • Check if it’s useful: Aim for quick wins: ToC in <1 hr, indicators in <5 min, surveys in <10 min, reports in <15 min, ≥70% survey completion, <5% errors.
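
To illustrate the report-writer module from the list above, here is a minimal sketch that fills a fixed two-page structure (highlights, one indicator table, lessons, next steps) from data a team already tracks. All field names and example figures are invented; this is not how the prototype is implemented.

```python
# Sketch: fill a fixed short-report structure from data the team already tracks.
# The sections mirror the "highlights / one table / lessons / next steps" shape
# recommended in this report; all names and numbers below are illustrative.

REPORT_TEMPLATE = """Quarterly update: {programme}

Highlights
{highlights}

Progress against indicators
Indicator | Target | Actual
{rows}

What we learned
{lessons}

Next steps
{next_steps}
"""

def bullets(items: list[str]) -> str:
    return "\n".join(f"- {item}" for item in items)

def draft_report(programme: str, highlights: list[str], indicators: list[dict],
                 lessons: list[str], next_steps: list[str]) -> str:
    rows = "\n".join(f"{i['name']} | {i['target']} | {i['actual']}" for i in indicators)
    return REPORT_TEMPLATE.format(
        programme=programme, highlights=bullets(highlights), rows=rows,
        lessons=bullets(lessons), next_steps=bullets(next_steps),
    )

if __name__ == "__main__":
    print(draft_report(
        programme="Fish farmer outreach",
        highlights=["120 farmers reached", "First follow-up visits completed"],
        indicators=[{"name": "Farmers trained", "target": 100, "actual": 120}],
        lessons=["Shorter sessions worked better than full-day workshops"],
        next_steps=["Pilot the 10-question phone survey"],
    ))  # export to Google Docs or PDF only after a human review
```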

2. For animal welfare organisations

  • Even small groups can build a “minimum viable MEL system” in 90 days.
  • Name a MEL point person: 2–4 hrs/week is enough to keep things moving.
    • Follow a 3-month plan:
      • Month 1 → draft a one-page ToC with 3–5 outcomes.
      • Month 2 → pick a few indicators and note who will collect what, and how often.
      • Month 3 → pilot a short (≤10 question) survey, refine, then roll out (not always feasible, it depends on what the indicators are and how the data needs to be collected).
  • Keep it light: Only collect what you’ll actually use. Aim for ≥70% survey completion with <5% errors.
  • Make stories usable: Transcribe interviews, pull 3–5 themes and quotes, discuss monthly as a team, and log at least one change.
  • Protect data: Always get consent, minimise personal info, store securely, share only clean exports.
  • Report simply: Send donors a short 2-page update every quarter: highlights, one table, lessons, and next steps.
  • Check signals: ToC by week 4, indicators tracked by week 8, survey live by week 12, and at least one program change recorded.

3. For intermediaries / capacity builders

  • Networks and support providers can help small orgs get started without overwhelming them.
  • Publish a starter playbook: Show one worked example (e.g. farmer training or campus outreach) with a 1-page ToC, 3 outputs, 2 outcomes, a 10-question survey, and a simple rubric.
  • Host a shared indicator library: 50–70 ready-to-use indicators with clear definitions, example questions, answer choices, and a note on how much effort they take for respondents. This gives NGOs a practical starting point, saves them from reinventing the wheel, and makes indicators easier to reuse across programs. Everything would be translated into local languages and kept open-access. A possible entry format is sketched in code after this list.
  • Offer light-touch clinics (60–90 min): Quick ToC drafting, indicator checks, or survey reviews. End with 2–3 clear next steps.
  • Keep a help channel open: WhatsApp/Slack group with pinned templates, consent text, and a reflection agenda. Hold rotating office hours.
  • Support qualitative learning: Consent → transcribe → code → pull 3–5 quotes → 45-min team reflection → log one change.
  • Track your support: Number of orgs helped, ToCs drafted, indicators adopted, survey completion rates, report turnaround time, and logged changes.
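
To make the shared indicator library concrete, one possible entry format is sketched below. The fields follow the description above (definition, example question, answer choices, respondent effort); the identifier, wording, and licence are illustrative suggestions rather than an existing standard.

```python
# Sketch: one entry in an open-access indicator library. Field names follow the
# description above; values are illustrative and would be translated locally.
import json

INDICATOR_ENTRY = {
    "id": "AW-TRN-01",
    "name": "Share of trained farmers adopting at least one welfare practice",
    "definition": ("Farmers who report adopting one or more promoted practices within "
                   "three months of training, divided by all farmers trained in that period."),
    "example_question": "Since the training, have you started any of these practices?",
    "answer_choices": ["Yes, one", "Yes, more than one", "No", "Not sure"],
    "respondent_effort": "Low: one phone or in-person question",
    "languages": ["en", "hi"],
    "license": "CC-BY",
}

if __name__ == "__main__":
    print(json.dumps(INDICATOR_ENTRY, indent=2, ensure_ascii=False))
```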

4. For funders and conveners

  • Funders can set the tone by funding early experiments and lowering the barrier for small orgs.
  • Support structured pilots: Back 3–5 orgs for 2–3 months to try a simple MEL toolkit. Cover staff, translation, and data costs. Track time saved, module completion, survey response rates, report quality, and user trust.
  • Match asks to org maturity: For small/early-stage groups, accept output and early-outcome reporting tied to a ToC. Ask for a short 90-day MEL plan and a one-page learning note, not polished impact claims.
  • Invest in shared infrastructure: Keep the indicator library alive, support translations, lightweight hosting, and community help channels. Fund clinics and office hours.
  • Add guardrails: Require consent text, minimal personal data, clear access/retention rules, and human review of AI-generated outputs before publication.
  • Share learning openly: Publish anonymised “pilot packs” with example ToCs, indicators, surveys, common pitfalls, and realistic cost notes. Keep a small rolling fund for localisation and iteration.

Guardrails 

  • Link metrics to your goals. Only track things that connect to your Theory of Change.
  • Drop metrics you don’t actually use to make decisions.
  • Keep surveys short. Aim for 10 key questions or fewer. Use skip-logic so people only answer what’s relevant. Add more detail later once the basics are solid.
  • Design for low-resource use. Mobile-first, offline drafts that auto-save, easy export to Google Docs/Sheets/CSV, and survey options through WhatsApp.
  • Grow step by step. Start with simple tools, then add integrations and dashboards as your team’s capacity increases.
  • Always review AI outputs. Insert a quick human check before sharing or publishing anything generated by AI.

Limitations

This report draws on a small number of semi-structured interviews and focuses primarily on organisations already somewhat engaged in MEL discussions. As such, findings may underrepresent the needs of local groups. The interviews were conducted with MEL trainers and donors rather than the organisations themselves, so there may be some bias in the types of issues and solutions that surfaced. What is more, while early tool concepts were discussed with interviewees, they were not piloted in real workflows. Future research should test usability, evaluate actual time savings, and assess whether tools actually improve MEL practices. The AI tools proposed are intended as a support for, not a substitute for, MEL expertise or organisational ownership.

Conclusion

Improving MEL matters. For animal welfare orgs in India, it’s often the difference between activity reporting and genuine learning about what helps animals most. The barriers (thin staff capacity, little training, donor-driven reporting) are real, but they also highlight where small, well-placed interventions could shift things. And there’s good reason to invest: evidence shows that higher-quality MEL is linked to stronger project outcomes, with at least one study suggesting a causal connection.

The building blocks of quality MEL are well-known: a clear theory of change, sensible indicators, data collection tools that actually measure those indicators, and systems that feed insights back into decisions. AI can help lower the barrier to these basics. Light, modular tools (e.g. a ToC sketchpad, an indicator suggester, a short survey builder, a two-page donor report generator) can make implicit logics explicit and nudge teams toward better data practices.

What counts is fit and gradual build-up. A tool that saves fifteen minutes a week and earns trust is more useful than a dashboard that never gets opened. Progress comes from piloting across diverse orgs, iterating on feedback, and keeping ownership in the hands of practitioners, while steadily layering these lightweight starting points into more rigorous, high-quality MEL systems over time.

Call to action

If you’re up for testing or leaving feedback on the prototype, I’d really appreciate it. Try the Lovable MEL Assistant here!

Appendices

A. Interview guide

List of core questions used in the semi-structured interviews:

Monitoring & Evaluation
Do you currently track or measure the results of your work? How?
What kind of data do you usually collect (e.g., number of animals helped, outreach, events)?
How do you analyze or use that data?
Do you write reports for donors, the public, or internally?
What’s the most time-consuming or confusing part of MEL right now?
If you had more support in this area, what would be most helpful?

Tech and AI use
Do you use any tech tools or software for data collection or reporting?

Needs and ideas
If you had access to a free AI tool that could help with MEL, what would you want it to do?
What are the biggest barriers to using more data or tech in your org? (e.g., time, training, staff, language, internet)
Would you be open to testing or giving feedback on a simple AI tool later?

B. Interviewee List and Roles

Jamie, Mission Motor
Role: MEL advisor and program designer

Chetan, Donor Side / EA Network
Role: Funding ecosystem intermediary

Koushik, MEL Facilitator
Role: Monitoring, Evaluation and Learning Associate (The Mission Motor)

Thomas, Mission Motor
Role: Co-founder at Fish Welfare Initiative and M&E Associate at Mission Motor

Nicoll, Mission Motor
Role: Executive Director
 

C. Consent and Reasoning Transparency

All interviews were recorded with informed consent.
Participants were asked for permission to be quoted. Direct quotes used in the report reflect only consented material.
Where anonymity was requested or appropriate, identifying details were removed.
Analysis was grounded in thematic coding of transcripts, with triangulation across interviews to identify shared pain points and opportunities.
ChatGPT was used to support the data analysis and editing process of this document.

 

 

