Thomas Billington's EA Forum account. I am the co-founder of Fish Welfare Initiative. I also work as a Monitoring, Evaluation, and Learning Associate at The Mission Motor.
My particular areas of passion are:
If you are interested in researching, supporting, or becoming an early-stage hire of an organisation answering questions around how we can create change for animals in LMICs, I would be interested in connecting.
If you want free M&E support for your animal project, email me at: tombillington@themissionmotor.org
I am also considering pro bono consulting for groups working for animals in LMICs, especially those working directly with farmers. If you would be interested in chatting, let me know.
Strong agree.
And I understand why this is a problem. It can be hard to independently create contacts in these spaces from scratch, and there is an aspect of not knowing what you don't know at play. I'm almost certain I am committing the same mistake in multiple places in my work.
I'd be interested to think about solutions here. Perhaps a group such as Consultants For Impact could take on a knowledge-dispersal role, doing things like getting project management experts to give talks at EAGs?
For sure!
I would say that EAs are missing large parts of M&E, including:
- Formally setting the key questions/assumptions that will form the basis of what you focus on answering
- Creating formal monitoring frameworks (e.g. a logframe) that take these questions/assumptions and identify practical indicators and methods of measuring them
- I think EAs don't use the full diversity of M&E tools. In my experience we tend to over-index on surveys (vs., say, interviews, focus group discussions, or observational data)
- Given how frequently we use surveys, I think we could generally upskill in high-quality survey design
- Using a diverse set of evaluation types (EAs generally know about RCTs, but these are only a narrow slice of the evaluation types available)
In general I think we care about M&E but lack experience in the formal processes of it, especially monitoring. So application is patchy and not generally in line with best practices.
I should perhaps clarify that I am mostly talking about the non-global-development side of EA; the global development side's norms for M&E are significantly better.
INTRAC's M&E Universe is one place to see an overview of what M&E entails. I believe The Mission Motor also intends to create more resources on these topics in the future :)
My advice for EAs who want to skill up in a neglected area:
In general, when learning a new skill, Andrea Gunn’s talk on training leaders offers a lot of good insight. I also made a one-page summary of her talk.
I have historically been able to do this upskilling as a side project to my existing job.
Aaron!
Thanks for posting :) I’m coming at this as someone who spent a lot of time running on-farm research at Fish Welfare Initiative (and planning to do more through my new charity). I broadly agree, but I’d add a few caveats:
The core issue with farms as “welfare labs” is heterogeneity (variability). Especially in LMICs, farms are messy, uncontrolled environments where confounding variables easily creep in. That creates a lot of statistical noise. If you’re aiming for high certainty, farms can make that difficult.
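To make the "statistical noise" point concrete, here is a minimal simulation sketch. Everything in it is hypothetical (the welfare scores, effect size, and farm counts are made-up numbers, not data from our work), and the farm-level t-test is just one reasonable analysis choice. It illustrates that when an intervention is assigned per farm, more between-farm variability means less power to detect the same true effect:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

def power(farm_sd, n_farms=20, fish_per_farm=30, effect=0.5, trials=2000):
    """Fraction of simulated trials where a farm-level t-test detects
    a true welfare effect, given between-farm heterogeneity (farm_sd)."""
    hits = 0
    for _ in range(trials):
        # Each farm has its own baseline (the heterogeneity), and each
        # fish adds individual noise; we analyse farm means because the
        # intervention is assigned per farm, not per fish.
        baselines = rng.normal(0, farm_sd, n_farms)
        fish_noise = rng.normal(0, 1, (fish_per_farm, n_farms)).mean(axis=0)
        farm_means = baselines + fish_noise
        treated = farm_means[: n_farms // 2] + effect  # half the farms get it
        control = farm_means[n_farms // 2 :]
        if ttest_ind(treated, control).pvalue < 0.05:
            hits += 1
    return hits / trials

# More between-farm variability -> less chance of detecting the same effect.
for sd in (0.0, 0.5, 1.0, 2.0):
    print(f"farm_sd={sd}: power ~ {power(sd):.2f}")
```

With no heterogeneity the effect is detected nearly every time; once the between-farm spread grows past the effect size itself, power collapses. That is the kind of difficulty I mean with messy, uncontrolled farm environments.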
Relatedly, on-farm research makes it harder to isolate specific effects. You mention the benefit of insights collected “under real commercial conditions”, and I agree that ultimately effectiveness in the real world is what matters. But there’s also strong value in isolating variables to understand mechanisms. If we’re testing whether pigs prefer straw or wood shavings, we may not want to simultaneously capture differences in how farmers manage those materials. Otherwise, when results don’t support our hypothesis, we don’t know whether we’re observing animal preference or management differences. That’s why the difference between efficacy (does it work in controlled conditions?) and effectiveness (does it work in real-world conditions?) is useful.
So in sum: on-farm research is a valuable tool, but it can’t replace controlled research, and I’d hesitate to frame it as more valuable.
I also think two ideas may be getting conflated: monitoring existing farm conditions vs running experiments on farms. Both can be useful, but both need a clear use-case for the data. I agree, though, that there’s strong potential on both fronts.