What it looks like to make funding decisions in places where good evidence doesn’t exist, and waiting isn’t an option
Most of the grant decisions I’ve made in conflict settings would not meet typical EA standards for evidence.
EA often talks about making decisions under uncertainty. In practice, most funding decisions still rest on fairly strong evidence. That evidence might come from RCTs, from past data, or at least from something stable enough to reason from.
That was not the environment I was operating in.
Across multiple conflict settings, I was making funding decisions in places we could not access, with organizations we had sometimes known for only a few days, and with no real way to measure results in the near term. Waiting for better evidence was usually not a real option. The choice was to act with limited information or not act at all.
A lot of those decisions would not have cleared that bar. I still think many of them were the right calls. So what do you do when some of the information you would normally rely on simply is not there?
Start with the decision
The main shift is that you do not begin with the data. You begin with the decision itself, and with what it costs to wait. In more stable settings, waiting is often fine. In conflict settings, it usually is not. I saw this repeatedly when local groups were trying to stay operational during periods of instability. A delay of even a few weeks could mean losing staff, losing access, or losing the ability to operate in an area altogether. Those losses were often not recoverable.

So the question becomes: what happens if you fund something and it does not work, versus what happens if you do not fund it and it would have worked? You are still trying to make a judgment about impact, but you have to do it without much precision.
Check if the plan makes sense
When you cannot measure results, you fall back on whether the plan makes sense. That means pushing on whether the logic actually holds up in that specific place. Does the plan fit the context, or is it imported from somewhere else? Can you see the steps along the way, even if you cannot measure the final outcome? Has anything similar worked in a comparable setting? In one case early in a conflict, we funded a small group that was sharing verified information in areas where telecoms were unreliable. We could not track what people did with that information. What we could see was that misinformation was spreading quickly, this group could reach people others could not, and people already trusted them. That was enough to act. Many proposals fall apart at this step, not because there is no data, but because the logic does not hold together when you look at it closely.
Focus on the team
The other shift is how much weight you put on the people doing the work. In these settings, the team matters more than the plan. We funded things where the plan was not fully worked out but the group had done similar work under pressure. We also passed on proposals that looked good on paper but where the organization had no experience operating in conflict. One decision came down to two groups working on similar problems. One had a clearer plan and better reporting. The other was less structured but had kept working through earlier shocks and had strong local ties. We funded the second. They kept going as conditions deteriorated. The first did not. Groups that look less polished often hold up better when things get difficult.
Learn quickly
You also have to accept that you will not get clean data, but that does not mean you get nothing. We tried to learn as quickly as possible, even if the signals were noisy. That meant paying attention to whether anything changed in the short term, whether a group could actually execute, and whether we could adjust without losing too much. In practice, this often meant starting smaller than the problem would suggest, because it let us update. Some grants were set up that way on purpose. Part of the goal was to support the work, and part of it was to understand what was actually possible.
Pay attention to failure
There are also cases where the main question is not expected impact, but how things could go wrong. Some ideas are not worth the risk, even if the upside looks high. In conflict settings, failure can mean exposing people to retaliation or making local tensions worse in ways that are hard to control. So time goes into thinking through what failure looks like in practice, who is affected, and whether that risk can be reduced. From a distance, failure can sound like a program not working. In reality, it can mean real harm to specific people.
With limited information and time pressure, some decisions will not work out. The goal is not to avoid that. It is to make decisions that were reasonable given what you knew at the time. That requires being clear about your reasoning going in: what you are assuming, what would change your mind, and what you are watching. Writing this down helped more than I expected. It made it easier to adjust when conditions changed, which they did often.
What is different here
I do not think this is a rejection of EA ideas. The goal is still to do the most good with the resources available. The difference is the situation you are operating in. You have less reliable information, less time, and higher stakes. In practice, that shifts what you rely on. You put less weight on strong evidence at the start, more weight on whether the plan makes sense and whether the group can carry it out, and more attention on what could go wrong. You also accept that you will be updating more often, based on weaker signals. Those are not abstract choices. They come from the limits of the setting.
Scope
Some of this is specific to conflict environments. Most of it is not. Any situation where decisions are urgent, information is limited, and the stakes are high runs into similar problems. You are making important choices without the inputs you would ideally want. The question is not whether to act. It is how to act in a way that is still measured.
Open questions
A few things I still find genuinely hard to resolve:
How do you think about cost-effectiveness when you cannot measure outcomes directly? My working answer has been to focus on whether the mechanism makes sense rather than trying to estimate the magnitude. Can you see the steps working, even if you cannot see the final result? That is not the same as measuring impact, but it is not nothing either.
How do you compare options when the level of uncertainty is very different across them? I have not found a clean answer here. In practice I tended to weight the downside more heavily when uncertainty was high, and accept a lower expected outcome in exchange for more clarity about what could go wrong. Whether that is the right trade-off depends on the situation and I am still working through it.
How do you use local knowledge when it does not fit into standard ways of evaluating evidence? This is probably the question I find most interesting and most unresolved. Local knowledge is often the most reliable signal available in conflict settings, and also the hardest to document or pass on to others. I do not think the answer is to ignore it. I think it requires being more explicit about how you are using it and what would change your mind.
These questions are not unique to conflict settings, but they are harder to ignore there.
If this is useful, I plan to write a follow-up on how this looks across a full portfolio, and what it might mean for grantmaking in governance and fragile-state contexts specifically.
