
What is Power for Democracies (P4Dem)?

We are a nonpartisan research organization that evaluates civil society organizations (CSOs); makes recommendations to impact-oriented donors; and produces analysis to help democracy researchers and practitioners improve their understanding of which pro-democracy tactics work and under what conditions. 

I've been Research Director at P4Dem for the past year. This post is a reflection on our first major project, what I think we got right, and where we can do better.

Our premise is that structured prioritization and transparent evaluation can improve funding decisions in complex, low-evidence domains, like democracy. A key focus for us is developing better ways to assess whether that premise holds over time. 

What We Did: ECAPB Project in Brief

Our first project — Effectively Countering Authoritarian Playbooks (ECAPB) — aimed to produce donor recommendations for the founding members of our donor network.

The project involves four completed stages and a yet-to-be-completed impact evaluation stage. We started by running two parallel workstreams: prioritizing countries and prioritizing tactics.

Country prioritization used a framework built around four dimensions — Importance, Threat, Tractability, and Opportunity. We combined quantitative data from sources like V-Dem, CIVICUS, and the World Bank with qualitative country profiles and a Delphi-inspired expert consensus process to narrow from a global pool down to seven priority countries: Hungary, Turkey, Italy, Indonesia, Poland, Argentina, and (separately) the US.
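To make the shape of this concrete: a composite score over the four dimensions can be computed as a weighted average of normalized indicator scores. The weights, scores, and country names below are purely illustrative assumptions, not P4Dem's actual model or data:

```python
# Illustrative composite scoring over the four prioritization dimensions.
# WEIGHTS and the per-country scores are hypothetical, not P4Dem's real values.

WEIGHTS = {"importance": 0.3, "threat": 0.3, "tractability": 0.25, "opportunity": 0.15}

def composite_score(scores: dict) -> float:
    """Weighted average of dimension scores normalized to the 0-1 range."""
    return sum(WEIGHTS[dim] * scores[dim] for dim in WEIGHTS)

countries = {
    "Country A": {"importance": 0.8, "threat": 0.9, "tractability": 0.6, "opportunity": 0.5},
    "Country B": {"importance": 0.6, "threat": 0.4, "tractability": 0.9, "opportunity": 0.7},
}

# Rank countries by composite score, highest first.
ranked = sorted(countries, key=lambda c: composite_score(countries[c]), reverse=True)
```

In practice a purely quantitative ranking like this would only be a starting point; the qualitative profiles and expert consensus rounds are what turn the numbers into a defensible shortlist.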

Tactic prioritization involved rapid literature scans of roughly 35 tactics CSOs commonly use — things like strategic litigation, voter mobilization, investigative journalism, and lobbying for anti-corruption measures — scored on a rubric covering quality of experimental evidence, theoretical groundedness, and applicability to context. These scans are available on our password-protected research tools page — reach out by email for access.

To prioritize country threat-tactic combinations, we wrote a deep-dive report for each priority country combining desk research with structured expert interviews (~5 per country), developed theories of change for the most promising tactic-threat combinations, and identified specific CSOs to evaluate.

Finally, CSO evaluations involved structured rubric scoring by two independent researchers, qualitative review, and reference checks with previous funders. The CSOs that passed the initial evaluation were then reviewed by two additional research team members and the executive director.
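A minimal sketch of the two-rater rubric step described above — the criteria names, scoring scale, and disagreement threshold here are hypothetical, chosen only to illustrate the mechanics of averaging independent scores and flagging divergence for further review:

```python
# Illustrative aggregation of two independent rubric scorings of one CSO.
# Criteria, the 1-5 scale, and the flag_gap threshold are assumptions.

def aggregate(rater_a: dict, rater_b: dict, flag_gap: float = 2.0):
    """Average the two raters per criterion; flag large disagreements for review."""
    merged, flagged = {}, []
    for criterion in rater_a:
        a, b = rater_a[criterion], rater_b[criterion]
        merged[criterion] = (a + b) / 2
        if abs(a - b) >= flag_gap:
            flagged.append(criterion)
    return merged, flagged

rater_a = {"theory_of_change": 4, "evidence_base": 3, "leadership": 5}
rater_b = {"theory_of_change": 4, "evidence_base": 5, "leadership": 4}
merged, flagged = aggregate(rater_a, rater_b)
# flagged criteria would go back to the raters (or a third reviewer) for reconciliation
```

The value of independent scoring comes less from the averaging than from the disagreements it surfaces, which is why the flagged criteria, not the merged numbers, drive the follow-up review.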

Our first ECAPB recommendations cover organizations in Argentina and Turkey working on strategic litigation and legal support. You can see examples of our country reports, evaluations, and recommendations on our website.

For those who want the full methodological detail, our ECAPB methods paper is linked here. It explains considerably more about our research process, limitations, and confidence levels.

A note on the US: We did not follow the same ECAPB process for the US because we did not have the resources to do a proper investigation at the time. Instead, we selected among initiatives already identified by impact-oriented US researchers, after additional review and vetting – it is linked above with the other recommendations. The US will be the primary country focus of our work this year.

Three Things I Was Asked at EAG SF

I attended EAG SF last week and had the opportunity to get feedback and questions from a couple dozen attendees. There were a few common questions that I think are valuable for the community to engage with, especially as many members are beginning to enter the democracy space.

1. Can we say anything with certainty about what works to counter authoritarianism?

Short answer: not much with precision — but not nothing, and an honest accounting matters.

The research base is a mix of historical case studies, small-scale experiments measuring short-term changes in attitudes (and occasionally behavior), and some observational studies of larger interventions. Together they give you a rough map of which types of interventions have some evidential support and which have almost none. But for most interventions, in most countries, there is no highly certain, rigorously validated approach with demonstrated large-scale democratic impact.

The finding that many commonly-funded democracy tactics have thin empirical backing is itself worth explicitly documenting, as we do in the tactical literature scans (accessible on the aforementioned tools page). For example, we found reasonable evidence for nonviolent civil resistance, strategic litigation, and advocating for legal and judicial reforms. We found weaker evidence for participatory decision-making and training journalists (though the picture is more nuanced than “it doesn’t work”). For many other tactics, existing research, if any, measures short-term outputs in controlled settings, with very limited data on downstream democratic impact.

An exceptional case is voter engagement in the United States, which is uniquely well-evidenced and could, in competitive races, counterfactually influence the outcome of elections. Please see our US recommendation linked above for more details.

2. Given limited evidence, is rigorous prioritization still worth doing?

Yes — but the value isn’t evenly distributed across the process.

Country prioritization is useful primarily for identifying where civil society intervention is tractable at all. Not every threatened democracy is near a democratic inflection point, and ruling out contexts where resources are likely to be wasted is worth doing even with imperfect data.

Tactic prioritization — even our rushed version — provides a better-than-intuition map of the evidence landscape. Choosing interventions for their narrative appeal rather than their evidential support can lead to inefficient resource allocation.

The CSO evaluation phase is where I think the most value is created. Even with limited outcome data in the field, there are large differences across organizations in how robust their theories of change are, how much evidence supports those theories, and how well-equipped the organizations are in terms of leadership, skills, and networks inside and outside government. Identifying those differences systematically likely matters.

While we believe structured evaluation can add value, an important question is how much it meaningfully improves on existing donor judgment, and developing better ways to answer that question is part of our ongoing work.

3. Is deep evaluation useful for something as fast-moving as resisting authoritarianism?

Our research process took nine months to produce our first recommendations. The world kept changing — Hungary was effectively paused mid-project when new legislation threatened to severely constrain foreign funding of civil society there. Our current model isn’t designed for emergency response; it can’t rapidly disburse funds when a journalist is raided overnight.

But I don’t think this makes our work mismatched to the problem. Organizations that are well-evaluated and well-resourced going into a crisis are better positioned to respond. A vetted list of trustworthy, impact-oriented CSOs has real option value for emergency funders who need to move fast. And as we build out and share our analyses and impact evaluations, our findings can feed back into more dynamic democracy funding.

There’s also something worth saying about scope: for every dramatic authoritarian move that makes international headlines, there are hundreds of CSOs doing the unglamorous year-round work of democracy — investigating corruption, protecting voting rights, monitoring government spending. That work is foundational to democratic resilience and needs sustained, thoughtful funding. Our model is better suited to supporting those types of organizations, which might suggest some tweaks to our country prioritization approach.

What’s Next

We’ll be posting more donor recommendations, analyses, and tools documents over the next couple of months. I’ll also write additional posts, including one on this year’s US non-electoral work.

For those willing to share, how does our structured prioritization compare to evaluation approaches in other low-evidence domains? I’d be grateful for perspectives on where our methodology seems aligned with (or importantly different from) best practice elsewhere in EA.

Finally, we’re actively growing our donor network — if you’re an impact-oriented donor interested in supporting our recommendations, we’d love to talk. 

Reach me at Samantha.sekar@powerfordemocracies.org or visit powerfordemocracies.org.
