Note: I’m undertaking a larger project to assess the potential of improving institutional decision-making (IIDM)—sometimes referred to as “institutional decision-making,” “effective institutions,” etc.—as a cause area or strategy within EA. My goal is to have more public discourse about IIDM. This is my first post in a series, which is the product of my conversations with EAs and research over the last six months. My conclusions are all tentative, and I invite all to collaborate with me in understanding and improving institution-focused work in EA.
Thank you to Lizka Vaintrob, Konrad Seifert, John Myers, Nathan Young, Nuño Sempere, Jonathan Schmidt, Ian David Moss, Dan Spokojny, and Nick Whitaker for comments during the writing process. Mistakes are my own.
Improving institutional decision-making (IIDM) has emerged as a term and point of discussion within EA in the last five years. It’s been discussed both as a cause area in itself and as a strategy within cause areas. It’s associated with other EA areas of interest, such as forecasting and political engagement.
There’s an obvious guiding intuition here: institutions are powerful forces in world history and drastically affect global well-being and the long-term future. At the same time, many see EA’s emphasis on IIDM, at least in its current state, as highly uncertain and intractable, or, where it is tractable, as posing downside risks. There’s a lot to disentangle.
Several long posts and articles take stabs at how to understand IIDM, how to measure its impact, and where to focus attention. 80,000 Hours considers it among the “second-highest priority areas,” alongside nuclear security and climate change. But compared to those areas, it has a less certain theoretical basis and has seen few projects put into practice.
In this post, I trace the development of IIDM in EA. In brief, I believe that, because of its history in EA, IIDM is thought of too much in group rationality and forecasting terms, deemphasizing other tractable and worthwhile institutional reforms.
Tracing IIDM Origins
There is a deep literature on how governments function, how companies are organized, etc. Academics, particularly economists, political scientists, and sociologists, have studied institutions. Others in epistemology, psychology, and behavioral science have studied decision-making.
The general arc of IIDM within EA begins with early conversations about how to engage with institutions broadly, dating back to at least the early 2010s. Around the same time—though, as far as I can tell, somewhat separately—the Rationality Community began growing in prominence as its members investigated how to make better—more rational—decisions. These discussions began on Overcoming Bias (2006), the blog by Robin Hanson and Eliezer Yudkowsky, and continued on the group blog LessWrong (2009).
I understand current conceptions of IIDM in EA as blends of these two threads of institutional and decision-making work. The 80,000 Hours problem profile on IIDM—the first big piece making the case for IIDM as an area of interest—drew predominantly from the decision-making thread. But broader notions of institutional work persisted. IIDM was further elevated by the IIDM working group of 2019-2020, which evolved into the Effective Institutions Project (EIP). Since then, several organizations and projects have done explicitly IIDM-branded work, and more have done work that is implicitly IIDM or closely related to it. There has also been massive growth in adjacent work, like policy advocacy and political engagement.
I’ll now go a bit more deeply into each of the originating threads.
Effective Altruism and Institutions
Early discussion in EA about institutions seems to have taken place along two somewhat opposed threads within global health and wellbeing (GHB).
The first thread was a point about spending multiples: EA funds could be used to lobby large institutions (e.g. state actors) to spend foreign or domestic assistance money more effectively—thus compounding impact. Making sure the government of Indonesia, for example, improved its provision of rice subsidies to the extreme poor was of great interest, given that billions of dollars and millions of people’s lives were at stake. Because of the scale of governments’ (and other organizations’) aid spending, marginal improvements in spending represented an enormous opportunity for impact.
The second thread of discussion seems to have emerged from a debate around EA, and specifically GiveWell charities, being focused on “marginal” rather than “systemic” change, spurred by critiques from people including Amia Srinivasan in the London Review of Books. Scott Alexander summarizes the debate:
One of the most common critiques of effective altruism is that it focuses too much on specific monetary interventions rather than fighting for “systemic change”, usually billed as fighting inequitable laws or capitalism in general… This same point has been made again and again and again. In response, many effective altruist leaders have gone on to say that they love systemic change and that the movement is entirely in favor of it.
Improving institutions presented an opportunity to make enduring systemic change. There was some investigation of how these changes could be made (e.g. Open Philanthropy on fragile states). Open Philanthropy would later fund areas like immigration reform, housing, criminal justice, and macroeconomic stabilization—all of which have a systemic bent.
The Rationality Community and Decision-Making
Scientific evidence that human biases worsen the quality of decision-making dates back to the early 1970s (though the first papers on the topic were published earlier). Cognitive psychologists and behavioral economists like Amos Tversky and Daniel Kahneman cataloged systematic irrationalities—cognitive biases—in human judgment. Since then, the fields of behavioral science, judgment and decision-making, and behavioral economics have exploded, though many of the fields’ most prominent findings have failed to replicate.
In brief: The Rationality Community and their writings on LessWrong connected these insights with other work in statistics and logic to develop an account of good epistemics and decision theory. They established organizations like the Center for Applied Rationality and Clearer Thinking, which promote tools for improving decision-making. But the vast majority of this work focused on the individual level (though group rationality has been a long-held interest).
Then, Philip Tetlock, in his studies on judgment-based forecasting, found that groups of forecasters could outperform individuals (given certain conditions). With Expert Political Judgment (2005) and Superforecasting (2015), forecasting as a tool received widespread attention for group or organizational use, including from many members of the Rationality Community.
Forecasting’s promising results and adoption by the U.S. intelligence community also suggested that rationality improvements in consequential institutions might be tractable.
Besides forecasting, the work of Robin Hanson and others increased interest in prediction markets as a decision-making technique. The use of prediction markets in government also seemed tractable: Hanson famously worked with DARPA on the Policy Analysis Market, and there seems to have been interest from the U.S. intelligence community as well. The Delphi method and other expert-opinion aggregation systems also looked like promising developments in forecasting.
As I will explain, forecasting events is only one part of the decision-making process, but is probably the most prominent tool, likely because of its evidence base and tractability.
Institutions + Decision-Making
The elevation of institutional decision-making as a cause area occurred sometime in the mid-2010s and seems to have drawn mostly on the latter thread of rational group decision-making. Rob Wiblin spoke at EA Global in San Francisco in 2016 about his best guess at the most important causes. They included AI value alignment, but also “improving intelligence within government, forecasting the future and making better collective decisions.”
A year later (2017), 80,000 Hours published its problem profile on decision-making in institutions, popularizing the term Improving Institutional Decision-Making. Though it is an ‘exploratory’ post (as opposed to a ‘medium depth’ or ‘in depth’ report), this article is probably still the most-read EA piece on IIDM. The profile emphasizes decision-making tools such as forecasting and includes suggestions as to how people might work within IIDM to do good:
- More rigorously testing existing techniques (for improving judgment and decision-making) that seem promising
- Doing more fundamental research to identify new techniques
- Fostering adoption of the best proven techniques in high impact areas
- Directing more funding towards all of the above
It’s worth emphasizing just how focused the problem profile is on decision-making. It highlights Tetlock, Hanson, George Wright (Delphi method), Stephen Coulthart (Structured Analytic Techniques, SATs) and their potential impact on government.
In contrast, if the piece had drawn more from the other thread of institutional work, it might have encouraged 80,000 Hours readers to simply identify high impact institutions, ascend their ranks, and influence operations from within.
In any case, the 80,000 Hours problem profile coined ‘IIDM’ as a term with a heavy emphasis on decision-making. Still, some broader conceptions of institutional work persisted.
IIDM Progress
Assessing the advancement of IIDM since 2017 is difficult. For the purposes of this piece, I’ll slightly amend the areas 80,000 Hours recommended for increased investment to consider what progress has been made:
- Meta: developing IIDM theory and internal cause prioritization
- Rigorously testing existing methods for improving institutions
- Identifying new methods
- Fostering adoption of improvements
- Directing more funding
There’s no public list of IIDM projects, though some organizations have in past years been identified, or have self-identified, as working on this cause area. There is generally very little data on any of these points, so I will proceed with high uncertainty.
1. Meta: Since 2019, the Effective Institutions Project has worked to build theory and community for IIDM. It has published theory and definitions of IIDM and a prioritized list of target institutions, both of which stand out as notable contributions to the field.
Based on feedback from the EA community, EIP came up with the following definition for IIDM (the only one, as far as I am aware):
IIDM is about increasing the technical quality and effective altruism alignment of the most important decisions made by the world’s most important decision-making bodies. It emphasizes building sustained capacity to make high-quality and well-aligned decisions within powerful institutions themselves.[1]
Notice that this definition diverges from what 80,000 Hours initially laid out. It attends to values, not just rationality, in decision-making. The polling responses also led EIP to expand the scope of decision-making to include:
all of the aspects of the process leading up to an institution’s actions over which that institution has agency. Institutional decision-making, therefore, encompasses everything from determining the fundamental purpose of the organization to day-to-day details of program execution.
This definition emphasizes that the range of decision-relevant factors for an institution extends far beyond forecasts. This is a much broader version of IIDM that in some ways recalls the other thread of institutional work.
EIP also attempted to identify the most important institutions for IIDM purposes. The results of their analysis follow. The organization has announced further research to update this model and deepen its understanding of each of these institutions.
This analysis also raises new questions about the scope of IIDM: if one thinks AI capabilities research is intrinsically misguided, there may be some question as to whether “improving the decision-making” of Alphabet, OpenAI, Amazon, or Meta is the right way to understand the problem and its solutions. A focus on values, as EIP suggests, may be pertinent and worthy of investment in these cases.
EIP has also done community-building through Facebook and Slack groups, reading groups, and discussions at EA Global. IIDM events at EAG continue to attract interest.
2 & 3: On testing and identifying techniques: The 80,000 Hours problem profile identifies “techniques for improving judgment and decision-making” as a key area of interest. A better way of reading this is as “techniques for analysis,” since voting systems and other techniques for how to come to a decision outcome are not discussed. That aside, interest in analytic techniques, particularly forecasting, seems quite strong (EAG workshops, Forecasting Newsletter, etc.). Much of the current work is focused on setting up public prediction platforms, prediction markets, and getting better information for internal EA decision-making/messaging (e.g. biorisks, AI timelines) rather than research catered toward ultimately usable innovations for adoption by government or any non-EA institutions. Nuño Sempere’s Venn diagram provides a helpful distinction:
We could draw similar Venn diagrams for the overlap between other tools and IIDM (e.g. research, political campaigns). This is all just to say that despite significant advances in forecasting and decision-making techniques, not all of them should be counted as advances in IIDM, because much of this work lacks institutional applicability.
Rationality tools aside, research into other kinds of institutional changes that could make the world better still seems nascent. The area of AI governance appears to be growing rapidly, however, and could plausibly produce further AI-specific recommendations in the coming months or years.
4. On adoption: Work on adoption of better decision-making techniques or general institutional improvements seems pretty limited (a somewhat speculative claim; there are no good numbers to back this up). Nothing to my knowledge has been written about the success of applied IIDM work from EA-aligned or EA-founded organizations. This means there is little collective learning about which kinds of interventions work in which settings.
There may be several reasons for this: many people and projects conceivably under the IIDM umbrella are working on institutions within a separate cause area (e.g. nuclear and biological risk) or don’t use the label for other reasons. One organization I spoke with actually asked me to remove them from a list of IIDM-aligned organizations we assembled for this article, because they found the IIDM label unhelpful for messaging and fundraising. This means that it’s not easy to keep track of everyone doing institutionally-oriented work.
Electoral reform is one of the clearest instances of implementation related to IIDM. The Center for Election Science got approval voting adopted by Fargo, ND and St. Louis, MO, and is optimistic about passage in Seattle, WA.
Recent policy work in the U.S. has had an institutional focus and may produce favorable outcomes. Work on science reform, immigration reform, and pandemic preparedness funding (e.g. of BARDA) from IFP, for example, could all be seen as related to IIDM.
It may also be too early to expect successes from some of the known IIDM initiatives. Work on the UN, the UK Future Generations Bill, or the U.S. State Department and NSC (which my organization, fp21, focuses on) is underway. All of these initiatives are very recent—starting in 2020 or later.
5. On funding: We have the most data on this, so I’ll lay it out in some detail.
The most complete picture of the state of investment in IIDM (and EA in general) is fairly outdated. Data compiled by 80,000 Hours from 2020 and presented by Ben Todd at EA Global London in 2021 showed current investment is not meeting leadership allocation goals:
In the 2020 Leaders Forum survey, the respondents were explicitly asked how much they thought we should allocate to “Broad longtermist work (that aims to reduce risk factors or cause other positive trajectory changes, such as improving institutional decision-making)”. (See our list of potential highest priorities for more on what could be in this bucket.)
The median answer was 10%, with an interquartile range of 5% to 14%.
However, as far as I can tell, there is almost no funding for this area currently, since Open Philanthropy doesn’t fund it, and I’m not aware of any other EA donors giving more than $1 million per year.
Todd goes on to write:
There are some people aiming to work on improving institutional decision making and reducing great power conflict, but I estimate it’s under 100.
What’s tricky here is that the data pooled IIDM with “broad longtermism,” so the 10% figure likely overestimates the ideal allocation to IIDM specifically. But assuming Todd is right and there were fewer than 100 EAs working on IIDM and great power conflict combined in 2020, IIDM funding and talent are still far below the optimal allocation.
There could have been many reasons for this large gap between “ideal” and “actual” resource allocation. As Todd mentioned, there didn’t seem to be much high-level funding interest in IIDM, nor were there dedicated people investigating IIDM at senior levels in established philanthropic organizations like Open Philanthropy. Holden Karnofsky and Alexander Berger have both mentioned their skepticism towards broad longtermist work.
I reached out to 80,000 Hours, and there hasn’t been an update on these numbers since then. But we have some more recent data points to go off of:
The FTX Future Fund’s grant report seems to indicate renewed interest in IIDM. One of the Future Fund’s stated areas of interest is “epistemic institutions.” It has awarded $8 million to 21 entities in this category (6% of funds allocated). If we fold in “great power relations,” which might contain IIDM projects on government reform, FTX gets very close to the “ideal” allocation ratio suggested by the 2020 Leaders Forum survey. Lumping in space governance or economic growth projects might carry FTX’s IIDM allocation well over the 10% threshold.
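To make this comparison concrete, here is a rough back-of-the-envelope calculation (a sketch only; it takes the $8 million and 6% figures above at face value and treats the survey’s 10% median as the relevant benchmark):

```python
# Rough back-of-the-envelope on the Future Fund figures cited above.
# Assumptions: the $8M "epistemic institutions" total and the 6% share are
# taken at face value; the 10% Leaders Forum median is used as the benchmark.

epistemic_grants = 8_000_000   # USD awarded to "epistemic institutions"
epistemic_share = 0.06         # reported share of all funds allocated

implied_total = epistemic_grants / epistemic_share   # ~$133M allocated overall
ideal_share = 0.10                                   # 2020 Leaders Forum median
ideal_dollars = ideal_share * implied_total          # ~$13.3M

# Dollars that adjacent categories (e.g. "great power relations") would need
# to contribute for the allocation to reach the 10% benchmark.
gap = ideal_dollars - epistemic_grants               # ~$5.3M

print(f"Implied total allocation: ${implied_total:,.0f}")
print(f"10% benchmark of that total: ${ideal_dollars:,.0f}")
print(f"Remaining gap for adjacent categories: ${gap:,.0f}")
```

On these assumptions, the “epistemic institutions” grants alone fall roughly $5 million short of the 10% benchmark, which is why folding in adjacent categories matters for the comparison.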
But it’s important to take into account that the Future Fund is most excited about fairly specific aspects of “epistemic institutions”: its website says it is looking to fund more forecasting, expert-opinion aggregation, and more rigorous news. My impression is that it would be a mistake to read this as an interest in, or endorsement of, IIDM as a whole (see Sempere’s Venn diagram). Still, funding in other areas that include IIDM-like projects probably adds significantly to the existing funding pool.
Guarding Against Pandemics and Protect Our Future have become large players in politics. Aspects of their work have an institutional bent. The push for more biosecurity funding, for example, looks quite a lot like some of the early GHB work to influence government spending. In any case, some of the more than $20 million spent in the first part of 2022 alone is likely going towards institution-improving initiatives.
Jaan Tallinn has also expressed interest in “2nd degree” interventions, including “improving governance.” The Long-Term Future Fund says “we welcome applications related to long-term institutional reform,” though it tends to fund comparatively few policy initiatives, as pointed out here. The EA Infrastructure Fund has supported EIP’s work and my own, though its grant-makers have expressed high uncertainty about the cause area.
To consider the state of IIDM funding as a whole: on the one hand, there seems to be significant funding available for IIDM projects, and funders solicit institution-focused grants. On the other hand, no single funder explicitly endorses “IIDM”—as presently theorized in either rationality- or institutionally-focused circles—and each has its own view of which meta-level institutional projects fall within scope.
There are a few theories that might explain the lack of excitement about IIDM—as well as any discrepancies between funding goals and actual funding:
- The Leaders Forum survey may simply not reflect actual funding preferences.
- IIDM may be seen as undertheorized or underexplored and therefore inspire caution around large grants.
- There are still significant concerns about the foundations of IIDM and the downside risks associated with failures (as discussed in the EAIF write-up linked above).
- There may be a lack of talent and high quality projects in the area.
- Other similar efforts, like direct political engagement, might appear more effective.
Concluding Thoughts
The core lesson from this history is that EA has drawn upon two somewhat separate bodies of work (on institutions and on decision-making) in its engagement with institutions. While the 80,000 Hours problem profile that defined IIDM derives much of its analysis from the world of group decision-making, EAs have also taken a more generalized approach to institutions, seeking to influence how governments spend their funds or pushing for more systemic reforms.
As I will argue in forthcoming pieces, I believe the current state of IIDM underappreciates a wide variety of ways institutions might be improved:
- Talent recruitment, promotion, training
- Knowledge collection and management within institutions
- Internal communication and deliberation
- Value definition and adherence
- Internal prioritization of resources
- Monitoring and evaluation of outcomes
- Creating incentive structures that promote good results
- Organizational capacity
Institutional decision-making depends on all of these points. The best forecast isn’t helpful if nobody pays attention to it.
Furthermore, there is foundational “IIDM work” that still needs to be done, and whose answers are too often presupposed:
- Should IIDM be a cause area, or is it better suited as a career path or cause area-internal approach?
- What kinds of reforms actually result in better outcomes?
- Do widespread better epistemic practices translate into good results?
- When should we push for epistemic improvements vs. other kinds of institutional reforms?
- How should we think about value change/alignment?
- Under what conditions is it good for non-value aligned organizations to have accurate beliefs and forecasts?
- If organizations have bad aims, should we seek to worsen their decision-making?
- How can we measure success in IIDM?
- How large are the downside risks of IIDM?
In Part 2, I’ll get into the rationale and theory of IIDM and discuss its downsides.
This is a really excellent summary/state of play for IIDM. Thanks for writing this up!
Thank you for this post, looking forward to the other parts of the series! I enjoy this format of explaining how EA came to care about a specific cause area and what shaped the current understanding of the topic. I'd be interested in more "history of [cause area / some aspect of EA culture]" posts.
"If organizations have bad aims, should we seek to worsen their decision-making?"
That depends on the concrete case you have in mind. Consider the case of supplying your enemy with false but seemingly credible information during a war. This is a case where you actively try to worsen their decision-making. But even in a war there may be some information you want the enemy to have (for example, the location of a hospital that should not be targeted). In general, you do not just want to “worsen” an opponent’s decision-making, but to influence it in a direction that is favorable from your own point of view.
Conversely, if a decision-maker is only somewhat biased from your point of view and has to decide based on uncertain information, you may want her to understand that information precisely, because the randomness introduced by misinterpretation cuts both ways: a misreading of the situation might happen to push her choice in your favor, but it is often much worse if it pushes the decision further in the other direction.