In general, organizational growth and management is a hard problem. Most early startups fail, most of those that find initial funding and partial success also fail, and most of the organizations that do succeed find that they lose many of the critical features of the early organization that made it special. Some of the lessons from the general case seem worth reviewing, to see how they apply to effective altruism organizations.
Epistemic status: Exploration and literature review of a topic I understand fairly well, and which formed a significant part of my graduate coursework. The background is based on fairly well-understood frameworks that have been helpful in many areas. The specific analyses and recommendations are more speculative, and are primarily intended to invite consideration and spur discussion. I have discussed many of these ideas in the past with various individuals involved in EA, but thought they might benefit from a more public explanation.
Caveats and Special Considerations
Before discussing what the general literature has to say, we note ways in which Effective Altruism differs from the typical context for this research. The current context is unusual in several ways.
1) Effective Altruism organizations are suddenly finding themselves much less funding-constrained. When businesses have relaxed funding constraints, it is because they either have revenue themselves, or because venture capital or similar markets have found a clear path toward revenue or success otherwise defined. EA organizations are instead finding funders that view the problems they are addressing as critical and worth funding, and those funders are often asking the organizations themselves what the best path forward is.
2) Early stage startups usually explore fairly widely until they find a path to commit to. EA organizations are exploring and exploiting simultaneously, and the usual strategy of building narrow capacity to execute an agenda is difficult when the agenda itself is malleable. When commercial organizations grow, there is at least some set of clear short term metrics for success - user growth, sales, profitability - that do not exist for nonprofits. Nonprofits, on the other hand, typically have narrowly defined roles, where they are expected to perform a given type of task. Cancer research organizations are expected to find the most promising research areas in cancer research, political advocacy organizations are expected to promote the policies they espouse, and so on.
3) Many Effective Altruism organizations are unusually values-definition-fragile. Unlike clear and immediate metrics like DALYs/Dollar, the metrics being considered for many focus areas (like long-term risks, or animal suffering) are hard to measure objectively in the short term, so subjective understanding and expectations are critical for evaluating not only plans, but also success.
4) Growing organizations often have cultures dictated in large part by the culture of the workers that join or lead the organization. (Consider the US's formation of the DHS: organizations like FEMA had their cultures changed in part to match the way the intelligence community works, organizations like the FBI were pushed to be less about law enforcement and more about national security and counter-terrorism, etc.) If an EA organization hires academics, the culture of freedom of exploration and expression may help, but the culture of publishing results and chasing citations may not. If they hire policy analysts, the rigor and methods may be helpful, but typical concerns about the defensibility and clarity of analyses (rather than allowance for uncertainties and subjective values), along with concerns about political viability or the image created by reaching certain conclusions, may distort analyses.
5) Typical organizations need to find people who share their vision and are motivated, or they need to create alignment via clear incentives. Effective Altruism has the benefit of a vision, shared by many prospective employees, that goes beyond the operational or task focus. This makes values drift somewhat less problematic, though it does not solve the problem.
Values Drift and Scaling
When organizations grow, it is unusual for them to maintain original values. This occurs for several reasons, some of which are more relevant than others. Some less relevant reasons are profit motive, pressure from funders or public shareholders, and initial lack of clarity about goals. Some more relevant reasons are scaling and hierarchy, organizational types, tasks and pressures, and working with or bringing in experts or workers with different values who may have adapted to or default to different organizational styles and culture.
Scaling, hierarchy, and increasing bureaucracy
I have made a long-form argument here on Ribbonfarm about why scaling requires bureaucracy. The argument as stated there doesn't quite apply to EA organizations, because to whatever extent value alignment precedes employment and incentives coincide, there is little need for the same type of structure. At the same time, coordination is particularly important in realms with "unilateralist's curse" issues, so a similar structure is needed anyway.
There are advantages and disadvantages to this type of rigidity and structure. Returns to scale make somewhat larger organizations cheaper to run, since overhead can be spread out. At a certain point, however, the needed overhead can grow beyond the point of positive returns to scaling, and where that point falls depends heavily on context and cost structures. In domains like manufacturing, costs often scale sub-linearly until some capacity is reached. Beyond that point, management overhead may grow much more quickly, for instance when managing multiple sites with different needs. (On the other hand, operating multiple factories with supply chains managed in an integrated fashion across them may or may not provide additional scaling advantages.)
It is very possible for independent teams to work on loosely coupled goals. For example, it may be useful for anti-malaria bed net distribution in different regions to be run by different organizations that informally collaborate and share lessons learned, instead of having a single organization trying to manage the program everywhere, thereby tightly coupling success and failure. (I am not sufficiently familiar with this area to actually recommend it as a practice, but it's potentially a conceptually useful example nonetheless.)
Organizational types and needs
It is useful to better understand how and why organizational cultures differ, and one key factor is the type of work done. James Q. Wilson (more famous for "broken windows theory") has a four-part typology of organizations based on whether their work and their outputs are observable. Each type requires a different organizational culture to enable its mission. Production organizations have (easily) observable work and (easily) observable outputs. His example is the Social Security Administration, with the simple task of sending checks to everyone qualified. Most factories and other industrial organizations follow a similar mold. Procedural organizations have observable tasks, but the outcomes are not observable. Wilson notes that the military during peacetime has set tasks, but until there is a war, it's unclear whether tasks like training or equipment procurement were done well. Craft organizations have observable outputs, but producing them requires complex skills and complex, unique tasks. Examples include many aid organizations, or disaster response. Lastly, there are Coping organizations, with unclear tasks and unclear results. This describes most central government and policy organizations, and these challenges are particularly relevant to building better governance structures and reducing corruption.
While this is a simplistic categorization, it is helpful to see that across the above list (production, procedural, craft, and coping organizations) there is an increasing degree of trust and responsibility given to workers, and an increasing need for value alignment in place of strict control and procedures.
Types of Organizational Culture
Quinn and Cameron, based on their Organizational Culture Assessment Instrument, have posited that there are four central types of organizational culture, which organizations mix and combine. The axes relate to, first, whether the organization is stable and controls work, or is flexible and allows discretion (closely related to Wilson's two-factor spectrum), and second, whether the focus is internal and integrated, or external and differentiated. Flexible, high-discretion organizations with internal focus are clan-like, while those with external focus are "adhocracies," with high degrees of dynamism and risk-taking. Stable, low-discretion organizations with internal focus are hierarchies, which are highly structured, efficient, and controlled, while those that are differentiated, with an external focus, are market organizations: results-oriented, achievement-focused, and (even internally) competitive.
Organizations usually have a dominant culture, but different groups within the organization often have their own sub-cultures. It is useful both to identify these cultures and to understand why they occur - but this isn't only a function of overall organizational needs and goals. Operator-specific situational imperatives and the needs of individual tasks matter greatly.
Operator-specific pressures and culture
James Q. Wilson introduced a now-central tenet of organization theory: the near-term context and "situational imperatives" of work create pressure on operators, which in turn dictates or creates pressure on the culture. ("Operators" is often used to refer only to the lowest-level workers, but the idea applies in slightly different ways at all levels.) For example, in international aid, workers are in general idealistic, and strict rules are often imposed. Despite this, if aid workers are under significant psychological stress and time pressure, spend extended periods of time away from family, friends, and other opportunities for socialization, and have little time or opportunity for organic social lives, they are likely to find problematic outlets. Aid organizations are often craft organizations, as mentioned above, but because of reputation and size, they often try to impose a greater degree of stability and lower flexibility than the work allows. Rules like these are ineffective, because they don't change the near-term context.
Hierarchical control is minimally useful for changing situational imperatives, but different organizational styles can be used to mitigate these problems. For example, if the near-term work of EA employees is conducive to bad epistemic habits, such as needing to be involved in political discourse, it may be difficult to ensure that the needs of political diplomacy do not compromise the need for epistemic clarity. At the same time, it seems clear that large-enough organizations will inevitably need more hierarchical control, allow less discretion, and be less innovative.
Now that I've covered a bit of the theory, I want to focus on what I think it means for Effective Altruism-focused organizations.
Balancing size/influence and flexibility
A central challenge I see facing EA organizations that are more involved in policy, governance, and related areas is managing the balance between large-scale organizations' need for stability, predictability, and control, and the non-observability of their work and the difficulty of detailed process control. Wilson notes that professionalism is a partial check on these problems, but in the decades since his observations it has become increasingly obvious that professionalism is often accompanied by inflexibility, regulatory capture, and situational imperatives that promote control over flexibility.
A case that illustrates this is the FDA, where the evaluation of drugs leads to treating the organization as a procedural organization. The focus on following procedures minimizes the risk of blame, but when combined with efficiency pressure and limited budgets, it also hinders proper investigation of potential issues, leading to unhelpful conservatism in decisions. This is because the situational imperative is to finish the task and to ensure nothing non-standard is done. It also means that pharmaceutical and device manufacturers must jump through many unnecessary hoops. On the other hand, relaxation of the rules leads to even further regulatory capture. The balance between procedural strictness and discretion suffers, and this is worsened by the management and political incentives to be overly conservative compared to optimal risk-taking behavior. (Similar dynamics apply to many social service organizations, where central control makes denying benefits easier than approving them, but discretion leads to excessive influence of clients, overly generous application of rules, and even corruption.)
The epistemic culture around EA organizations, and the pressure to "Keep EA Weird," may be helpful in reducing values drift and the problems of professionalism. At the same time, these reinforce some of the reasons that nonprofessional organizations fail: non-observable outcomes + no process control = bad outcomes.
Coordination and the Unilateralist's Curse
EAs involved in policy work, or work related to risks and long-term outcomes, often need a higher degree of coordination within their own work, and with other organizations, to accomplish goals efficiently or safely. The conflict between organizational incentives, organizational cultures, and effective achievement of goals will be particularly challenging in this context.
Several specific issues exist:
- Duplication of work is very, very common between organizations with related tasks and goals. To the extent that this can be reduced, the organizations will all benefit.
- Intellectual competition between these groups can be somewhat helpful, so it is useful for them to work independently, but avoidable and unhelpful conflict will likely result.
- Coordination is expensive in terms of time and overhead, and the types of hierarchies that reduce this burden are contentious even within individual organizations; similar structures between organizations are very challenging.
- Loose coupling of work in these areas will lead to additional organizations, projects, and individuals attempting to contribute ideas and be helpful on their own. These are likely to lead to values drift or misalignment, and to unilateralist's curse issues.
- Growing prestige will lead to a greater likelihood of badly conceived, likely harmful projects.
- Clarity about goals and definitions will suffer from broader interest. (Examples: AI safety has been partially co-opted to mean reducing accidents of self-driving cars, and value alignment has been partially co-opted to discuss which group of people self-driving cars should prefer to hit. Terms like existential risk are sometimes used to refer to local disasters.)
- Note: It is near-impossible to coerce cooperation, and attempting to do so leads to its own set of conflicts. This needs to be handled carefully.
Some of these may be obvious, and I am aware of some effort in the directions suggested here. It may be useful to have these made explicit, and they are conceptually important for reasons mentioned above, so I am listing them as suggestions.
- Central groups to coordinate and discuss issues. When Oxford's FHI, Cambridge's CSER, Boston-based FLI, and Open Philanthropy are all involved in public discussion of a given area, coordination becomes difficult without burdensome overhead.
- Organizations or representational groups dedicated explicitly to communication and coordination may be useful.
- Agreement among actors to inform others about planned projects, and to listen to dissenting opinions about the risks and benefits of such projects. (A structure like the above for this is probably useful.)
- This might be best accomplished with multi-stakeholder coordination/collaboration methods. (There is another full post that can/should be written on this topic; I have a meandering Ribbonfarm post that discusses it here.)
- In both AI risk and in biosecurity, other organizations have additional incentives that only partially overlap with EA goals. Government and private sector actors are unlikely to explicitly coordinate, but wherever possible they should be approached for input and treated as valuable stakeholders, not adversaries. (Perceptions matter, and care is needed. As discussed above, this may be bad for epistemic health and other organizational culture issues.)
- More joint planning exercises and events such as the Asilomar Conference for AI, or the recent FHI/NTI/JHCHS meeting for biorisk policies, are useful venues for building informal networks or for starting more formal coordination bodies.
Helping independent researchers
- Clear onramps and independent project ideas for aspiring workers and independent researchers, to minimize wasted effort and encourage non-hazardous independent research. (MIRI's recent fixed point exercise series by Scott Garrabrant, and OpenAI's request for research list are potentially good examples. CHS's ELBI fellowship and the Nextgen Health Security group may be similarly useful for Biosecurity work.)
Coordination of Research
- In EA research, the current split of projects is in part dictated by the specific researchers, and projects are pursued based on each program's overall aims. This may be sub-optimal. When projects are suggested, it might be better to have them delegated or suggested to the organizations with the most appropriate structure and culture.
- Spectrum of respectability/different strengths: some organizations are better placed to innovate or suggest less-well-vetted ideas than others. (Note above suggestions for coordination, which may be particularly valuable in this context.) Other organizations are better placed to make more respected recommendations, or to interface with policymakers and governments. Allowing organizations to stake-out clear positions on this spectrum is valuable, since it likely reduces organizational culture conflict.
- Example: Open Philanthropy wants to be able to fund high-risk projects, but also to encourage more policy-relevant work. Structuring giving into "wild bets" versus main funding may be helpful. Some of the "wild bets" can potentially be delegated, along the model suggested by the Open Philanthropy / CEA discretionary fund grant, and Tyler Cowen's Emergent Ventures.