TL;DR:
I think EAs are advocating for more technocratic decision-making without having formally thought about whether that is desirable.
This makes it an urgent priority for EAs to answer the following questions:
- What balance should policymakers strike between technocracy and populism generally?
- How, if at all, should this balance change when there is an extremely large difference between public and expert opinion?
- How, if at all, should this balance change when moral and/or empirical uncertainty is extremely high or extremely low?
- How, if at all, should this balance change when the difference in expected value between plausible policy options is extremely high or extremely low?
Definitions:
In this post, by 'populism' I mean 'policymakers making decisions based on public opinion'; I don't intend to attach negative connotations to 'populism'.
(I'd like to find a better word for this, because 'populism' is loosely defined and carries negative connotations. I considered using 'democracy', but its positive connotations seem too strong, whereas 'populism' is relatively more neutral (i.e., it seems more socially acceptable to argue for more populism than for less democracy). 'Democracy' also fails to convey the extreme end of the spectrum, where decisions are made purely on public opinion. If you can think of a better word, please suggest it.)
By 'technocracy' I mean 'policymakers making decisions based on expert opinion'.
Post:
We can think of decisions by policymakers as existing on a spectrum from extremely technocratic to extremely populist.
My view of EA has long been that it clearly advocates for more technocratic decision-making.
The clearest example of this is an investigation into reducing populism and promoting evidence-based policy.
Apart from this, here are some subjective impressions which contribute to my view:
- many EAs think governments should use more expert surveys in decision-making
- many EAs actively seek to use their expertise to lobby policymakers, but few are pursuing 'grassroots' approaches to policy change. (Note: simply being available when policymakers want expert opinion does not favour more technocratic decision-making, but actively seeking to influence policymakers does.)
Off the top of my head, these are the three main, broad downsides to more technocratic decision-making:
- More populist decision-making reduces the risks involved in acting under moral and empirical uncertainty, by incorporating a wider range of moral theories and opinions into decision-making. So, more technocratic decision-making increases these risks.
- More populist decision-making reduces the risk of experts intentionally or unintentionally advocating, out of self-interest, for policies that are unethical under a wide range of moral theories. So, more technocratic decision-making increases this risk. (This risk is greater than the risk of >50% of the public advocating for unethical policies out of self-interest because, in expectation, unethical policies in the self-interest of >50% of the public would benefit more people than unethical policies in the self-interest of a small class of experts.)
- Some people (including a past version of me) consider 'more democratic' decision-making to be inherently good, regardless of outcomes, because they see everyone having more equal political power as an end in itself. So, more technocratic decision-making goes against this.
And this is the main, broad downside to more populist decision-making:
- Public opinion on moral theories and empirical evidence is less likely to be correct and rational than expert opinion, so decisions based on public opinion should, under certain moral theories, have lower expected value on average. So, more populist decision-making reduces expected value on average.
Having searched the EA Forum for the terms 'technocracy' and 'technocratic', I find they come up less often than I think they should for a movement that advocates a move in this direction.
I also think some of the arguments in the 'decentralising risk' paper, and the responses to it on the EA Forum, are debates about the extent to which decision-making should be more 'technocratic' or more 'populist', yet the term 'technocratic', and the idea of a technocracy-populism spectrum, appear neither in the paper nor in the responses.
Two quotes from the paper:
"Representativeness itself says nothing about whether its philosophical pillars
are wrong or right, but it is risky to rely exclusively on one unrepresentative approach given moral, political and empirical uncertainty."
"Tying the study of a topic that fundamentally affects the whole of humanity to a niche belief system championed mainly by an unrepresentative, powerful minority of the world is undemocratic and philosophically tenuous."
In my opinion, one idea (amongst others) that these sentences get across is:
"There is unusually high moral and empirical uncertainty in decision-making for mitigating existential risk, so the risks of a highly technocratic, and less populist approach are unusually high."
The points of this post are:
- "Technocracy" and "populism" are being debated in the context of the 'decentralising risk' paper without being named. Naming these approaches, and viewing them as existing on a spectrum, will help clarify arguments.
- The fact that 'technocracy' is named so infrequently by EAs may be a sign that many are advocating for more technocracy without realising it, or without realising that the term exists along with pre-existing criticism of the idea. Rob Reich apparently raised the possibility that EAs might prefer technocracy to democracy a long time ago.
- I think it is an urgent priority for EAs to investigate four important questions with regard to institutional decision-making:
- What balance should policymakers strike between 'technocracy' and 'populism' generally?
- How, if at all, should this balance change when there is an extremely large difference between public and expert opinion?
- How, if at all, should this balance change when moral and/or empirical uncertainty is extremely high or extremely low?
- How, if at all, should this balance change when the difference in expected value between plausible policy options is extremely high or extremely low?
This is an urgent priority because many EAs are already acting on the assumption that the best answer to all four questions is "more technocracy than the status quo", without having carefully considered the questions using evidence and reasoning.
In at least one particular case (AI safety), a somewhat deliberate decision was made to deemphasize this concern, because of a belief not only that it is not the most important concern, but that focusing on it actively harms concerns that are more important.
For example, Eliezer Yudkowsky (who pioneered the argument for worrying about accident risk from advanced AI) contends that the founding of OpenAI was an instance of this. In his telling, DeepMind previously had a quasi-monopoly on the capacity to make progress towards transformative AI, because no other well-resourced actors were working seriously on the problem. This allowed DeepMind to maintain a careful safety culture and to serve as a coordination point, so that safety-conscious AI researchers around the world could work towards the common goal of not deploying something dangerous. Elon Musk was dissatisfied with the amount of moral hazard this exposed DeepMind CEO Demis Hassabis to, so he founded a competing organization with the explicit goal of eliminating moral hazard from advanced AI by giving control of it to everyone (as reflected in its name, though the organization later pivoted away from this goal around the time Musk stopped being involved). This forced both organizations to put more emphasis on development speed, lest the other build transformative AI first and do something bad with it, and it encouraged other actors to do likewise by destroying the coordination point. The result is a race to the precipice [PDF], in which everyone has to compromise on safety, making accident risk dramatically more likely.
More generally, politics is fun to argue about and people like to look for villains, so there's a risk that emphasis on person-vs.-person conflicts sucks up all the oxygen and accident risk doesn't get addressed. This is applicable more broadly than just AI safety, and is at least an argument for being careful about certain flavors of discourse.
One prominent dissenter from this consensus is Andrew Critch from CHAI; the comments on his post contain some thoughtful debate about this question among EAs working on AI safety.
I'm not sure what to think about other kinds of policies that EA cares about; I can't think of very many off the top of my head that have large amounts of the kind of moral hazard that advanced AI has. This seems to me like another kind of question that has to be answered on a case-by-case basis.