TL;DR:
I think EAs are advocating for more technocratic decision-making without having formally thought about this.
This makes it an urgent priority for EAs to answer the following questions:
- What balance should policymakers strike between technocracy and populism generally?
- How, if at all, should this balance change when there is a large difference between public and expert opinion?
- How, if at all, should this balance change when moral and/or empirical uncertainty is extremely high or extremely low?
- How, if at all, should this balance change when the difference in expected value between plausible policy options is extremely high or extremely low?
Definitions:
In this post, by 'populism' I mean 'policymakers making decisions based on public opinion' and don't intend to attach negative connotations to 'populism'.
(I'd like to find a better word for this, because 'populism' is loosely defined and has negative connotations. I considered using the word 'democracy' here, but I think its positive connotations are too strong ('populism' seems relatively more neutral, i.e., it seems more socially acceptable to argue for more populism than for less democracy). I also think 'democracy' doesn't convey the extreme end of decisions being made on public opinion. If you can think of a better word, please suggest it.)
By technocracy, I mean 'policymakers making decisions based on expert opinion'.
Post:
We can think of decisions by policymakers as existing on a spectrum from extremely technocratic to extremely populist.
My view of EA has long been that it clearly advocates for more technocratic decision-making.
The clearest example of this is an investigation into reducing populism and promoting evidence-based policy.
Apart from this, here are some subjective impressions which contribute to my view:
- many EAs think governments should use more expert surveys in decision-making
- many EAs actively seek to use their expertise to lobby policymakers, but few are pursuing 'grassroots' approaches to policy change. (Note: simply being available when policymakers want expert opinion does not favour more technocratic decision-making, but actively seeking to influence policymakers does.)
Off the top of my head, these are the three main, broad downsides to more technocratic decision-making:
- More populist decision-making reduces the risks involved in acting under moral and empirical uncertainty, by incorporating a wider range of moral theories and opinions into decision making. So, more technocratic decision-making increases the risks.
- More populist decision-making reduces the risk of experts intentionally or unintentionally advocating for unethical policies (under a wide range of moral theories) out of self-interest. So, more technocratic decision-making increases this risk. (This is a greater risk than the risk of >50% of the public advocating for unethical policies out of self-interest, because, in expectation, unethical policies serving the self-interest of >50% of the public would benefit more people than unethical policies serving the self-interest of experts.)
- Some people (including a past version of me) consider 'more democratic' decision-making to be inherently good, regardless of outcomes, because they see everyone having more equal political power as an end in itself. So more technocratic decision-making goes against this.
And this is the main, broad downside to more populist decision-making:
- Public opinion on moral theories and empirical evidence is less likely to be correct and rational than expert opinion, so decisions based on public opinion should, under certain moral theories, have lower expected value on average. So, more populist decision-making reduces expected value on average.
Having searched the EA Forum for the terms 'technocracy' and 'technocratic', I found that they come up less often than I would expect for a movement that advocates a move in this direction.
I also think some of the arguments in the 'decentralising risk' paper, and the responses to it on the EA Forum, are debates about the extent to which decision-making should be more 'technocratic' or more 'populist', yet the term 'technocratic', and the idea of a technocracy-populism spectrum, appear neither in the paper nor in the responses.
Two quotes from the paper:
"Representativeness itself says nothing about whether its philosophical pillars are wrong or right, but it is risky to rely exclusively on one unrepresentative approach given moral, political and empirical uncertainty."
"Tying the study of a topic that fundamentally affects the whole of humanity to a niche belief system championed mainly by an unrepresentative, powerful minority of the world is undemocratic and philosophically tenuous."
In my opinion, one idea (amongst others) that these sentences get across is:
"There is unusually high moral and empirical uncertainty in decision-making for mitigating existential risk, so the risks of a highly technocratic, and less populist approach are unusually high."
The point of this post is:
- "Technocracy" and "populism" are being debated in the context of the 'decentralising risk' paper without being named. Naming these approaches, and viewing them as existing on a spectrum, will help clarify arguments.
- The fact that 'technocracy' gets named so infrequently by EAs may be a sign that many are advocating for more technocracy without realising it, or without realising that the term exists along with pre-existing criticism of the idea. Apparently Rob Reich raised the idea that EAs might prefer technocracy to democracy a long time ago.
- I think it is an urgent priority for EAs to investigate four important questions with regard to institutional decision-making:
- What balance should policymakers strike between 'technocracy' and 'populism' generally?
- How, if at all, should this balance change when there is an extremely large difference between public and expert opinion?
- How, if at all, should this balance change when moral and/or empirical uncertainty is extremely high or extremely low?
- How, if at all, should this balance change when the difference in expected value between plausible policy options is extremely high or extremely low?
This is an urgent priority because many EAs are already taking action on the assumption that the best answer to all four questions is "more technocracy than the status quo", without having carefully considered the questions using evidence and reasoning.
First of all, thanks for this post. The previous post on this topic (full disclosure: I haven't yet managed to read the paper in detail) poisoned the discourse pretty badly by being largely concerned with meta-debate and by throwing out associations between the authors' dispreferred policy views and various unsavory-sounding concepts. I was worried that this meant nobody would try to address these questions in a constructive manner, and I'm glad someone has.
I also agree that there's been a bit of unreflectiveness in the adoption of a technocratic-by-default baseline assumption in EA. I was mostly a populist pre-EA and gradually became a technocrat because the people around me who shared my values were technocrats; for the most part, I don't think this was attributable to anyone convincing me that my previous viewpoint was wrong. (By contrast, while social effects/frog-boiling were probably important in eroding my resistance to adopting EA views on AI safety, the reason I was considering such views in the first place was that I had read arguments for them that I couldn't refute.) I'm guessing this has happened to other people too. This is probably worrying, and I don't think it applies only to this issue.
That said, I didn't know what to actually do about any of this, and after reading this post, I still don't. I think my biggest disagreement is that I don't think the concept of "technocracy" is actually very helpful, even if it's pointing at a real cluster of things.
I'm reading you as advocating that your four key questions be treated as crucial considerations for EA. I don't think this is going to work, because these questions do not actually have general answers. Reality is underpowered. Social science is nowhere near being capable of providing fully-general answers to questions this huge. I don't think it's even capable of providing good heuristics, because this kind of question is what's left after all known-good heuristics have already been taken into account; that's why it keeps coming up again and again. There is just no avoiding addressing these questions on a case-by-case basis for each individual policy that comes up.
One might argue that the concept of "technocracy" is nevertheless useful for reminding people that they need to actually consider this vague cluster of potential risks and downsides when formulating or making the case for a policy, instead of just forgetting about them. My objection here is that, as far as I can tell, EAs already do this. (To give just one example, Eliezer Yudkowsky has explicitly written about moral hazard in AGI development.) If this doesn't change our minds, it's because we think all the alternatives are worse even after accounting for these risks. You can make an argument that we got the assessment wrong, but again, I think it has to be grounded in specifics.
If we don't routinely use the word "technocracy", then maybe that's just because the word tends to mean a lot of different things to a lot of different people; you've adopted a particular convention in this post, but it's far from universal. Even if the meanings are related, they're not precise, and EAs value precision in writing. Routinely describing proposed policies as "populist" or "technocratic" seems likely to result in frequent misunderstandings.
Finally, since it sounds like there are concerns about lack of existing writing in the EAsphere about these questions, I'd like to link some good ones: