Typically, there are five major subfields within political science (as practiced in the US):

1) American Politics

2) Comparative Politics

3) International Relations

4) Political Theory

5) Quantitative Methods

Within quantitative methods, there are two major subdivisions:

  1. Statistical Methods
  2. Formal Modeling

Statistical Methods: This primarily covers applications of probability, statistics, survey research, machine learning, and causal inference to political science. For the most part, these tools are applied quite well within the discipline. My only minor suggestion is that graduate students receive more training on the theoretical underpinnings of probability, statistics, machine learning, and causal inference. There is already a lot of excellent training for political science graduate students on a) what is in the current probability/statistics/survey-research/machine-learning/causal-inference toolbox, and b) when and how to deploy the different tools in that toolbox. This applied training is more important for political science than theoretical training, because political scientists are not typically expected to develop new tools for the toolbox. Still, more grounding in theoretical foundations would be desirable: as the tools in the toolbox change, that grounding would allow political scientists to adapt over the decades of their careers. Here are some books which I think would greatly improve this training:

  1. Causality: Models, Reasoning, and Inference by Judea Pearl   
  2. Foundations of Modern Probability by Olav Kallenberg
  3. Mathematical Statistics: Volumes 1 & 2 by Peter Bickel and Kjell Doksum
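
To make the kind of theoretical grounding I have in mind a bit more concrete, here is a minimal sketch (my own toy example with simulated data and made-up variable names, not drawn from any of the books above) of the backdoor-adjustment idea that Pearl's framework formalizes: a confounder biases the naive regression of outcome on treatment, and conditioning on a valid backdoor adjustment set recovers the true effect.

```python
# A minimal sketch (hypothetical data) of Pearl-style backdoor adjustment:
# Z -> X, Z -> Y, X -> Y.  The naive regression of Y on X is confounded by Z;
# conditioning on Z (a valid backdoor adjustment set) recovers the true effect.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
z = rng.normal(size=n)                      # confounder (e.g., party ID)
x = 0.8 * z + rng.normal(size=n)            # "treatment" (e.g., media exposure)
y = 2.0 * x + 1.5 * z + rng.normal(size=n)  # outcome; true causal effect of x is 2.0

# Naive estimate: regress y on x alone (biased by the open backdoor path through z).
naive = np.polyfit(x, y, 1)[0]

# Adjusted estimate: regress y on x and z jointly (backdoor path blocked).
X = np.column_stack([x, z, np.ones(n)])
adjusted = np.linalg.lstsq(X, y, rcond=None)[0][0]

print(f"naive slope:    {naive:.2f}")     # ~2.7, biased
print(f"adjusted slope: {adjusted:.2f}")  # ~2.0, close to the true effect
```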

Formal Modeling: This covers quantitative methods such as game theory, simulations, expected utility theory, fair division, social choice, and mechanism design. EA could draw a lot of insight from political science work in these areas. Here are a few of them:

  • Approximation Algorithms for Finding Nash Equilibria in Social Games
    • Christos Papadimitriou and his colleagues published several papers about fifteen years ago on the computational intractability of finding Nash equilibria in real-world games. We can already see the basic point in games like chess and Go: DeepMind's systems can crush any human world champion not because they know optimal play, but because they use approximation algorithms that improve their play. The vast majority of social games in politics are far more complex than chess or Go, so even agents seeking to maximize expected utility will rely on approximation algorithms in real-world politics. Getting a better sense of which approximation algorithms agents actually use, and with what probabilities in particular cases, would significantly improve the predictive power of our models. This will entail greater use of algorithmic game theory in political science formal modelling. (A toy illustration of one such approximation algorithm appears after this list.)
  • Redistricting Processes
  • Rethinking Mechanism Design for Social Sciences
    • Roughly speaking, given that agents can misrepresent their beliefs or act strategically, mechanism design is the development of mechanisms that nonetheless produce desirable outcomes. For example, Vickrey auctions are a mechanism that prevents the overbidding caused by strategic bidding (a minimal numerical sketch appears after this list). Mechanism design in the social sciences has drawn heavily on cryptography, where great importance is placed on making it as computationally intractable as possible to break a code. In a similar vein, social-science mechanism design has emphasized making it as computationally intractable as possible to compute an optimal strategy; it is claimed, for example, that some group decision-making algorithms are superior to others because finding an optimal strategy under them is computationally intractable. But the goals of cryptography and of social processes can differ. In cryptography the game is typically all-or-nothing: either you break the code or you don't. In social games like group decision making, the goal is not to play optimally but to play well enough to beat your opponents. First a CS example, then a political one. Consider a mediocre chess player like myself teaming up with DeepMind-level resources to play the unaided human world champion, Magnus Carlsen. I would absolutely crush Carlsen, because even though those resources do not know optimal chess play, they know enough to beat any unaided human. Similarly in elections, computing optimal play under some election method may be intractable, yet a coalition of voters aided by Cambridge Analytica-level resources could still play well enough to crush competing coalitions that are at a comparative computational disadvantage. The point is that we need new formal metrics for the goals we might actually have. Saying that finding an optimal strategy under such-and-such a group decision-making mechanism is factorial-time complex in the worst case (a more formal way of saying optimal strategy is computationally intractable) does not by itself make that mechanism better than another. We need other kinds of metrics, useful for the social sciences, such as a formal version of "it is easy for all players to find an optimal strategy regardless of computational resources" (a way of saying that a well-trained chicken and DeepMind are on even ground competing in tic-tac-toe), or formal criteria for "any given level of near-optimal play is practically invariant to computational resources".
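
To make the algorithmic game theory point concrete, here is a toy sketch of one standard approximation algorithm, regret matching, run on matching pennies. This is my own illustrative example, not a reconstruction of anything in the Papadimitriou papers; the time-averaged strategies approach the game's mixed Nash equilibrium without ever computing it exactly.

```python
# A toy illustration of an approximation algorithm from algorithmic game theory:
# regret matching on matching pennies, a 2x2 zero-sum game.  The time-averaged
# strategies converge toward an approximate Nash equilibrium (50/50 here).
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])  # row player's payoffs; the column player gets -A

def strategy(cum_regret):
    """Play in proportion to positive cumulative regret; mix uniformly if there is none."""
    pos = np.maximum(cum_regret, 0.0)
    return pos / pos.sum() if pos.sum() > 0 else np.full(len(cum_regret), 1.0 / len(cum_regret))

regret_row, regret_col = np.zeros(2), np.zeros(2)
avg_row, avg_col = np.zeros(2), np.zeros(2)
T = 20_000

for _ in range(T):
    s_row, s_col = strategy(regret_row), strategy(regret_col)
    avg_row += s_row
    avg_col += s_col
    a_row = rng.choice(2, p=s_row)
    a_col = rng.choice(2, p=s_col)
    # Counterfactual regret: how much better each pure action would have done this round.
    regret_row += A[:, a_col] - A[a_row, a_col]
    regret_col += (-A[a_row, :]) - (-A[a_row, a_col])

print("row player's time-averaged strategy:   ", np.round(avg_row / T, 3))
print("column player's time-averaged strategy:", np.round(avg_col / T, 3))
# Both approach [0.5, 0.5], the unique (mixed) Nash equilibrium of matching pennies.
```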

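And here is a minimal numerical sketch of the Vickrey (second-price) auction mentioned in the last bullet. It is only a toy simulation with made-up values, but it shows the mechanism-design logic: truthful bidding is a (weakly) dominant strategy, so strategic over- or under-bidding does not pay.

```python
# A minimal sketch of a Vickrey (second-price, sealed-bid) auction: the winner pays
# the second-highest bid, which makes truthful bidding a dominant strategy.  We check
# numerically that a bidder never gains, on average, by bidding something other than
# her true value against randomly drawn rival bids.
import numpy as np

rng = np.random.default_rng(2)

def vickrey_utility(value, bid, rival_bids):
    """Utility in a second-price auction: value minus the highest rival bid if the
    bidder wins, zero otherwise (ties broken against her for simplicity)."""
    top_rival = max(rival_bids)
    return value - top_rival if bid > top_rival else 0.0

value = 10.0
truthful, deviations = [], []
for _ in range(10_000):
    rivals = rng.uniform(0, 20, size=3)
    truthful.append(vickrey_utility(value, value, rivals))
    # A hypothetical deviation: bid a random amount instead of the true value.
    deviations.append(vickrey_utility(value, rng.uniform(0, 20), rivals))

print(f"average utility, truthful bid: {np.mean(truthful):.3f}")
print(f"average utility, random bid:   {np.mean(deviations):.3f}")
# The truthful average is higher: misreporting only changes outcomes when it hurts.
```
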
Those are just some comments on quantitative methods in political science and EA.  I look forward to your comments. Thank you.

Comments (1)

Interesting post. Since my academic training is heavily in political science (plus stats and CS), I've thought about this topic some as well. A disclaimer: I engage with poli sci research pretty heavily through working in electoral politics and follow the broader discipline through friends who do other work, but I don't have a poli sci PhD and don't have a particular identity as a political scientist.

A general thought here is that this post is a little hard to engage with because you’re making two related claims at the same-ish time, and not providing particularly concrete suggested actions specifically related to EA. As I read you, the claims are:

  1. EAs could benefit from familiarity with the formal modeling literature in those 3 areas. It’d be helpful to have some sense of how you envision these being leveraged.
  2. Poli Sci programs (it seems especially PhDs, but I'm reading that in) could produce stronger quantitative researchers, better equipped to handle new developments in quant methods, by deepening engagement with (a) foundations of probability theory and (b) DAG-based causal inference. I'm not sure there's an EA-related claim here as written.

One thing I’m especially left wondering here is whether you have a specific claim about how relatively important engaging with these topics is, and for which parts of the EA community that’s true. For example, how much of a priority should engaging with the gerrymandering literature be, and for which EAs? Where does this fall in the hierarchy of things EAs could spend time learning about versus say, microeconomic quant tools? Hopefully that’s a helpful point in trying to flesh out the case you’re making here (I realize you posted this as “some thoughts”, and not “here is a deeply researched, group reviewed long form piece with deeply felt calls to action”.)

Moving on to discussing the specific points you make:

  1. On teaching Pearl more: I broadly agree this is a good idea. The most common educational background on my team at the senior level is a poli sci PhD, and I interview a decent number of political science PhDs. Many folks seem to know a little bit about Pearl's work and benefit from it, but were never taught DAGs deeply and formally. I think there are signs of this changing in some programs (I don't have the knowledge to make a discipline-level claim), with a move towards teaching potential-outcomes (PO) and DAG approaches jointly. I certainly benefited from being taught both together, but I got this in a stats department.
  2. On teaching more probability theory: I believe there are some programs where this is available either directly or through partnership with other departments, and I'm much less confident in a general claim like "all quant poli sci educational programs should teach more of this". The more a prospective student wants and expects to work on methods development, the more this should be emphasized, but my (uncertain) belief right now is that deeper education here is available to those who want it, and the discipline does a pretty good job of prioritizing what to teach the average student.
  3. On gerrymandering research: Your suggestion is roughly a "quiver" of more objective methods. My (non-expert) impression is that a number of such tools have already been proposed, even once you get past somewhat hamfisted solutions like shortest splitline that completely ignore the complex and competing demands legal precedent places on redistricting (a toy example of one ingredient, a compactness score, is sketched after this list). My impression is that these tools are already sufficient to be fairer and more objective than current practice, but that implementing them is a problem of political will and organizing (that's not to say there isn't promising research being done to improve the solutions). So the challenge to me is how EAs should choose to spend their time given this dilemma: it's not clear to me that getting improvements implemented in the US is particularly tractable at the moment, and thus I'd argue it is likely not suitable as a recommendation for broader work.
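
To make "more objective methods" concrete, here is a toy sketch of one standard compactness metric, the Polsby-Popper score (4π·Area/Perimeter²). This is just my own illustration; real redistricting tools (e.g., ensemble/MCMC samplers) combine many such criteria with the legal constraints mentioned above.

```python
# A minimal sketch of one standard "objective" redistricting metric: the
# Polsby-Popper compactness score, 4*pi*Area / Perimeter^2 (1.0 for a circle,
# near 0 for a sprawling, gerrymandered shape).
import math

def polsby_popper(vertices):
    """Compactness of a simple polygon given as a list of (x, y) vertices."""
    n = len(vertices)
    area = 0.0
    perimeter = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        area += x1 * y2 - x2 * y1          # shoelace formula
        perimeter += math.hypot(x2 - x1, y2 - y1)
    area = abs(area) / 2.0
    return 4.0 * math.pi * area / perimeter ** 2

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
sliver = [(0, 0), (10, 0), (10, 0.1), (0, 0.1)]   # long, thin "district"
print(f"square: {polsby_popper(square):.3f}")     # ~0.785
print(f"sliver: {polsby_popper(sliver):.3f}")     # ~0.031
```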

To clearly caveat with my level of knowledge here, my undergraduate thesis was on why fixing gerrymandering is harder than proposing good algorithms, and I learned quite a bit after that from seeing researchers speak at the MaDS seminar series while I was in grad school at NYU. So I have a decent impression, but you may well know more and have a good basis to disagree.

I’m completely unequipped to respond on the other formal methods ideas you propose, but looping back to my broader response to this post, it would be beneficial to have more concrete applications of these ideas for EA, as well as a discussion of how they rank among the priorities of things we could learn.

This is a pretty long response already, so I’ll end by saying that this is definitely a topic I’d be interested in discussing more.

For example, I could envision trying to seek out specific EA problems that could benefit from recent hot topics in quant poli sci, like conjoint experiments (to name one example). Separately, and this is more an intersection of my background (political practitioner) and quant poli sci, I’ve been pondering whether it’s a good use of time to produce general educational materials on campaigning effectively and how elections are won; it seems many EAs fall prey to the common misconceptions that well-educated but not politically experienced people typically fall into. To the extent there are folks who might try something like another Flynn campaign or try to give effectively in influencing the 2024 cycle, there seem to be some easy wins in providing better mental models.
