Should EAs be fighting for better political systems and better policy making? For governance where the decision makers, at minimum, are incentivised to act in the best long-term interest of the population?
 

At first glance this could be super important.

  • If you think policy makers should be putting in place policies that are actually good for the population,

  • If you think governments should care about long term risks to humanity,

  • If you think more prioritisation research should be done into how best to use resources, [1]

  • If you think that we need to see significant systemic changes,

then trying to improve how governments work could leverage huge amounts of action and resources going towards these goals.

 

But what does creating better political systems actually look like? Is this even feasible? Is this really a high impact use of time?

 

 

The key features of better politics and policy 

Most of the basic features of a good political system are in place in most developed countries. We mostly have democracy, universal suffrage, peaceful transfers of power, an independent rule of law, term limits, free speech, free press, no excessive education controls, limits on governments' powers, and so on. (In fact, those of us living in countries where the above applies should be pretty damn grateful to have those basic features - it really could be so much worse.)

 

That said, if we want to upgrade from the basic package there are a few other features of a well designed political system it would be really nice to have. Such as:

  1. Honest leaders and campaigns.

  2. Minimise political tail-end risks. Prevent the rise of Hitler-type figures who can turn a democracy into a monstrous dictatorship of suffering, or of individuals who may be trigger-happy on pressing the big red doom-the-earth button.

  3. Leaders who care about the long-term future of humanity and will deal with issues that need international cooperation (eg. climate change) and other risks (eg. AI).

  4. No tyranny of the majority situations (eg. banning harmless religious actions).

  5. Disincentivise politicians from rhetoric or actions that will help them win elections but are not in the best interest of the population (eg. a policy that will benefit voters in swing locations but harm other voters).

  6. The use of evidence in policy making and the implementation of policies that actually work to achieve the stated goals.

  7. Allow limited low-risk improvements to the system (such as improving voting systems).

 

 

Initial research on what should change and how to create change

But those all sound impossible. Can we actually create a system that will deliver all of the above?

 

Unfortunately I am going to give the usual EA cop-out answer: we should do more research in this area.

 

We should research:

  • Prioritisation questions: what is most important about a working political system (this could mean adding to and ordering the list above)? Which countries are the most important and tractable to focus time and effort on?

  • Good political systems: what are the practical things that could be implemented in a democratic system to provide the bonus features listed above?

  • Implementation tactics: what tactics are needed to create change that leads to good political systems?

 

Luckily for anyone interested I’ll give you a kickstart by sharing my (not entirely uninformed) opinions on the above. What more could you need?

 

Prioritisation questions: I think the most important things will be to have systems that limit extreme risks, as I am genuinely worried we are not going to be able to keep humanity alive that much longer, and to focus on more powerful countries, as these are the ones that have the most power to create change and lead the way. But I have no strong views on this.

 

Good political systems: I have slightly more nuanced views on this. In brief some of the key features of a good political and policy system would be:

  1. An unelected part of the system that can veto or delay very bad decisions, such as the UK House of Lords.

  2. A constitution / internationally agreed human rights bill / etc that gives additional power to the independent judiciary to limit extreme actions by the state.

  3. Transparency. Not necessarily 100% transparency but more transparency about how political decisions are made. (Eg. earlier release of public records)

  4. Better legitimised whistleblowing systems for civil servants and others who might see bad political behaviours

  5. Safeguards to allow party elites to push out strongly disagreeable leadership. See the topical example below.

  6. Evidence enforcing bureaucracy. For example making civil servants have to fill out a form explaining the evidence for and against a policy (like the UK's better regulation framework).

  7. A code of conduct and training for civil servants that encourages good decision making and honesty.

  8. All new policies are reviewed a set amount of time after they are introduced.

  9. Public consultation for new policies.

  10. Avoiding systems like primaries where a politician may have to take a more extreme view to win their party leadership than they would take otherwise.

  11. Limit the things the public can vote on. Democracy works best when the population votes on refined choices.

  12. A system whereby party leaders have to have held office previously. Eg in the UK you have to be an MP before you can be made party leader.

  13. More power moved out of the hands of politicians to technocrats in certain areas. For example, the Bank of England playing an independent role to ensure financial stability.

  14. Better and more representative voting systems.

 

Happy to elaborate on any of these if people want. Worth bearing in mind that these kinds of things will vary from country to country.

 

Implementation tactics: Highly speculative, and I am not an expert in lobbying (especially outside the UK). I suggest beginning by building credibility with high quality research in this area, perhaps setting up (or working with) a respected think tank. From there I would sit back and wait for opportunities to arise: wait until such topics are being discussed or on the political agenda and then start influencing at that point. Alternatively it could be worth looking for any policy changes that, whilst seemingly small or uncontroversial, may have a significant impact if implemented. How to have influence would depend on the situation and could be grassroots, political networking, legal, and so on.

 
 

A topical example of how recent political tragedy could have been avoided.

Without meaning to offend anyone who has differing political views than me, I think it is fair to say we have all recently seen and been shocked by a tremendous political catastrophe. Yes I am of course talking about the disintegration of the UK Labour Party.

 

The previous Labour leader, Ed Miliband, changed the way in which the Labour leadership is decided. The leadership election was changed from a vote split 1/3 unions, 1/3 MPs and 1/3 public to a full public vote. In order to make sure they did not end up with a leader that all the MPs hated or could not work with, they introduced a safeguard that a leader would have to have the support of 50 MPs to run. They did not, however, think to put in any safeguards to give MPs power to kick out a leader, even if almost every MP found them difficult to work with. And the current shambles is a result of that. [2]

 

Perhaps it is hindsight bias and a lot of hubris leading me to think that this should have been noticeable. But it certainly feels like this is the kind of thing that a smart person, with a good knowledge of how political systems work and a long-term mindset, could have spotted. Especially if that person had some time to do some research whenever a major party in a major country was changing how they decided on their leader. This is a mistake other parties have made and corrected (such as the Conservative Party's 1922 Committee). Furthermore it should not have been hard in this case to make the case to the relevant decision makers.

 

 

 

Further thoughts on tractability

It is incredibly hard to guess how tractable this cause area is. The example above makes it seem like opportunities to make changes do arise. It might be worth looking to see if there are many other past examples of situations where it feels like changes could have been made. If any EAs were to work on this it could be worth making predictions about how many such cases actually come up in future, where EA influence could potentially help.

It is also my experience from my time in Government that there are many very small changes that could be implemented that could improve decision making. I have seen better and worse decisions made, and a variety of checks and balances in place that worked and did not work to varying degrees. To give one innocuous example from an area I worked in: the UK Treasury publishes a Tax Information and Impact Note (TIIN) for all tax policy changes, except for changes to local taxes. Asking for this TIIN system to be extended to local taxes seems simple and should positively impact decision makers' incentives.
 

One counter consideration is that it may be very difficult to know what the actual impact of changes will be. For example you could push for more transparency of documentation and it may lead to less information being written down and worse decisions being made.

 

 

Is it worth EAs time to focus on this?

Some things to consider are:

Scale. HUGE. Governments have a lot of resources that they can put towards addressing global problems. Improving how they use those resources would impact the flow of trillions of dollars, and could impact every area of what a government does from social care to climate policy. The flow through effects seem very large.

 

Neglectedness. Neglectedness seems perhaps to be the wrong thing to consider with political change issues. If an issue is too neglected it may be difficult to create the pressure needed for change. The ideal would either be that there is public sentiment that change is needed but no clear leader pushing for change, or that the research in this area is not being pushed hard enough or is not being carried out with an EA mind-set. There are already a few organisations working on evidence based policy making, voting systems and transparency. It could do with someone looking more into what exists in this area and what the gaps are.

 

Tractability. Fairly low. Discussed above.

 

Measurability. The success of this is really, really hard to measure. In general, working out the impact of policy influencing is difficult: it is hard to know if your actions will create change, and if change happens it is difficult to know whether your contribution was a causal factor. You may need someone competent working on this for years to even begin to make any wins. Furthermore, if we create changes that lead to better changes at some invisible point in the future, or mitigate risks, it may be impossible to see the end impact of this.

 

Why would EA thinking be uniquely placed to consider this? This uses both long-term thinking about extreme consequences (not putting the right safeguards into political decisions will likely have no immediate effect but may lead to catastrophe a few hundred years down the line) and meta-thinking (it may make more sense to try to improve governance systems than to panic when a poor politician is standing or a poor decision is made and wonder how it reached this point), which are areas EAs focus on.

 

My biases and thoughts: I have had an interest in this topic since shortly after I began working in policy. I guess recent events have triggered me to actually write down some views on this.

 
 

Responding to recent political situations

Many EAs are concerned about current politics. A response to this could be trying to change the current situation in some way, or trying to change the systemic issues that led to it. Trying to tackle the systemic issues, to prevent the rise of potential future dangerous demagogues or problematic parties, seems more neglected and plausibly a better way to have an impact than fighting the current situation (unless you think the current situation is super dangerous). The question then would be which things about the modern world we should try to change. We could look to improve: the values of the public, the political systems, the media, big business, or other factors. I have no strong views on this; I have focused in this piece on improving politics but would be interested in how you can improve each of the above. [3]
 

Furthermore, recent political changes have left vast swathes of the population upset and shocked. A question to ask would be whether these changes present an opportunity for doing good. Can we capitalise on and direct people's passion and disappointment, for these or for potential future political issues? [4] Can we (and should we) do this in a non-partisan way that builds support from across the political spectrum to improve political and other systems? [edit:] How do we direct people's passion towards the most useful cause, and is this the cause to direct them towards?

 
 

Conclusions

Some conclusions (section edited [5]) :
  1. This topic should be a key focus area for EAs. There should be more research by individual EAs, by CEA, by Open Philanthropy, and so on.
  2. The EA community's responses to recent political events have been poor: fractured, slow and unfocused. We should assume there will be another event that will rile people up (French elections? Something else?) and work fast now to be more ready to direct people's passions in the optimal direction (such as towards better political systems). More generally, we are wasting too much time discussing non-neglected political issues.
  3. This is clearly more impactful than other policy areas (except where there is an immediate risk to counter, such as arguably bio-security policy). It is clearly higher impact to change and improve how policy is made than to change the kinds of policy areas that the Open Philanthropy Project is looking into (such as economic stability policy, prison reform or animal welfare).
 
Tell me why I am wrong on points 1 to 3 above. Tell me if you strongly agree. Your feedback would be super-helpful.
 

If anyone wants to work on this or fund this then get in touch: samueljhilton@gmail.com

 
 

--------------------

 

With thanks to Alexander Gordon Brown and Robert Collins for input.

 
[1] Bear in mind that states are by far the biggest spenders on what can loosely be considered prioritisation research and on risk research. States constantly have to think about how to spend money and resources to create the biggest impact, although perhaps with a slightly skew-whiff view of what impact is, and they may not necessarily implement the highest impact option for a range of reasons.
 

[2] I have not looked at this in detail and perhaps over simplify the situation. For more on this see: http://www.newstatesman.com/politics/elections/2015/09/why-did-labour-use-system-elect-its-leader and http://www.newstatesman.com/politics/uk/2015/05/how-will-labour-leadership-election-work

 

[3] Some recent discussion on how to improve the media can be found at: https://www.facebook.com/groups/effective.altruists/permalink/1213547848701570/

 

[4] Past thinking on this is here.

 

[5] I initially gave this article the mild mannered (and more honest) conclusion that: "this could be a powerful thing for EAs to work on, but the case for doing so is still weak. At the minimum more research could be useful" but decided to instead to try to spark more debate. I personally do not fully agree with points 1 to 3 and think I am overstating the case.

Comments (6)



I’m fairly new to the EA community and have been surprised at the lack of attention to political systems in the EA portfolio. I believe that effective political systems are critical to human thriving in both the immediate and longer term, and to managing developments such as AI and biotechnology for benefit rather than harm. However, developing and implementing political systems fit for the 21st century would seem to raise major challenges - to give a few obvious examples:

• Many of the issues EAs are focussing on (AI, biorisk, global warming) can only be addressed well through effective global governance. At present our institutions for global and supranational governance are struggling and nation states are looking inwards.

• There is good evidence (eg as cited in the book ‘The Spirit Level’) that happy, thriving citizens are correlated with high levels of trust in government and public institutions. Recently many nations have experienced a serious decline of public trust in governments; this needs attention.

• As a result of rising inequality and static living standards for the middle classes over the last 30 years, serious questions are being asked about the future of capitalist liberal democracies - growing discussion of postcapitalism and how it might be transitioned to etc

• There are also questions about whether democratic systems in which governments are subject to election every 4/5 years are able to effectively manage the impact of paradigm shifting development such as AI, biorisk and climate change

• Experts are suggesting AI is likely to replace vast numbers of jobs, raising big questions for who benefits from these technologies, future of work and more philosophical questions about meaning of human existence without work. Ideas such as universal basic income being proposed as possible responses. Huge potential avoidable human suffering if this is managed 'badly'

So I’m suggesting EAs should give more attention to political systems, as the scope is huge, the area is somewhat neglected (particularly in connecting academic work with practical politics) and probably underfunded. Tractability can probably be improved, particularly by seeking to raise public awareness and understanding of medium term challenges.

I would be interested in getting involved in this work, and potentially donating.


Hi Colin B, been thinking about next steps. Any chance you could get in touch (email policy@ealondon.com).

Excellent post, thank you!

I think the main problem is that it's hard to implement any substantial change in the political systems of great powers, such as the US and UK - precisely the kind of target one should have in order to have a huge impact.

If we want to change political systems, we would have to start from below: small countries (maybe Estonia - they're so pro-innovation) and private associations. Has anyone ever heard of a legal statute on private orgs' voting systems?

I strongly agree with conclusions 2 and 3. I am inclined to agree with 1, but am less sure. My main concern is about tractability - influencing policy is hard to do to begin with, and changing the underlying mechanisms by which policy is made is even harder. But the potential impact is so enormous - it potentially has multipliers for every other EA issue - that I'm inclined to think it should be a major area of focus.

From where I'm sitting, unintended consequences are harder for campaigns to avoid than even governments. But yes, worth looking at more, and yes, I'm interested. Nice post.
