On the bright side, we might end up getting an AI pause out of this, if the Netherlands wakes up and decides that it no longer wants to help supply chips for advanced AI which could either be (a) misaligned or (b) controlled by Trump. See previous discussion and protest. I reckon this moment represents a strong opportunity for Dutch EAs concerned with AI risk. Maybe get a TV interview where you explain how ASML is supplying chips to the US, then explain AI risk, etc.
In terms of red-teaming my own suggestion, I am somewhat worried about further politicizing the issue of AI and highlighting national rivalries. It seems best to push for symmetric restrictions on China as well; after all, China is directly supplying materials to Russia for its war in Ukraine. Eliezer Yudkowsky could be an interesting person to contact for red-teaming purposes, since he's strongly in favor of an AI pause but also seems to resist any "international rivalry" framing of AI risk concerns.
I think by the nature of how the EA Forum works, any proposed solution is likely to be more controversial than a generic "someone should do something about US politics" message. So any proposed solution will get at least a few early downvotes, which suppresses its visibility. EAs want to upvote things which feel official and authoritative; they usually seem uninterested in improvisational brainstorming in response to an evolving situation. The paradoxical result is that, despite all the "someone should do something about US politics" talk, actually proposing solutions will feel like a waste of time.
Maybe it would be good to create a dedicated brainstorming thread to try to mitigate this a little.
If each election is a rare and special opportunity to collect a bit of data, that makes it even more important to use that data-collection opportunity effectively.
Since we are looking for approaches which are unusually tractable, an approach whose effectiveness looks extremely murky probably isn't what we wanted.
I expect that very novel approaches, like the one described in my old post Using game theory to elect a centrist in the 2024 US Presidential Election, could be more tractable.
Vitalik Buterin's call to "let a thousand societies bloom" seems interesting.
By embracing empiricism, we address the classic utopia failure mode of a society that works in theory but not in practice.
Individuals have different preferences about their ideal society, and it seems overly controlling to prevent them from creating that society and joining other volunteers in moving there (within reason, e.g. no creation of catastrophic risks to other societies).
This could be a good "viatopia" by letting us collect data to inform later decisions. It might be possible to build RCTs into the process somehow: if you're near-indifferent between two city-states, you could sign up to receive a stipend in exchange for having your residency determined by coinflip. That yields a dataset about the causal impact of moving to a particular society. Track parameters of interest like personal happiness or production of important philosophical insights. (A minimal sketch of the randomization step is below.)
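As a toy illustration of that randomization step (everything here is invented: the API, the society names, the stipend figure):

```python
# Toy sketch of building an RCT into city-state residency: near-indifferent
# volunteers accept a stipend and are assigned a society by coinflip.
# All names and numbers below are hypothetical.

import random
from dataclasses import dataclass

@dataclass
class Assignment:
    volunteer_id: str
    society: str        # which city-state the coinflip assigned
    stipend_usd: int    # compensation for accepting randomization

def randomize(volunteer_id: str, society_a: str, society_b: str,
              stipend_usd: int = 5_000) -> Assignment:
    """Assign a near-indifferent volunteer to one of two societies by coinflip."""
    society = society_a if random.random() < 0.5 else society_b
    return Assignment(volunteer_id, society, stipend_usd)

# Outcomes (self-reported happiness, research output, etc.) would later be
# logged against these assignments, yielding causal estimates of each
# society's impact on the people who move there.
trial = [randomize(f"v{i}", "society_a", "society_b") for i in range(100)]
```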
A lot of AI racing is driven by the idea that the US has to stop China from getting AI because China is authoritarian. If the US were authoritarian as well, that motive for AI racing would go away. Furthermore, authoritarian countries seem predisposed to cooperate: see the China/Russia/Iran/North Korea axis. If the US became authoritarian, that could usher in a new era of US/China cooperation, to the benefit of the world as a whole.
I'm honestly a little confused about why AI would inspire people to pursue money and power. Technological abundance should make both a lot less important. Relationships will be much more important for happiness. Irresponsible AI development is basically announcing to the world that you're a selfish jerk, which won't be good for your personal popularity or relationships.
I wish people would talk more about "sensitivity analysis".
Your parameter estimates are just that: estimates. They probably result from intuition or napkin math, and they probably aren't that precise. It's easy to imagine a reasonable person generating different estimates in many cases.
If a relatively small change in parameters would lead to a relatively large change in the EV (for example: in Scenario 3, estimate the "probability of harm" just a teensy bit differently, so it has a few more 9s, and the action looks far less attractive!), then you should either (a) choose a different action, or (b) validate your estimates quite thoroughly, since the value of information (VoI) is very high. Also beware of the Unilateralist's Curse in this scenario, since other actors may be making parallel estimates for the action in question.
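To make this concrete, here's a toy calculation with made-up payoffs (not the figures from any of the scenarios above): shifting the probability-of-harm estimate by an order of magnitude swings the EV between clearly positive and negative.

```python
# Toy sensitivity analysis with invented numbers: a small change in one
# parameter (probability of harm) produces a large change in expected value.

BENEFIT = 1_000      # payoff if the action goes well (hypothetical units)
HARM = -100_000      # payoff if it goes badly (hypothetical units)

def expected_value(p_harm: float) -> float:
    """EV of taking the action, given an estimated probability of harm."""
    return (1 - p_harm) * BENEFIT + p_harm * HARM

# Each step removes one "9" from the estimated probability of *no* harm.
for p_harm in [1e-4, 1e-3, 1e-2]:
    print(f"p_harm={p_harm:.4f}  EV={expected_value(p_harm):+,.1f}")

# p_harm=0.0001  EV=+989.9
# p_harm=0.0010  EV=+899.0
# p_harm=0.0100  EV=-10.0
```

When the sign of the EV flips within the plausible range of your estimate, that's exactly the situation where option (a) or (b) applies.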
AI governance could be much more relevant in the EU, if the EU were willing to regulate ASML. Tell ASML it can only service compliant semiconductor foundries, where a "compliant semiconductor foundry" is defined as a foundry which only allows its chips to be used by compliant AI companies.
I think this is a really promising path for slower, more responsible AI development globally. The EU is known for its cautious approach to regulation. Many EAs believe that a cautious, risk-averse approach to AI development is appropriate. Yet EU regulations are often viewed as less important, since major AI firms are mostly outside the EU. However, ASML is located in the EU, and serves as a chokepoint for the entire AI industry. Regulating ASML addresses the standard complaint that "AI firms will simply relocate to the most permissive jurisdiction". Advocating this path could be a high-leverage way to make global AI development more responsible without the need for an international treaty.
This seems like a consideration against empowering democracies more broadly, if democracies would be controlled by the internal factions which grow their populations fastest.
It seems plausible to me that if you consider the universe of modern democratic nations, the first principal component of political disagreement within that citizenry is likely to be intranational rather than international: people often agree more with ideologically similar foreigners than with ideologically dissimilar co-nationals. (A toy simulation of this is sketched below.)
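As a toy check of what that claim means statistically (all data synthetic, all parameters invented): if ideological spread within countries dwarfs the differences between countries, the first principal component of survey answers tracks ideology, not nationality.

```python
# Synthetic check: when within-country ideological spread dwarfs
# between-country differences, PC1 of survey responses recovers
# ideology rather than nationality. All parameters are invented.

import numpy as np

rng = np.random.default_rng(0)
n_countries, n_per_country, n_questions = 5, 200, 10

country_offset = rng.normal(0, 0.2, size=(n_countries, 1, 1))        # small national shifts
ideology = rng.normal(0, 1.0, size=(n_countries, n_per_country, 1))  # large within-country spread
loadings = rng.normal(0, 1.0, size=(1, n_questions))                 # how ideology maps to answers

answers = (ideology + country_offset) @ loadings  # (countries, people, questions)
answers += rng.normal(0, 0.5, size=answers.shape)  # per-question response noise
X = answers.reshape(-1, n_questions)
X -= X.mean(axis=0)

# First principal component via SVD.
_, _, vt = np.linalg.svd(X, full_matrices=False)
pc1 = X @ vt[0]

# Correlation of PC1 with individual ideology is near 1, i.e. the main
# axis of disagreement cuts within nations, not between them.
print(abs(np.corrcoef(pc1, ideology.ravel())[0, 1]))
```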
In the same way US citizens often view state politics with an eye to affecting federal politics, citizens in democratic nations might view their national politics with an eye to affecting global governance. You might essentially be left with a single global polity with a single point of failure.
You argue that democracies are designed and tested to govern political power. But this sort of weird hypothetical seems fairly far outside the regime for which they've been designed and tested.
I would suggest a very different approach: trying to move away from single points of failure to the greatest possible extent, and designing global governance so it can withstand as many simultaneous failures as possible. It's especially important to reduce vulnerability to correlated failures.