I think we should be willing to embrace a system that has a better mix of voluntary philanthropy, non-traditional-government programs for wealth transfer, and government decisionmaking. It's the second category I'm most excited about, which looks a lot like decentralized proposals. I'm concerned that most extant decentralized proposals, however, have little if any tether to reality. On the other hand, I'm unsure that larger governments would help, instead of hurt, in addressing these challenges.
I claim that "fixing" coordination failures is a bad and/or incoherent idea.
Coordination isn't fully fixable because people have different goals, and scaling has inevitable and unavoidable costs. Making a single global government would create waste on a scale that current governments don't even approach.
As people have gotten richer overall, the resources available for public benefit have grown. This seems likely to continue. But directing those resources well is where we fail: democracy doesn't scale well, and any move away from democracy comes with a corresponding ability to abuse power.
In fact, I think the best solution for this is to allow individuals to direct their money how they want, instead of having a centralized system - in a word, philanthropy.
I've actually done this, and talked to others about it. The critical path, in short, is a reliable vaccine, facilities for production, and replication of that production capacity.
But this has nothing to do with your announcing your candidacy for office - congratulations on deciding to run, and good luck with your campaign!
Also, strongly agree on #3 - see my post from last year: https://forum.effectivealtruism.org/posts/yQWYLaCgG3L6H2Lya/challenges-in-scaling-ea-organizations
It's the only time I can remember when it seemed unfortunate that EA as a movement is good at planning and at ensuring that critical nonprofits have sufficient runway.
Re: #2, I've argued for minimal institutions - relying on markets or existing institutions rather than building new ones, wherever possible.
For instance, instead of setting up a new organization to fund a certain type of prize, see if you can pay an insurance company to "insure" the risk of someone winning, as determined by some criteria, and then have them manage the financials. Or, as I'm looking at now for incentivizing vaccine production, offer cheap financing to companies instead of running a new program to choose and order vaccines to get companies to produce them.
Politics! (See linked post.)
1) There's an entire Global Health Security Agenda that has been shouting about what needs to be done for a decade, as have many other organizations - CHS, the US's Blue Ribbon Panel, Georgetown's GHSS, and I'm sure other places internationally. Ask them where to spend your money, or better yet, read their previous reports that already tell you what needs to be done.
2) For groups that are willing to think about biosecurity risks, or take advice from people who do, think about differential tech development when picking technology to fund. There are lots of technologies that have a clear upside and almost no downside - biosurveillance, diagnostic technology, vaccine platforms, etc. Don't fund gain-of-function research, and carefully weigh, and limit, funding for potentially dual-use technology.
3) For government decisionmakers - don't throw money into new bureaucracy. We have lots of existing bureaucracy, much of which should be reformed, but replacing it with a new structure and adding layers isn't going to help. And in the US, don't repeat the post-9/11 move that led to building the DHS.
People should be working on funding proposals for Bio-X risk mitigation policies, such as greater international coordination, better health monitoring systems, investment in non-disease-specific symptomatic surveillance, and similar. These are likely to be far easier to fund in 3-6 months, as a huge pool of money will be allocated to preventing the next pandemic.
I personally, writing as a superforecaster, think that this isn't particularly useful. Superforecasters tend to be really good at evaluating and updating based on concrete evidence, but I'm far less sure whether their ability to evaluate arguments is any better than that of a similarly educated / intelligent group. I do think that FHI is a weird test case, however, because it is selecting on the outcome variable - people who think existential risks are urgent are actively trying to work there. I'd prefer to look at, say, the views of a group of undergraduates after taking a course on existential risk. (And this seems like an easy thing to check, given that such courses are ongoing.)