
Manuel Allgaier asked, “Are there effective ways to help Ukrainians right now?” That is an interesting question, but the people who ask me for donation opportunities of this sort – in light of the war – are more interested in (or open to) highly leveraged opportunities that are more likely to still be neglected and that have more accessible “surface area,” as it were. I can find numerous EA Forum articles on the high-level questions, but I can’t find concrete recommendations of shovel-ready funding opportunities. Does anyone know of any?

I’ve gone through these tag pages:

  1. Global governance
  2. Great power conflict
  3. Institutional decision-making
  4. Totalitarianism
  5. Risks from malevolent actors

(I’ve also skimmed the relevant sections of this 80,000 Hours post and gone through the Open Phil grant database.)

I think the first two of these tags are closest to what I would consider maximally effective in this space and maybe more generally.

Powerful global governance could make it much more realistic to institute such things as:

  1. global regulatory markets for AI,
  2. agreed-upon peaceful processes for resolving conflicts,
  3. moratoriums on dangerous AI, gain-of-function, and similar research,
  4. mandatory insurance to internalize the externalities of less dangerous research, and
  5. moratoriums on hard-to-reverse decisions such as intergalactic spreading, terraforming, and panspermia, pending the resolution of all the foundational ethical and game-theoretic questions.

The problem is probably not just the strategic question of how we might put a Leviathan in charge of the whole world but rather the question of what form of governance would have some of these benefits without being susceptible to usurpation by a malevolent dictator.

Less legible, decentralized solutions are probably safer here. They are also a better model for the sort of governance that we’ll need when the lightspeed limit makes it hard (or very slow) to coordinate all of our civilization centrally.

Does anyone know of organizations or other funding opportunities that are researching anything broadly in this direction, seem competent, and need money?

Thank you!

4 Answers

My best guess: Rethink Priorities, specifically the longtermism department. These article titles sound very close to what I’m imagining:

  • Issues with futarchy
  • Key characteristics for evaluating future global governance institutions
  • How does forecast quantity impact forecast quality on Metaculus?
  • An analysis of Metaculus predictions of future EA resources, 2025 and 2030
  • Disentangling "improving institutional decision-making"
  • Towards a longtermist framework for evaluating democracy-related interventions
  • An examination of Metaculus' resolved AI predictions and their implications for AI timelines
  • Types of specification problems in forecasting
  • Humanities research ideas for longtermists
  • Data on forecasting accuracy across different time horizons and levels of forecaster experience
  • Intervention profile: ballot initiatives
  • Will the Treaty on the Prohibition of Nuclear Weapons affect nuclear deproliferation through legal channels?
  • Deliberation may improve decision-making
  • Would US and Russian nuclear forces survive a first strike?

My second-best guess is to fund forecasting work, such as Metaculus, and especially the innovative ideas that come out of QURI, one specific puzzle piece that is very likely to be important.

Another second-best guess is that the open-source game theory work of the Center on Long-Term Risk will become important for averting conflict escalations from highly automated forms of governance.
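For readers who haven’t encountered it: the core idea of open-source game theory is that agents whose source code is mutually transparent can condition their choices on each other’s code, which makes cooperation reachable where it isn’t for black-box agents. Below is a minimal sketch of such a “program equilibrium” in a one-shot Prisoner’s Dilemma; it’s my own toy example in Python (the bot names are made up for illustration), not code from the Center on Long-Term Risk:

```python
# Toy illustration of open-source game theory ("program equilibrium"):
# each player submits a program that can read the other's source code
# before choosing to cooperate ("C") or defect ("D") in a one-shot
# Prisoner's Dilemma.

import inspect


def clique_bot(my_source: str, their_source: str) -> str:
    """Cooperate only with programs whose source is identical to mine."""
    return "C" if their_source == my_source else "D"


def defect_bot(my_source: str, their_source: str) -> str:
    """Always defect, regardless of the opponent's source."""
    return "D"


def play(bot_a, bot_b):
    """Run one round, giving each bot its own and its opponent's source."""
    src_a = inspect.getsource(bot_a)
    src_b = inspect.getsource(bot_b)
    return bot_a(src_a, src_b), bot_b(src_b, src_a)


# Two CliqueBots recognize each other and cooperate; a CliqueBot facing
# a DefectBot defects, so transparency doesn't make it exploitable.
print(play(clique_bot, clique_bot))  # ('C', 'C')
print(play(clique_bot, defect_bot))  # ('D', 'D')
```

The harder research questions, as I understand them, are about making this kind of conditional cooperation robust when the programs aren’t syntactically identical, for instance by reasoning about each other’s behavior rather than comparing literal source strings.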

But I think it’s only in a very narrow slice of all possible futures that AI is powerful enough for open-source game theory to make the right predictions while humans are still around to a sufficient degree. Only in that slice is the funding opportunity interesting for people who want to support benevolent global governance but are not (already) interested in supporting AI safety.

I’ve been a fan of these organizations for a long time, so I suspect that the availability heuristic is at work here and that there are many more excellent funding opportunities out there that I haven’t heard of.

The Simon Institute for Longterm Governance (SI) is developing the capacity to do a) more practical research on many of the issues you’re interested in and b) the kind of direct engagement necessary to play a role in international affairs. For now, the focus is on the UN and related institutions, but if SI’s growth is sustainable, we think it would be sensible to expand into EU policy engagement.

You can read more in our 2021 review and 2022 plans. We also have significant room for more funding, as we have only started fundraising again […]

Dawn Drescher: Awesome, thanks! I’ll have a look at the documents.

I would really like to see more on “Reducing long-term risks from malevolent actors”.

Yeah, we should improve our institutions and foster democratic values... but when all is said and done, I’m still surprised that people often tolerate, or even embrace, displays of dark traits in their leaders, even though they’d hardly accept similar behaviour from their peers.

I don’t see any currently effective way to protect ourselves against this sort of Big Man politics. Political scientists and philosophers sort of hope to constrain it with mechanism design, but I don’t think this can work without people in power who are consciously willing to resist would-be tyrants; there should be ways to stop them in their political cradle.

I find it particularly sad that it looks like we’re not much better than the Athenians trying to resist Alcibiades in the Peloponnesian War, or the Romans trying to resist Caesar.

(Even more depressing: I think faster-than-light travel is more frequent in sci-fi than political institutions that are robust against autocrats.)

I wonder whether this is related to the authority dimension of Moral Foundations Theory. If so, it doesn’t seem to be universal, just very common. That makes me tentatively a bit hopeful.

I found the research of Founders Pledge quite interesting.

Great, thank you! I’ll read it in full, but for now this was key for me:

For instance, our top choice is the Initiative on Global Markets (IGM), a research center at the University of Chicago Booth School of Business. Specifically, their “Economics Experts Panel” regularly polls top economists on economic policy questions. A philanthropist could fund this project so that it can be expanded. Basing economic policy on expert consensus should be robustly positive.
