Does EA do any work to change our inadequate society into an adequate society? Is there any way to get involved with that? Any ongoing projects aiming at it? Any planning happening?

Note: If you’re not familiar with inadequate societies, see Inadequate Equilibria and Hero Licensing by Eliezer Yudkowsky.


5 Answers

Nov 09, 2022


My current subjective opinion is: no, they don't do a lot, and this is potentially a huge mistake. (But I am also uncertain about AI timelines and x-risk, so that could be a reason I'm wrong.)

Some things you can look into:

  • If you have short AI timelines, you may only want to focus on the kinds of inadequacies most relevant to AI risk. AGI governance in general seems a very open question that you can start working in and get support for.
  • There is work on epistemics: forecasting, prediction markets, etc.
  • There is Ian David Moss's Effective Institutions Project (EIP).
  • Some miscellaneous articles: for instance, Rethink Priorities has posts on futarchy, deconfusing "improving institutional decision making", etc.

Jackson Wagner

Nov 10, 2022


My comment here lists a number of EA efforts that are aimed at general institutional reforms of various sorts.

Another notable recent project is Balsa Research.

But despite the above, I still think that EA should be thinking much bigger in this direction; civilizational adequacy (sometimes known as "improving institutional decisionmaking" in EA circles) should IMO be elevated to a top-tier cause area alongside global health, biosecurity, and animal welfare (but not displacing AI as #1).

See my team's winning entry in the Future of Life Institute's "AI worldbuilding competition" for a more detailed vision of how I think charter cities, prediction markets, and other big ideas for improving civilizational adequacy might help create a better world.


Nov 19, 2022


I feel like quite a few people are working on things related to this, with approaches I have different independent impressions about, but I'm very happy there's a portfolio.

Manifold Markets, Impact Markets, Assurance Contracts, Trust Networks, and probably very obvious stuff I'm forgetting right now but I thought I'd quickly throw these in here. I'm also kinda working on this, but it's in parallel with other things and it mostly consists of a long path of learning and trying to build up understanding of things.


Nov 09, 2022


The Effective Institutions Project might count as this. There may be more relevant projects, depending on what counts - like the Simon Institute for Longterm Governance, the Center for Election Science.

The kinds of things filed under "Broad Longtermism", perhaps.

Maybe work on impact markets and prediction markets.
(For some reason I didn't fully read acylhalide's answer and I see that I listed some of the same things.)


Nov 10, 2022


Roote views itself as part of a meta-movement including EA, and is interested in societal systems change (see Marriage Counseling with Capitalism as an example). We've been working on a few projects and are recently exploring granting to external projects. There are also a lot of tangential communities to EA like seasteading, charter cities, etc. with their own projects.

Comments

I've been thinking that the default existential risk framing might bias EAs to think that the world would eventually end up okay if it weren't for specific future risks (AI, nuclear war, pandemics). This framing downplays the possibility that things are on a bad trajectory by default because sane and compassionate forces ("pockets of sanity") aren't sufficiently in control of history (and perhaps have never been – though some of the successes around early nuclear risk strike me as impressive). 

We can view AI risk as an opportunity to attain control over history because AI, if it's aligned and things go well, could do it better. But how do you get from "not being in control of history" to solving a massive coordination problem (as well as technical problems around alignment)? It seems it's a top priority to grow/expand pockets of sanity. 

(Separate point: My intuition is that "pockets of sanity" are pretty black and white, so that if something isn't a pocket of sanity, marginal improvements to it will have little effect and it's better to focus on supporting (or building anew) something where the team, organization, government branch, etc., already has the type of leadership and culture you want to see more of.) 

Your impression of the default framing aligns with what I've heard from folks! In addition to the benefits of changing humanity's trajectory, there's also an argument that we should pursue systems change for the factors that are driving existential risk in the first place, rather than only addressing it from a research-focused angle. That's the argument of this article on meta existential risk!