
I am currently engaging more with the content produced by Daniel Schmachtenberger and the Consilience Project, and I am wondering why the EA community is not really engaging with this kind of work focused on the metacrisis, a term that alludes to the overlapping and interconnected nature of the multiple global crises that our nascent planetary culture faces. The core proposition is that we cannot get to a resilient civilization if we do not understand and address the underlying drivers that lead to global crises emerging in the first place. This work is overtly focused on addressing existential risk, and Daniel Schmachtenberger has become quite a popular figure in the YouTube and podcast sphere (e.g., see him speak at Norrsken). Thus, I am sure people must have come across this work. Still, I find basically no, or only marginally related, discussion of it on this forum (see the results of some searches below), which surprises me.

What is your best explanation of why this is the case? Are the arguments so flawed that it is not worth engaging with this content? Do we expect "them" to come to "us" before we engage with the content openly? Does the content not resonate well enough with the "techno-utopian approach" that some say is the EA mainstream way of thinking and, thus, other perspectives are simply neglected? Or am I simply the first to notice, be confused, and care enough about this to start investigating?

Bonus Question: Do you think that we should engage more with the ongoing work around the metacrisis?

Related content in the EA forum


11 Answers

I watched most of a YouTube video on this topic to see what it's about.

I think I agree that "coordination problems are the biggest issue that's facing us" is an underrated perspective. I see it as a reason for less optimism about the future.

The term "crisis" (in "metacrisis") makes it sound like it's something new and acute, but it seems that we've had coordination problems for all of history. Though maybe their effects are getting worse because of accelerating technological progress?

In any case, in the video I watched, Schmachtenberger mentioned the saying, "If you understand a problem, you're halfway there toward solving it." (Not sure that was the exact wording, but something like that.) Unfortunately, I don't think the saying holds here. I feel quite pessimistic about changing the dynamics behind why Earth is so unlike Yudkowsky's "dath ilan." Maybe I stopped the Schmachtenberger video before he got to the solution proposals (but I feel like if he had great solution proposals, he should lead with those). In my view, the catch-22 is that you need well-functioning (and sane and compassionate) groups/companies/institutions/government branches to "reform" anything, which is challenging when your problem is that groups/companies/institutions/government branches don't work well (or aren't sane or compassionate).

I didn't watch the entire video by Schmachtenberger, but I got a sense that he thinks something like, "If we can change societal incentives, we can address the metacrisis." Unfortunately, I think this is extremely hard – it's swimming upstream, and even if we were able to change some societal incentives, they'd at best go from "vastly suboptimal" to "still pretty suboptimal." (I think it would require god-like technology to create anything close to optimal social incentives.) 

Of course, that doesn't mean making things better is not worth trying. If I had longer AI timelines, I would probably think of this as the top priority. (Accordingly, I think it's weird that this isn't on the radar of more EAs, since many EAs have longer timelines than me?) 

My approach is mostly taking for granted that large parts of the world are broken, so I recommend working with the groups/companies/institutions/government branches that still function, expanding existing pockets of sanity and creating new ones.

Of course, if someone had an idea for changing the way people consume news, or making a better version of social media, trying to create more of a shared reality and shared priority about what matters in the world, improving public discourse, I'd be like "this is very much worth trying!" But it seems challenging to compete for attention against clickbait and outrage amplification machinery.

EA already has the cause area "improving institutional decision-making." I think things like approval voting are cool and I like forecasting just like many EAs, but I'd probably place more of a focus on "expanding pockets of sanity" or "building new pockets of sanity from scratch." "Improving" suggests that things are gradual. My cognitive style might be biased towards black-and-white thinking, but to me it really feels like a lot of institutions/groups/companies/government branches mostly fall into two types, "dysfunctional" and "please give us more of that." It's pointless to try to improve the ones with dysfunctional leadership or culture (instead, those have to be reformed or you have to work without them). Focus on what works and create more of it.

The term "crisis" (in "metacrisis") makes it sound like it's something new and acute, but it seems that we've had coordination problems for all of history. Though maybe their effects are getting worse because of accelerating technological progress?

 

This would be surprising to me, since so much of tech progress is the creation of social coordination technologies (internet and social media platforms, cell phones and computers, new modes of transport, cheaper food and safer water that simplifies logistics of human movement, new institutions, new language... (read more)

Thank you for engaging with the content in a meaningful way and also taking the time to write up your experience. This answer was particularly helpful for me to get a) a sense that there is a productive way more discussion can be had on this topic and b) some ideas for how this might be framed. So thank you very much!

I intensively skimmed the first suggested article, "Technology is not Values Neutral: Ending the reign of nihilistic design", and found the analysis mostly lucid and free of political buzzwords. There's definitely a lot worth engaging with there. Similarly to what you write, however, I got a sense of unjustified optimism in the proposed solution, which centers around analyzing second- and third-order effects of technology during their development. Unfortunately, the article does not appear to acknowledge that predicting such societal effects seems really hard... (read more)

I think there's a good chance you're the first to really look into this. If you did a well written review and evaluation of the work, I'm sure people would read it.

My uninformed prior is skeptical. The concept of a metacrisis seems pretty sus to me.

EDIT: I'm replying to this comment many months later. The metacrisis is a relatively new concept; back in January there were not that many written resources.


(From the perspective of time) there is now enough material about the metacrisis / polycrisis / everything-crisis; there is no need for yet another synthesis.

The diagram below comes from the World Economic Forum's Global Risks Report 2023.

Direct link: https://www3.weforum.org/docs/WEF_Global_Risks_Report_2023.pdf

[Diagram: interconnected global risks map from the report]

Worth noting that "metacrisis" and "polycrisis" are pretty much the same te... (read more)

CG:

Can you unpack a little bit of your individual impression of the metacrisis? 
 

I've been trying to pin down this disconnect for about 6 months within the metacrisis space. I'm not sure how much the OP has looked into it, but I'm quite interested in getting a broader understanding of EA's take on it.

This is my domain, and has been for over a decade. If you (or anyone) wants to talk about this stuff feel free to reach out.

I think a big reason for the lack of crossover between the metacrisis (which is mostly a rebranding of collapse) crowd and EA is that the former culture is strongly pessimistic and the latter is strongly optimistic - so both cultures have a tendency to simply dismiss the other[1].

I think more integration between these cultures and domains is super important; that's why I'm here :)

It's especially important when you consider the Hinge of History in light of the metacrisis - you realize that calling it 'the most important century' is being quite optimistic about how long the window we have to make a difference actually is[2] :)

 

  1. ^

    This isn't a criticism, it's just how our brains work. Pessimism vs Optimism is a very fundamental dichotomy and it shapes practically everything else about your worldview.

  2. ^

    In the looooong process of putting together a post on this. No ETA.

Thanks! I am quite happy with the resonance the questions got, so I am considering writing a more comprehensive post on this topic in the future. It would be great to connect at some point and see if there are ways to push this forward together.

Where can we find a solid written introduction to the concept of the "meta-crisis"? I haven't watched any of the YouTube videos and probably won't even if they're good, but I'd be interested in a written resource if available.

Stavros:

From https://cusp.ac.uk/themes/m/blog-jr-meta-crisis/

I'm not overly keen on the terms meta/polycrisis. But 'collapse' ain't great either. At any rate, they're all gesturing in the same general direction: civilization is a complex system composed of, reliant on, and interacting with other complex systems. Many of these systems are out of equilibrium; many are under stress or degrading. Problems in one system, e.g. energy, ripple out and have all kinds of chaotic effects in other systems, some of which can feed back into the energy system.

And this is roughly where the optimists and the pessimists go their separate ways - it usually requires a pessimistic disposition to go around finding and connecting all these horrifying little dots and perceive the 'metacrisis', and the foregone conclusion for a pessimist is that we're all doomed :) This conclusion is anathema to optimists, so the baby tends to get thrown out with the bathwater.

The reason the metacrisis is a valuable framework, to EA most of all, is that it's a powerfully predictive model of the world - by revealing the interconnected clusterfuck of everything, it also highlights areas where successful intervention would have massive, system-wide effects. (And it shows you where interventions that might seem effective in a vacuum are in effect meaningless.)

To give you a solid example of the kind of thing I'm talking about: trust is a cause area that is almost totally neglected, yet is actually a bottleneck in almost every single other cause area - inequality, nuclear proliferation, climate change, AI safety, etc. If you find or make a tool for scaling trust, you'll basically hit the jackpot in terms of EFFECTIVE altruism.

One last point in favour of the metacrisis framework: it gives you realistic timelines. I referenced this earlier, but this really is the point I can't hammer home enough: the hinge of history is shorter than EAs think. I genuinely believe that this community/movement is the best candid

DirectedEvolution:

Based on the article you linked, it seems like 'meta-crisis' thinking employs a bundle of concepts that LessWrong often calls 'Moloch' or 'simulacra levels' or 'inadequate equilibria', or simply tradeoffs. This line of analysis attempts to use these ideas to explain failures of collective action to implement complex institutional change, and to generate solutions to overcome this inertia.

I'm sympathetic to the need to address issues of governance and collective action. However, what interests me are clear problem-solution pairs with good evidence, a straightforward mechanism, and adequate information feedback to see if it's working. "We should switch to approval voting" meets those criteria. I'm less excited about interrogating "the very idea of 'the economy' or what exactly we mean by 'money'" or the idea that "too much liberty may kill liberalism, too much voting can weaken democracies, and we don't always understand how we understand, we tend to deny our denial, and we are struggling to imagine a new imaginary."

Stavros:

You're broadly correct that the metacrisis is in the same neighbourhood as stuff like Moloch and inadequate equilibria. I definitely wouldn't say that the metacrisis is a 'governance/collective action' issue, although that's certainly an important piece of the problem.

I too like simple solutions with clear feedback. Who doesn't? If only the world were so cooperative. But... this strategy results in things like rearranging deckchairs on the Titanic; you're almost guaranteed to miss the forest for the trees.

This is exactly why I want to bring these two cultures closer together: EAs have an incredible capacity for action and problem solving, more than any other community I've seen. Buuut that capacity needs to be informed by a deep macro understanding of the world, such as those who study the metacrisis possess. Otherwise - deckchairs, Titanic.

And, as an aside to your aside, while you're less excited about "what we mean by 'money'", I'd point out that people not knowing[1] the answer to that question has resulted in a great deal of destruction and inefficiency.

  1. ^

    Both in the sense that 'normal' people vote for nonsensical monetary policies, and that decision makers propose and enact nonsensical monetary policies.

DirectedEvolution:

So I have a lot of questions. I'll try to ask them one or two at a time.

It seems like you're claiming something like this: "Clearly, there are a bunch of emergencies, which have causes and solutions. For a lot of them, we know what the causes and solutions are, but don't implement them. That is probably because our global institutions have big complicated effects on each other, but nobody has a very good predictive model of what the chain of cause and effect is or how to intervene in it productively. Like, if you wanted to pass a carbon tax, what would you even do? Probably, if we studied that, we could figure out some sort of complicated way to make all these global institutions fit together way better, so that we'd be just a lot happier with life. It's sort of like we have a medieval doctor's understanding of how the body works, and we'd be a lot better off stopping trying to 'treat' most problems and starting to study how the body works in detail. Except instead of the body, it's 'global institutions and culture', and instead of medicine it's all sorts of political/cultural/economic/scientific interventions."

Is that roughly what you mean?

Stavros:

Mmmm, I'll try my best to deconfuse. Clearly, there are a bunch of emergencies.

  • Some of these emergencies are orders of magnitude more important or urgent than others.
  • My first claim is that scale and context matter
    • e.g. an intervention in cause area X may be obvious and effective when evaluated in isolation, but in context the lives saved from X are then lost to cause Y instead.
  • My second claim is that many of these emergencies are not discrete problems.
    • Rather, they are complex interdependent systems in a state of extreme imbalance, stress or degradation - e.g. climate, ecology, demography, economy.
  • My third claim is that, yes, governance is a more-or-less universal bottleneck in our ability to engage with these emergencies.
  • But my fourth claim is that this doesn't make all of the above a governance problem. Solutions to governance do not solve these emergencies, they simply improve our ability to engage with them.
  • If you really, really want somewhere specific to point the finger, it's Homo sapiens.

There's a great quote: "We Have Stone Age Emotions, Medieval Institutions and God-Like Technology" - E. O. Wilson

Practically, my position, informed by the metacrisis, is that we have less time to make a difference than is commonly believed in EA circles, and the difference we have to make has to be systemic and paradigm-changing - saving lives doesn't matter if the life support system itself is failing. Thus interventions which aren't directly or indirectly targeting the life support system itself can seem incredibly effective while actually being a textbook case of rearranging the deckchairs on the Titanic.

P.S. Thanks for your time and patience in engaging with me on this topic and encouraging me to clarify in this manner.

DirectedEvolution:

How much time do you think we have? My impression is that a lot of EAs at least are operating with a sense of extreme urgency over their AI timelines and expectations of risk (i.e. 10 years, 99% chance of doom). It would be informative to give a numeric estimate of X years until Y consequence, accepting that it's imprecise.

So it sounds like you are an X-risk guy, which is a very mainstream EA position. Although I'm not sure if you're a "last 1%-er," as in weighing the complete loss of human life much more heavily than losing, say, 99% of human life. But it sounds like your main contention is that weird complicated environmental/economic/population interactions that are very hard to see directly will somehow lead to doom if not corrected.

Overall there's a motte here, which is "not all interventions help solve the problem you really care about, sometimes for complicated reasons." I'm just not sure what the big insight is about what to do, given that fact, that we're not already doing.

Stavros:

95% certainty <100 years, 80% certainty <50 years, 50% certainty <30 years... But the question is 'how much time do we have until X?' and for that... This is where I diverge heavily, and where the metacrisis framework comes into play:

I am a civilization x-risk guy, not a homo sapiens x-risk guy. My timeline is specifically 'how much time do we have until irreversible, permanent loss of civilizational capacity[1]'. Whether humans survive is irrelevant to me[2].

What seems clear to me is that we are faced with a choice between two paradigm shifts: one in which we grow beyond our current limitations as a species, and one in which we are forever confined to them. Technology is the deciding factor - to quote Homer Simpson, 'the cause of, and solution to, all of life's problems' :p

And achieving our current technological capacity is not repeatable. The idea that future humans can rebuild is incredibly naive yet rarely questioned in EA[3]. If you accept that proposition, even if just for the sake of argument, then my emphasis on the hinge of history should make sense. This is our one chance to build a better future; if we fail, then none of the futures we can expect are ones any of us would want to live in.

And this is where the insight of the metacrisis is relevant: interventions focused on the survival/flourishing of civilization itself are, from my[4] point of view, the only ones with positive EV.

What to do that we're not already doing: increased focus/prioritization of:

  • Governance (both working with existing decision making structures, and enabling the creation and growth of new ones)
  • Social empowerment/'uplift' (thinking specifically of things like Taiwanese Digital Democracy)
  • Economic innovation - the fact that we are reliant on the philanthropy of billionaires is conclusive evidence that the current system is well overdue for an overhaul.
  • Resilience (really broad category)
  • The former three points are critical

DirectedEvolution:

Just a note on your communication style: at least on the EA Forum, I think it would help if you replaced more of your "deckchairs on the Titanic" and "forest for the trees" metaphors with specific examples, even hypothetical ones. For example, when you say "I would say all of these areas are either underprioritized or, as in the case of global health, often missing the forest for the trees (literally - saving trees without doing anything about the existential threat to the forest itself)," I actually don't know what you mean. What are the forest and what are the trees in this example? Like, you say "literally saving trees," but unless you for some reason consider forest preservation to fall under the umbrella of global health, it's not literally saving trees.

Anyway, I think I see a little more where you're coming from; let me know if I'm misunderstanding.

  • You start by assuming that a civilizational collapse would be irrecoverable, and just about as bad as human extinction.
  • Given that assumption, you see a lot of bad stuff that could wipe out civilization without necessarily killing everybody, like a global food supply disaster, a pandemic, a war, climate change, energy production problems, etc.
  • Since all these potential sources of collapse seem just as bad as human extinction, you think it's worth putting effort into all of them.
  • EA often prioritizes protecting human/sentient life directly, but doesn't focus that hard on things like evaluating risks to the global energy supply except insofar as those risks stem from problems that might also just kill a lot of people, like a pandemic or AI run amok.

Overall, it seems like you think there are a lot more sources of fragility than EA takes into account, lots of ways civilization could collapse, and EA's only looking at a few. Is that roughly where you're coming from?

Stavros:

Yeah, that's a good summary of my position. Thanks, will keep this in mind. It's been an active (and still ongoing) effort to adjust my style toward EA norms.

DirectedEvolution:

Do you think civilization generally is fragile? Or just post-industrial civilization? We have seen the collapse and reconstruction of civilization to varying degrees across history, but we've never seen the collapse and revitalization of an industrial society. Is it specifically that you think we've used up some key inputs to starting an industrial civ, like maybe easily accessible coal and oil reserves or something?

Stavros:

Ah, I want to acknowledge that the definition of civilization is quite broad without getting too in the weeds on this point. I heard the economist Steve Keen describe civilization as 'harnessing energy to elevate us above the base level of the planet' (I may be paraphrasing somewhat). I think this is a pretty good definition, because it also makes it clear why civilization is inherently unstable - and thus fragile - it is, by definition, out of equilibrium with the natural environment. And any ecologist will know what happens next in this situation - overshoot[1]. So all civilization is inherently fragile, and the larger it grows, the more it depletes the carrying capacity of the environment.

Which brings us to industrial/post-industrial civilization: I think the best metaphor for industrial civilization is a rocket - it's an incredibly powerful channeled explosion that has the potential to take you to space, but also has the potential to explode, and has a finite quantity of fuel. The 'fuel', in the case of industrial civilization, is not simply material resources such as oil and coal, but also environmental resources - the complex ecologies that support life on the planet and even the stable, temperate climate that gave us the opportunity to settle down and form civilization. Civilization can only form during these tiny little peaks, the interglacial periods. Anthropogenic climate change is far beyond the bounds of this cycle and there is no guarantee that it will return to a cadence capable of supporting future civilizations.

Further, our current level of development was the result of a complex chain of geopolitical events that resulted in a prolonged period of global stability and prosperity. While it may be possible for future civilizations to achieve some level of technological development, it is incredibly unlikely they will ever have the resources and conditions that enabled us to reach the 'digital' tech level. Consider that even now, under far

Stavros:

Replying to myself with an additional contribution I just read that says everything much better than I managed:

  • Gail Tverberg

I would add that while new structures can be expected to form, because they will be adapted to different conditions and exploit different energy gradients, we should not expect them to have the same features/levels of complexity.

DirectedEvolution:

This is highly relevant to your interest in scaling trust: https://www.lesswrong.com/posts/Fu7bqAyCMjfcMzBah/eigenkarma-trust-at-scale

Stavros:

Yeah :) I'm actually already trying to contribute to that project. Thanks for thinking of me when you saw something relevant, though.

I do not see metacrisis as pessimistic.

I see metacrisis as accurately describing the state of the current affairs.

There are so many recent events that have given me hope:

  • Extinction Rebellion, global decentralized movement
  • COVID, radical change is possible
  • Elon Musk buying Twitter, freedom of speech, global town hall
  • Perennial rice
  • Nuclear fusion
  • Patent US4394230A for splitting water molecules into hydrogen (it's about changing the structure of water, 1 unit of energy in, more than 1 unit of energy out)
  • LK99 superconductor (debunked but surely it will inspire next wav
... (read more)
CG:

Edit: I initially wrote a more detailed response that I accidentally posted prematurely. But this pessimism vs optimism explanation is quite interesting, and after trying to hastily revise my comment, I think I'll have to reflect on it a bit more and likely pick your brain later, if you don't mind.

[This comment is no longer endorsed by its author]
Stavros:

Sure :) It's definitely an overly simplistic explanation, but as someone at the intersection between these two cultures it seems useful. As I referenced in the footnote, it's not a new thing at all. There's a fascinating theory (although I can't remember who to credit or what to call it) that you can't evolve self-awareness without the optimism bias, because you'd end up with crippling depression/anxiety and not get around to spreading your genes. But if you evolve the optimism bias without self-awareness, you're likely to get eaten :D

There's also a whole school of Pessimist philosophy you might find interesting (or, if you're an optimist, disturbing).

I tried googling it, and just found a few YouTube videos, the Consilience Project, and a few blog posts, most of which are not on “the meta crisis” but on specific issues. The lack of written content and in particular ways to define and interrogate the concept of “meta crisis” makes this feel not very compelling, but I might be missing a treasure trove somewhere!

CG:

I found Jonathan Rowson's essay Tasting the Pickle to be a helpful introduction to the metacrisis and the associated worldview. I also strongly recommend checking out this list of podcast episode transcripts on the metacrisis from the Jim Rutt Show.

Hope that helps and would be curious to hear your thoughts if you check them out!

I wrote about Game B last year. Game B is kind of adjacent to Schmachtenberger’s ideas, and I cite him a fair bit. Quoting my summary:

I describe Game B, a worldview and community that aims to forge a new and better kind of society. It calls the status quo Game A and what comes after it Game B. Game A is the activity we've been engaged in at least since the dawn of civilisation, a Molochian competition over resources. Game B is a new equilibrium, a new kind of society that's not plagued by collective action problems.

While I agree that collective action problems (broadly construed) are crucial in any model of catastrophic risk, I think that

  1. civilisations like our current one are not inherently self-terminating[1] (75% confidence);
  2. there are already many resources allocated to solving collective action problems (85% confidence); and
  3. Game B is unnecessarily vague (90% confidence) and suffers from a lack of tangible feedback loops (85% confidence).

It seems like this post was on your personal blog but not link-posted to the EA forum. It might make sense to consider doing that in the future for topics that are potentially EA relevant so that we can all get a quick sense of what the community is thinking about these topics.

I'm not aware of any thorough investigations of the metacrisis / polycrisis which come from the perspective of trying to work out how our interventions to solve the metacrisis / polycrisis might need to differ from our approach to individual existential risks. 

I think this kind of investigation could be valuable. I expect that some existential risks are more likely to set off a cascade of existential risks than others, which would have important implications for how we allocate resources for x-risk prevention.

I've only just come across this post, but wanted to say that I've been following and to some extent involved in the sense-making / metamodern space for a few years now, and I too have been wondering why there appears to be so little engagement with that work from the EA side. 

Either way I've also been thinking of writing a post about it at some point, and it's encouraging to see that there are at least some people who are interested in this stuff!

Hey, do I understand correctly that you're pointing out a problem like "there are lots of problems that will eventually lead to x-risk" + "that's bad" + "these problems somewhat feed into each other"?

If so, speaking only for myself and not for the entire community or anything like that:

  1. I agree
  2. I personally think that AI risks will simply arrive earlier. If I change my mind and think AI risks will arrive after some of the other risks, I'll probably change what I'm working on.

Again, I speak only for myself.

(I'll also go over some of your materials, I'm happy to hear someone made a serious review of it, I'm interested)

I think the point of the metacrisis is to look at the underlying drivers of global catastrophic risks, which are mostly various forms of coordination problems related to the management of exponential technologies (e.g., AI, biotech, and to some degree fossil fuel engines), and to try to address them directly rather than trying to solve each issue separately. In particular, there is a worry that solving such issues separately involves building the surveillance and control powers needed to manage the exponential tech, which then leads to dystopic outcomes because more cen... (read more)

1️⃣ Timing

You asked this question on 29th Jan.

This video dropped on 31st Jan: https://www.youtube.com/watch?v=hv_xBK_XZjw

I joined the Metacrisis working group in March... It takes a while for a meme / term / awareness to spread.

Today is 16th Sep and I see a massive uptick in awareness.

2️⃣ Metrics

EA, Open Philanthropy, and GiveWell seem to be operating using DALYs: https://en.wikipedia.org/wiki/Disability-adjusted_life_year
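
To make the contrast with the next point concrete, here is a minimal sketch of how a DALY is typically computed, assuming the standard decomposition into years of life lost (YLL) and years lived with disability (YLD). This is only an illustration of what a "clearly defined metric" looks like, not GiveWell's or Open Philanthropy's actual methodology, and all numbers are made up.

```python
def years_of_life_lost(deaths: float, remaining_life_expectancy: float) -> float:
    """YLL: deaths multiplied by standard remaining life expectancy at age of death."""
    return deaths * remaining_life_expectancy


def years_lived_with_disability(cases: float, disability_weight: float,
                                average_duration_years: float) -> float:
    """YLD: cases weighted by disability severity (0..1) and average duration in years."""
    return cases * disability_weight * average_duration_years


def dalys(deaths: float, remaining_life_expectancy: float, cases: float,
          disability_weight: float, average_duration_years: float) -> float:
    """Simplified DALY = YLL + YLD, ignoring age weighting and discounting."""
    return (years_of_life_lost(deaths, remaining_life_expectancy)
            + years_lived_with_disability(cases, disability_weight,
                                          average_duration_years))


# Illustrative numbers only: 100 deaths with ~30 years of remaining life expectancy,
# plus 1,000 non-fatal cases with disability weight 0.2 lasting 2 years on average.
print(dalys(100, 30, 1000, 0.2, 2))  # -> 3400.0
```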

A lot of metacrisis-related activities do not have clearly defined metrics.

Example of a project I'm personally involved in: https://tellthetruth.media/ - I want media to tell the truth. Information, not entertainment. But I genuinely do not know how to measure that.

Same with: https://planetarycouncil.org


I think that I've figured out a recipe ("great reset, but on our terms"), an agreeable plan for how to change the world, with absolutely no controversy in any of its points. It starts at the top: "education, sensemaking, unifying narrative and media telling the truth". Again, no clearly defined metrics.

If I may offer an honest, authentic, genuine opinion: a UNIFYING NARRATIVE is absolutely essential, and that's why EDUCATION and SENSEMAKING matter. You can see these as a "trifecta": one cannot exist without another, and education without sensemaking is propaganda. A unifying narrative, because we need to solve "Moloch" and coordination failure.

Because people have absolutely no idea that they are the cause of the suffering of others, they're basically caught in their own mind, like Neo was stuck in The Matrix; people are prisoners in their own minds and don't even realize that they are.

Thank you for sharing this! I've been casually following the Game B / metacrisis space for only about three-ish years, and after posing this question to the main Game B forums, I didn't get much of a response.

Does the content not resonate well enough with the "techno-utopian approach" that some say is the EA mainstream way of thinking and, thus, other perspectives are simply neglected

I'm unfortunately fairly confident that this may be part of the answer, and that this EA criticism (particularly the sections on complex adaptive systems, excessive quantitative reasoning,  vulnerability/resilience approaches, etc.) outlines some of the conclusions I've independently come to over the past 6 months. 

Do you think that we should engage more with the ongoing work around the metacrisis?

I'd love to see this. I joined an EA co-working space over the summer where I asked about this disconnect. Only 1-2 people had heard of it but I found the feedback from those previously unfamiliar with the metacrisis somewhat promising.

My public notebook is currently down for maintenance, but I hope to share more of my investigations later.

Hey Chris,

I am encouraged by the resonance of my question here and think it is worthwhile to try to continue this conversation. I think I would want to work on a longer blog post in the future. Maybe let's connect around that and see if we can open up the doors for more conversations.

-2
CG
1y
Sounds great to me, let's talk!
Comments (4)

I came to this post by searching for "Metacrisis".

I genuinely believe that the metacrisis is the underlying mechanism / generator function / incentive (or perverse incentive) affecting loads of existential / catastrophic risks.

A new video just dropped: 

The talk literally has "global catastrophic risks" on the title slide.

I think that EA (GiveWell, Open Philanthropy) focuses too much on one metric, such as the DALY, without appreciating the interconnectedness of problems and the fact that many things are difficult to measure using a single metric.

There's no agreement that there is a meta-crisis. Yes, there are multiple sources of danger, and they can interact synergistically and strongly (or so I believe), but that's not the same as saying that there must be root causes for those (global, existential) dangers that humanity can address.

If you asked a different question, like: "What are the underlying drivers of the multiple anthropogenic existential threats that we all face, like nuclear war, engineered pandemics, climate destruction, etc?"

You could get some interesting answers from people who think in those terms. I'm curious what others here think.

But the answer to why underlying drivers are not addressed is easy to deduce: lack of belief, interest, or education in the matter.

This question (Jan 29), your comment (Feb 4)... I think many things have changed by now (Sep 16).

I think there is much more written material and much more understanding about the metacrisis.

It is clear to me that it exists.

I think that your approach of enumerating the factors ("underlying drivers of the multiple anthropogenic existential threats") does not do it justice. The whole concept of the metacrisis is that they are interconnected and need to be addressed as a whole.

Thanks for all the answers so far! Collectively they were really helpful for getting a sense of how this discussion could be framed in a productive way. I am quite looking forward to pushing this conversation further; I think there is much to be gained here for all perspectives involved.
