Looking for collaborators after last 80k podcast with Tristan Harris
Answer by aviv, Dec 10, 2020

I was informed of this thread by someone in the EA community who suggested I help. I have deep subject matter expertise in this domain (depending on how you count, I've been working in it full-time for 5 years, and toward it for 10+ years). 

The reason I started working on this could be characterized as resulting from my beliefs on the "threat multiplier" impact that broken information ecosystems have on catastrophic risk.

A few caveats about all this though: 

  1. Most of the public dialogue around these issues is very simplistic and reductionist (which leads to the following two issues...).
  2. The framing of the questions you provide may not be ideal for getting at your underlying goals/questions. I would think more about that.
  3. Much of the academic research is terrible, largely due to the lack of quality data and the newness and interdisciplinary nature of the fields; even "top researchers" sometimes draw unsubstantiated conclusions from their studies.

All that said, I continue to believe that the set of problems around information systems (and, relatedly, governance) is a prerequisite for addressing catastrophic global risks, that these problems are among the most urgent and important issues we could be addressing, and that we are still heading in the wrong direction at an accelerating rate.

I have very limited bandwidth, with a number of other projects in the space, but if people are putting significant money and time toward this, I may be able to contribute some time in an advisory role, at least, to help direct that energy effectively. My contact info and more context about me are at aviv.me

EAGxVirtual Unconference (Saturday, June 20th 2020)

Avoiding Infocalypse: How a decline in epistemic competence makes catastrophic risks inevitable — and what EAs can do about it

This would be a shortened and modified version of a talk I gave at the University of Cambridge, at the Leverhulme Centre for the Future of Intelligence and the Centre for the Study of Existential Risk. A general-public version of many of the ideas can be found in this TEDx talk that I gave in 2018 (ignore the title, which was not my choice).

Part 1: Framing the underlying problem

Describe what is meant by epistemic competence (the ability and desire of individuals, organizations, governments, etc. to effectively make sense of the world). Illustrate how it is declining, and why that decline is likely to get worse.

Part 2: Connecting to catastrophic risks

Describe how lower epistemic competence makes it extremely difficult to do any sort of crucial coordination, making global coordination on catastrophic risks increasingly unlikely. In addition, lower epistemic competence makes catastrophic forcing functions more likely and individual mitigation steps less likely.

Part 3: Exploring mitigations

Discuss what can be done, and show that many of these problems are related to other better understood EA cause areas (including e.g. the connection between synthetic media and AGI).


For more context on my work: aviv.me, twitter.com/metaviv.

I would be interested in a late session. My goal is to circulate these concerns more broadly in the EA community, which I have been adjacent to for many years (e.g. this podcast episode I did with Julia Galef) but never deeply engaged with.