
Dear EAs,

After the interview with Tristan Harris, Rob asked EAs to do their own research on aligning recommender systems and other short-term AI risks. A few weeks ago I already posted a short question about donating against these risks. Since then I have listened to the 2.5-hour podcast on the topic and read several articles about it on this forum (https://forum.effectivealtruism.org/posts/E4gfMSqmznDwMrv9q/are-social-media-algorithms-an-existential-risk, https://forum.effectivealtruism.org/posts/xzjQvqDYahigHcwgQ/aligning-recommender-systems-as-cause-area, https://forum.effectivealtruism.org/posts/ptrY5McTdQfDy8o23/short-term-ai-alignment-as-a-priority-cause).

I would love to collaborate with others on collecting more in-depth arguments, including some back-of-the-envelope calculations / Fermi estimates of the possible scale of the problem. I've spent a few hours creating a structure, including some next steps. I suggest we focus on the first argument in particular, since the mental health argument looks shaky in light of recent research.

I haven't read the papers mentioned below yet (except for the abstracts). We can define multiple work streams and divide them between collaborators. My expertise is mostly in back-of-the-envelope calculations / Fermi estimates, given my background as a management consultant. I am especially looking for people who enjoy and are good at assessing the quality of scientific papers.
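To make concrete what I mean by a Fermi estimate, here is a minimal sketch in Python. Every number in it is a placeholder I made up for illustration, not a researched figure; the point is only the structure of decomposing the problem into factors:

```python
# Minimal Fermi sketch: possible scale of misaligned recommender systems.
# Every input is a placeholder assumption for illustration, not a sourced figure.

social_media_users = 3.5e9              # rough global user count (assumption)
share_meaningfully_affected = 0.10      # fraction whose views/behaviour shift (guess)
qaly_loss_per_affected_person = 0.01    # QALYs lost per affected person per year (guess)

qalys_lost_per_year = (social_media_users
                       * share_meaningfully_affected
                       * qaly_loss_per_affected_person)

print(f"Illustrative scale: ~{qalys_lost_per_year:,.0f} QALYs per year")
# With these made-up inputs: ~3,500,000 QALYs per year.
# A serious estimate would carry low/central/high ranges through every factor.
```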

Please drop a message below or send an e-mail to jan-willem@effectiefaltruisme.nl if you want to participate.

Arguments in favour of aligning recommender systems as a cause area

Social media causes political polarization 

(http://eprints.lse.ac.uk/87402/1/Social-Media-Political-Polarization-and-Political-Disinformation-Literature-Review.pdf)

  • Polarization causes less international collaboration
    • Which papers demonstrate this?
    • This amplifies other risks, including x-risks:
      • Extreme Climate change scenarios
        • Estimate Trump’s impact on the climate (what is the chance that an event like this triggers amplifiers of climate change that push us towards extreme scenarios?)
          • Direct
          • Indirect through other countries doing less
        • Show that more, not less, international collaboration is needed to decrease the probability of extreme scenarios
      • Nuclear war (increasing Sino-American tensions)
      • AI safety risks from misuse (through Sino-American tensions)
      • (Engineered) Pandemics
        • Can we already calculate the extra deaths caused by misinformation?
        • Increased chances of biowarfare
      • Lower economic growth because of trade barriers / protectionist measures
        • Estimate the extra economic growth generated by the EU to calculate what it would cost if the EU falls apart (see the sketch below this list)
          • Convert this to possible additional QALYs?
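As a toy version of the GDP-to-QALY conversion in the last bullet, here is a minimal sketch. The GDP figure is only order-of-magnitude, and the loss fraction and price per QALY are assumptions I picked purely for illustration:

```python
# Sketch: converting a hypothetical EU-breakup output loss into QALYs.
# All figures are illustrative assumptions, not researched estimates.

eu_gdp_eur = 14e12           # EU GDP, order of magnitude (~EUR 14 trillion)
gdp_loss_fraction = 0.05     # hypothetical output loss from new trade barriers
price_per_qaly_eur = 75_000  # rough willingness-to-pay per QALY (assumption)

annual_loss_eur = eu_gdp_eur * gdp_loss_fraction
qaly_equivalent = annual_loss_eur / price_per_qaly_eur

print(f"Hypothetical loss: EUR {annual_loss_eur:.1e} per year, "
      f"roughly {qaly_equivalent:,.0f} QALYs per year")
# With these inputs: EUR 7.0e+11 per year, roughly 9,333,333 QALYs per year.
# Dividing GDP by a QALY price is a crude equivalence, not a welfare analysis.
```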

Next steps here:

  • Find papers for all the relevant claims
  • Look at Stefan_Schubert's counterarguments in the comments on https://forum.effectivealtruism.org/posts/E4gfMSqmznDwMrv9q/are-social-media-algorithms-an-existential-risk
  • Synthesize findings from papers showing that social media drives political polarisation
  • Find papers showing that political polarisation leads to less international cooperation
  • Look for cases (Trump / Brexit) where social media is blamed, and estimate the chance that social media actually flipped the result (see the sketch after this list)
  • Make back-of-the-envelope calculations / Fermi estimates for all relevant negative consequences
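For the election-flipping estimate in the list above, one toy framing is: the result flipped if social media shifted the decisive vote share by more than the winning margin. The margin, mean persuasion effect, and its uncertainty below are all assumptions for illustration:

```python
# Toy model for "did social media flip election X?".
# Treat social media's effect on the decisive vote share as an uncertain
# (normally distributed) shift, and ask how likely it exceeded the margin.
from statistics import NormalDist

decisive_margin = 0.004   # winning margin in vote share, ~0.4 pp (assumption)
effect_mean = 0.002       # hypothetical mean persuasion shift from social media
effect_sd = 0.003         # uncertainty about that shift (assumption)

p_flipped = 1 - NormalDist(effect_mean, effect_sd).cdf(decisive_margin)
print(f"P(social media flipped the result) ~= {p_flipped:.0%}")
# With these made-up numbers: ~25%. The hard part is sourcing credible
# values (and a credible causal model) for the persuasion effect.
```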

 

Social media causes declining mental health

This one looks interesting as well: https://docs.google.com/document/d/1w-HOfseF2wF9YIpXwUUtP65-olnkPyWcgF5BiAtBEy0/edit#

Next steps here:

 

Aligning recommender systems is “training practice” for larger AI alignment problems

See https://forum.effectivealtruism.org/posts/xzjQvqDYahigHcwgQ/aligning-recommender-systems-as-cause-area#Connection_with_AGI_Alignment

Next step here:

  • Should we expand this argument?

 

The problem is solvable (but we need more capacity for research)

See e.g. https://www.turing.ac.uk/sites/default/files/2020-10/epistemic-security-report_final.pdf

Next steps here:

  • Collect additional papers on solutions
  • What kind of research is interesting and worth investing in?

1 Answer

I was informed of this thread by someone in the EA community who suggested I help. I have deep subject matter expertise in this domain (depending on how you count, I've been working in it full-time for 5 years, and toward it for 10+ years). 

The reason I started working on this could be characterized as resulting from my beliefs on the "threat multiplier" impact that broken information ecosystems have on catastrophic risk.

A few caveats about all this though: 

  1. Most of the public dialogue around these issues is very simplistic and reductionist (which leads to the following two issues...).
  2. The framing of the questions you provide may not be ideal for getting at your underlying goals/questions. I would think more about that.
  3. Much of the academic research is terrible, simply due to the lack of quality data and the newness and interdisciplinary nature of the fields; even "top researchers" sometimes draw unsubstantiated conclusions from their studies.

All that said, I continue to believe that the set of problems around information systems (and, relatedly, governance) is a prerequisite for addressing catastrophic global risks; that these are among the most urgent and important issues we could be addressing; and that we are still heading in the wrong direction, faster and faster.

I have very limited bandwidth, with a number of other projects in the space, but if people are putting significant money and time toward this, I may be able to put in some time in an advisory role, at least to help direct that energy effectively. My contact info and more context about me are at aviv.me

Thanks for this! I've sent you an email. Regarding caveat #2 in particular, I believe you can help with relatively little time and resources.

Comments (5)

Amazing initiative! I love that you took time to set up clear next steps and asks from the community. Good luck!

Thank you for doing this! It's important to know what the science says about these issues because it helps public interest technologists who want to address these problems avoid wasting effort. I don't have the bandwidth to work on this right now, but I wish you the best of luck!

Here's a 2017 review paper about the impact of digital technologies on children's well-being.

I recommend this paper by the Happiness Research Institute (a Copenhagen-based think tank, not to be confused with the Happier Lives Institute):

#SortingOutSocialMedia: Does social media really pose a threat to young people's well-being?

They found that young people’s online and offline lives are inextricably linked, and that it is necessary to consider which platforms young people use, how they use them, and which personal characteristics make some young people more vulnerable than others online. 

Thanks Barry, it would be great to have someone on the team who can give a verdict on the influence of social media on mental health. Let me know if you have someone in mind. I think working on that question should be a separate work stream in this (small) GPR project.

I don't have a specific person in mind I'm afraid but you could post in the Effective Altruism, Mental Health, and Happiness Facebook group and see if anyone there would like to get involved.
