
I have a question regarding possible donation opportunities in AI. From my understanding, research in AI is not underfunded in general, and AI safety research is mostly focused on the long-term risks of AI. In that light, I am very curious what you think about the following.

I received a question from someone who is worried about the short-term risks coming from AI. His arguments are along the following lines: we currently observe serious destabilization of society and democracy caused by social media algorithms. Over the past months a lot has been written about this, e.g. that it causes a further rise of populist parties. These parties are often against extra climate change measures, against effective global cooperation on other pressing problems, and more aggressive on international security. In this way, polarization through social media algorithms could increase potential short-term X-risks like climate change, nuclear war, and even biorisks and AI.

Could you answer the following questions?

  • Do you think that these short-term risks of AI are somewhat neglected within the EA community?
  • Are there any concrete charities we deem effective at countering these AI risks, e.g. through making citizens more resilient to misinformation?
  • What do we think about the widely hailed Center for Humane Technology?

Thank you all for your responses!

Answers

My off-the-cuff answers:
  • Yes, the EA community neglects these things in the sense that it prioritizes other things. However, I think it is right to do so. It's definitely a very important, tractable, and neglected issue, but not as important or neglected as AI alignment, for example. I am not super confident in this judgment and would be happy to see more discussion/analysis. In fact, I'm currently drafting a post on a related topic (persuasion tools).
  • I don't know, but I'd be interested to see research into this question. I've heard of a few charities and activist groups working on this stuff but don't have a good sense of how effective they are.
  • I don't know much about them; I saw their film The Social Dilemma and liked it.

Thanks! I would love to see more opinions on your first point:

  • Do we believe that there is no significant increase in X-risk? (not large in scale)
  • Do we believe there is nothing we can do about it? (not solvable)
  • Do we believe there are already many well-funded parties working on this issue? (not neglected)
kokotajlod
3y
I can't speak for anyone else, but for me:
  • Short-term AI risks like the ones you mention definitely increase X-risk, because they make it harder to solve AI risk (and other X-risks too, though I think those are less probable).
  • I currently think there are things we can do about it, but they seem difficult: figuring out what regulations would be good and then successfully getting them passed, probably against opposition, and definitely against competition from other interest groups with other issues.
  • It's certainly a neglected issue compared to many hot-button political topics. I would love to see more attention paid to it and more smart people working on it. I just think it's probably not more neglected than AI risk reduction.

Basically, I think this stuff is currently at the stage of "there should be a couple of EAs seriously investigating this, to see how probable and large the danger is and to try to brainstorm tractable solutions." If you want to be such an EA, I encourage you to do so, and I would be happy to read and give comments on drafts, video chat to discuss, etc. If no one else was doing it, I might even do it myself. (Like I said, I am working on a post about persuasion tools, motivated by feeling that someone should be talking about this...) I think such an investigation would probably only confirm my current opinion (yes, we should focus on AI risk reduction directly rather than on raising the sanity waterline via reducing short-term risk), but there's a decent chance it would change my mind and make me recommend that more people switch from AI risk work to this.
Jan-Willem
3y
Thanks, great response kokotajlod. Does anyone know whether there are already other EAs seriously investigating this, to see how probable and large the danger is and to try to brainstorm tractable solutions? At the moment I am quite packed with community-building work for EA Netherlands, but I would love to be part of a smaller group having some discussions about it. I am relatively new to this forum; what would be the best way to find collaborators for this?
kokotajlod
3y
Here are some people you could reach out to:
  • Stefan Schubert (IIRC he is skeptical of this sort of thing, so maybe he'll be a good addition to the conversation)
  • Mojmir Stehlik (he's been thinking about polarization)
  • David Althaus (he's been thinking about forecasting platforms as a potentially tractable and scalable intervention to raise the sanity waterline)
There are probably a bunch of other people worth talking to, but these are the ones I know of off the top of my head.
Jan-Willem
3y
Great, thanks! Have you listened to https://80000hours.org/podcast/episodes/tristan-harris-changing-incentives-social-media/ yet? It's a new 80,000 Hours episode, partly dedicated to this argument.
kokotajlod
3y
Not yet, but thanks for bringing it to my attention!
Comments

A couple of resources that may be of interest here:

- The work of Aviv Ovadya of the Thoughtful Technology Project; I don't think he's an EA (he may be, but it hasn't come up in my discussions with him): https://aviv.me/

- CSER's recent report with the Alan Turing Institute and DSTL, which isn't specific only to AI and social media algorithms, but addresses these and other issues in crisis response:
"Tackling threats to informed decisionmaking in democratic societies"
https://www.turing.ac.uk/sites/default/files/2020-10/epistemic-security-report_final.pdf

- Recommendations for reducing malicious use of machine learning in synthetic media (Thoughtful Technology Project's Aviv Ovadya and CFI's Jess Whittlestone)
https://arxiv.org/pdf/1907.11274.pdf

- And a short review of some recent research on online targeting harms by CFI researchers:
https://www.repository.cam.ac.uk/bitstream/handle/1810/296167/CDEI%20Submission%20on%20Targeting%202019.pdf?sequence=1&isAllowed=y

@Sean_o_h, just seeing this now while searching for my name on the forum, actually to find a talk I did for an EA community! Thanks for the shoutout.

For context, while I've not been super active community-wise, and I don't tend to find identities, EA or otherwise, particularly useful to my work, I definitely fit all the EA definitions as outlined by CEA, use ITN, etc.
