I have a question regarding possible donation opportunities in AI. From my understanding, research in AI is not underfunded in general, and AI safety research is mostly focused on the long-term risks of AI. In that light, I am very curious what you think about the following.
I received a question from someone who is worried about the short-term risks coming from AI. His arguments are along the lines of: we currently observe a serious destabilization of society and democracy caused by social media algorithms. Over the past months a lot has been written about this, e.g. that it contributes to a further rise of populist parties. These parties are often against additional climate change measures, against effective global cooperation on other pressing problems, and more aggressive on international security. In this way, polarization through social media algorithms could increase potential short-term X-risks like climate change, nuclear war, and even biorisks and AI.
Could you answer the following questions?
- Do you think that these short-term risks of AI are somewhat neglected within the EA community?
- Are there any concrete charities we deem effective at countering these AI risks, e.g. by making citizens more resilient to misinformation?
- What do we think about the widely hailed Center for Humane Technology?
Thank you all for the response!
A couple of resources that may be of interest here:
- The work of Aviv Ovadya of the Thoughtful Technology Project; I don't think he's an EA (he may be, but it hasn't come up in my discussions with him): https://aviv.me/
- CSER's recent report with the Alan Turing Institute and DSTL, which isn't specific to AI and social media algorithms, but addresses these and other issues in crisis response:
"Tackling threats to informed decisionmaking in democratic societies"
https://www.turing.ac.uk/sites/default/files/2020-10/epistemic-security-report_final.pdf
- Recommendations for reducing malicious use of machine learning in synthetic media (Thoughtful Technology Project's Aviv Ovadya and CFI's Jess Whittlestone)
https://arxiv.org/pdf/1907.11274.pdf
- And a short review of some recent research on online targeting harms by CFI researchers
https://www.repository.cam.ac.uk/bitstream/handle/1810/296167/CDEI%20Submission%20on%20Targeting%202019.pdf?sequence=1&isAllowed=y
@Sean_o_h, just seeing this now while searching for my name on the forum, actually to find a talk I did for an EA community! Thanks for the shoutout.
For context, while I've not been super active community-wise, and I don't find identities, EA or otherwise, particularly useful to my work, I definitely e.g. fit all the EA definitions as outlined by CEA, use ITN, etc.