I have a question regarding possible donation opportunities in AI. From my understanding, AI research is not underfunded in general, and AI safety research is mostly focused on the long-term risks of AI. In that light, I am very curious what you think about the following.
I received a question from someone who is worried about the short-term risks of AI. His argument runs along these lines: we currently observe a serious destabilization of society and democracy caused by social media algorithms. A lot has been written about this over the past months, e.g. that it contributes to a further rise of populist parties. These parties are often against additional climate change measures, against effective global cooperation on other pressing problems, and more aggressive on international security. In this way, polarization through social media algorithms could increase potential short-term X-risks such as climate change, nuclear war, and even biorisk and AI risk.
Could you answer the following questions?
- Do you think these short-term risks of AI are somewhat neglected within the EA community?
- Are there any concrete charities you deem effective at countering these AI risks, e.g. by making citizens more resilient to misinformation?
- What do you think about the widely hailed Center for Humane Technology?
Thank you all in advance for your responses!