It now seems very plausible to me that at least one actor is willing and able to create and spread deceptively realistic deepfakes (e.g. Russia sabotaging the West). Why doesn't this happen all the time? Sure, there are a few here and there, but given that some actor should be both able and willing to do it much more often, where are the deepfakes? Maybe the problem is that a random video somewhere on the internet doesn't reliably go viral, and that videos of politicians are effectively only seen and disseminated when they come from large media platforms? But that effect would have to be really strong to explain why so few deepfakes go viral.
This seems relatively important to me because it's a case of ‘as soon as enough actors have access to potentially dangerous technology X, some bad/careless actor will use it with bad consequences’. This argument is central to many x-risks. With deepfakes, I would actually expect the same, but it doesn't happen as much as I would expect. So there's some flaw in my model of how such catastrophic risks materialize, and I may be thinking about x-risks all wrong.
Fwiw, I think the "deepfakes will be a huge deal" stuff has been pretty overhyped, and the main reason we haven't seen huge negative impacts is that society already has reasonable defences against fake images, which prevent many people from being misled by them.
I don't think this applies to many other misuse-style risks that the AI x-risk community cares about.
For example, the main differences in my view between AI-enabled deepfakes and AI-enabled biorisks are:
* marginal people getting access to bioweapons is just a much bigger deal than marginal people being able to make deepfakes
* there is much less room for the price of creating deepfakes to fall than for the cost of developing a bioweapon to fall (Photoshop has existed for a long time, and the relevant expertise is relatively cheap).