Experts currently treat being persuaded as reasonably good evidence that something is true: their judgment is calibrated well enough that finding an argument convincing correlates with the argument actually being correct. This lets them update readily in light of new evidence, and it's a big part of how intellectual progress happens: much of the innovation and advancement in basically every field comes down to experts taking weird new ideas seriously.
One worry I have about superpersuasive AI is that it could erode this. If a superpersuasive AI can convince experts of things regardless of whether those things are true, experts may stop treating the fact that they were persuaded as good evidence of truth, and start treating it the way laypeople do. Laypeople are typically hesitant to change their beliefs in response to new arguments, and (to some degree) rationally so: the fact that someone was able to convince a layperson of something is just not very strong evidence that it is in fact true. Experts might end up in the same position: updating only rarely, and in ways that are often unrelated to the truth.
This would be quite bad. If experts lose their capacity to reliably update on genuine evidence, intellectual progress could slow significantly (which could matter a lot for making AI go well!). This is, I think, an underappreciated argument for caring about AI for epistemics; curious what others think.