
AI could potentially help democracy by acting as a less biased expert, something people could turn to if they don't trust what is coming from human experts. An AI could theoretically consume complex legislation, models, prediction market information and data, and provide an easily questioned agent that could present graphs and visualisations. This could help voters make more informed choices about important topics.
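To make the idea concrete, here is a minimal sketch of the kind of pipeline I have in mind. Everything in it is hypothetical: the `Source` type, the sample figures, and the stubbed `ingest` and `answer` functions stand in for real legislation, budget models, and prediction market feeds.

```python
from dataclasses import dataclass

@dataclass
class Source:
    name: str
    claim: str
    value: float  # e.g. a prediction-market probability or a cost estimate

def ingest() -> list[Source]:
    # Stand-in for pulling in legislation, budget models, and market data.
    return [
        Source("prediction_market", "Chance Bill X passes this session", 0.62),
        Source("budget_model", "Bill X net cost, $bn/yr", 1.4),
    ]

def answer(question: str, sources: list[Source]) -> str:
    # Stand-in for the questionable agent: every number it quotes is
    # attached to its source, so a voter can check where it came from.
    lines = [f"Q: {question}"]
    for s in sources:
        lines.append(f"  - {s.claim}: {s.value} (source: {s.name})")
    return "\n".join(lines)

print(answer("What do we know about Bill X?", ingest()))
```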

Can this be done safely and reliably? If so, can it help people make better decisions around AI safety?

Is anyone working on this idea currently?

Answers

That's an interesting question, Will. 

I think the main challenge is that being 'less biased' isn't really feasible here, because the AI would be trained on data, and that data is, by its nature, biased. You can certainly use AI to assist the democratic process by, say, checking politicians' speeches for factual accuracy in real time. But as for the AI itself being an expert in the way you mention, I'm not sure how that could be more feasible than just using a human, who has an edge over the AI via a deeper contextual understanding of the world.

For the visualisation, that's certainly feasible, but again you run into the problem that you can make statistics say just about anything you want. Stats and data are often very misleading, especially in democratic (and non-democratic, I guess) politics, and a major problem with AI is that it really struggles to detect this due to missing contextual information.

As for specific AI safety decisions, I can only answer from a policy perspective; I'll let a technical expert answer that side. For policy/governance, I think many of the important decisions around AI safety in this area require a very human understanding of complex human systems, so I can see it being useful as a research tool but not much more than that.

It's true that all data and algorithms are biased in some way. But I suppose the question is: is the bias from this less than what you get from human experts, whose pay cheques might lead them to think in a certain way?

I'd imagine that any such system would not be trusted implicitly to start with, but would have to build up a reputation for providing useful predictions.

In terms of implementation, I'm imagining people building complex models of the world, as in decision making under deep uncertainty, with the AI mainly providing a user-friendly interface for asking questions about the model.
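Something like this toy sketch, where the model, the parameter ranges, and the 'AI' layer (here just a plain function) are all invented purely for illustration:

```python
import random
from statistics import mean

def policy_model(tax_rate: float, growth: float) -> float:
    # Toy world model: projected revenue under a single assumed future.
    return tax_rate * 100.0 * (1.0 + growth)

def run_scenarios(tax_rate: float, n: int = 1000) -> list[float]:
    # The "deep uncertainty" part: rather than committing to one forecast,
    # sample many plausible futures and look at the whole range of outcomes.
    return [policy_model(tax_rate, random.uniform(-0.05, 0.08)) for _ in range(n)]

def answer_question(tax_rate: float) -> str:
    # The "AI interface" part, stubbed as a plain function: turn raw
    # scenario output into an answer a voter could interrogate further.
    outcomes = run_scenarios(tax_rate)
    return (f"At a tax rate of {tax_rate:.0%}, projected revenue ranges from "
            f"{min(outcomes):.1f} to {max(outcomes):.1f} "
            f"(mean {mean(outcomes):.1f}) across {len(outcomes)} sampled futures.")

print(answer_question(0.25))
```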

CAISID:
At best I think it would carry around the same bias as humans, but potentially much worse. As for pay cheque influences on human experts, the AI would likely lean the same way as its developer, since these systems tend to heavily reflect developer bias (the developer is the one measuring success, largely by their own metrics), so there's not much of a difference there in my opinion. I'm not saying the idea is bad, but I'm not sure it provides anything useful enough to offset its significant resource and risk costs, except when used as a data collation tool for human experts. You can use built-up trust, neutrality vetting, and careful implementation with humans too. That said, I'm just one person, a stranger on the internet. There might be people working on this who significantly disagree with me.