Quick question re. the intersection of EA, AI, and crypto: 

Apart from the concept of an 'AI Oracle' (from Nick Bostrom), has anyone in EA written about a quite different kind of oracle: oracle protocols (e.g. Chainlink) in the crypto industry? I'm interested in oracle protocols as a possible tool for AI systems to get reliably 'aligned' -- not with human values, in this case, but with what's really going on out there in the real world -- through highly reliable, cryptographically secure, consensus-based data inputs.

The concepts of 'definitive truth' and 'cryptographic truth' from Chainlink founder Sergey Nazarov seem potentially relevant to helping AI systems get reliable, hard-to-fake, high-security input data from real-world sources. A Lex Fridman podcast interview with Nazarov is here. More on this topic here and here.

As epistemically humble folks, we should probably be concerned about the reliability of inputs to AI decision-making -- especially given the very high incentives (financial, military, political, etc.) for biasing AI decisions by feeding them false, partial, or misleading data.


As someone who previously worked in the blockchain space, I would like to point out that the state of the art in blockchain oracles (including Chainlink) is not significantly better than the following:

"Create a company whose goal is to output honest info, split equity of this company among a small group of trustworthy participants, hope that risk of equity losing value (+ reputation + legal risk) is enough for most of the participants to provide honest data, have the company output some aggregation function of the data provided by all participants. People who want honest data pay for the data, which makes the equity valuable."

Now instead of equity you use a cryptocurrency that plays the same role as equity.
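To make that concrete, here's a minimal sketch of the aggregate-and-penalise pattern (toy Python; the names and parameters like SLASH_FRACTION are made up, and this is not how Chainlink actually implements anything): participants put value at stake, report data, a robust aggregate becomes the oracle's output, and reporters who deviate far from the consensus lose part of their stake.

```python
from dataclasses import dataclass
from statistics import median

# Hypothetical parameters -- not taken from any real protocol.
SLASH_FRACTION = 0.5        # fraction of stake lost for a deviant report
DEVIATION_TOLERANCE = 0.02  # 2% band around the consensus value

@dataclass
class Report:
    reporter: str
    stake: float  # value at risk (equity, tokens, ...)
    value: float  # the data point this reporter claims is true

def aggregate(reports: list[Report]) -> float:
    """Consensus output of the oracle. A stake-weighted median is the
    common robust choice; this toy uses a plain median for brevity."""
    return median(r.value for r in reports)

def settle(reports: list[Report]) -> dict[str, float]:
    """Each reporter's stake after settlement: reports far from the
    consensus are slashed. Losing stake substitutes for losing equity."""
    consensus = aggregate(reports)
    remaining = {}
    for r in reports:
        deviation = abs(r.value - consensus) / max(abs(consensus), 1e-9)
        slashed = deviation > DEVIATION_TOLERANCE
        remaining[r.reporter] = r.stake * (1 - SLASH_FRACTION if slashed else 1)
    return remaining

reports = [
    Report("alice",   stake=100, value=42.0),
    Report("bob",     stake=100, value=41.9),
    Report("mallory", stake=100, value=55.0),  # attempted manipulation
]
print(aggregate(reports))  # 42.0 -- the outlier cannot move the median
print(settle(reports))     # mallory loses half her stake
```

The median is what bounds a single dishonest reporter's influence; with stake weighting, an attacker would need a majority of the stake rather than a majority of identities.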

There are other less popular (and arguably less practical) approaches such as SchellingCoin.
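For reference, SchellingCoin (Vitalik Buterin's proposal) rewards participants whose submissions land near everyone else's, on the theory that the truth is the natural Schelling point to coordinate on. A toy version of the payout rule, again with invented parameters:

```python
from statistics import quantiles

def schelling_payouts(submissions: dict[str, float],
                      reward: float = 1.0) -> dict[str, float]:
    """Toy SchellingCoin payout rule: submissions inside the interquartile
    range are rewarded, outliers get nothing. Truth-telling is the focal
    strategy if each player expects the others to report the truth."""
    q1, _, q3 = quantiles(sorted(submissions.values()), n=4)
    return {who: (reward if q1 <= v <= q3 else 0.0)
            for who, v in submissions.items()}

# 'a' and 'b' earn the reward; the manipulator 'd' gets nothing. Note that
# 'c' (honest but at the boundary) also misses out -- quartile cutoffs are
# a known rough edge of this toy rule.
print(schelling_payouts({"a": 10.0, "b": 10.1, "c": 9.9, "d": 30.0}))
```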


The real world in general has better mechanisms to ensure trustworthy data is produced. For instance, bureaucracies such as those inside companies like Google make it hard for a small group of people within Google to collude and poison data. We also have judiciaries that can penalise dishonest actors, and judiciaries themselves have many mechanisms to guard against corruption.


Another issue that is unique to data poisoning attacks on AI is the sheer volume of data to be processed. Most of the time it is not possible for even one human to go through the data, let alone for a large number of people to go through it and form a consensus. Detecting data poisoning attacks can itself be AI-assisted. If you do have multiple participants willing to attest to the trustworthiness of a dataset, perhaps you could use some oracle-like scheme where the participants all have something at stake if the dataset is later found to be untrustworthy.
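A minimal sketch of that staked-attestation idea (hypothetical Python, not any existing protocol): attestations bind to a content hash of the dataset, and every attester's stake is forfeited if a dispute process later rules the dataset poisoned.

```python
import hashlib

class AttestationRegistry:
    """Toy registry: participants bond a stake to vouch for a dataset,
    identified by its content hash. If the dataset is later judged
    poisoned (by whatever dispute process exists), every attester
    forfeits their stake."""

    def __init__(self) -> None:
        # dataset hash -> {attester: staked amount}
        self.attestations: dict[str, dict[str, float]] = {}

    @staticmethod
    def dataset_id(data: bytes) -> str:
        # Content-addressing: the attestation binds to these exact bytes,
        # so any later tampering changes the id and voids the attestation.
        return hashlib.sha256(data).hexdigest()

    def attest(self, data: bytes, attester: str, stake: float) -> str:
        ds = self.dataset_id(data)
        self.attestations.setdefault(ds, {})[attester] = stake
        return ds

    def resolve(self, ds: str, poisoned: bool) -> dict[str, float]:
        """Stake payouts once the dataset's status has been decided."""
        stakes = self.attestations.pop(ds, {})
        if poisoned:
            return {who: 0.0 for who in stakes}  # everyone is slashed
        return stakes  # stakes returned in full

registry = AttestationRegistry()
ds = registry.attest(b"training-data-v1", "lab_a", stake=10.0)
registry.attest(b"training-data-v1", "lab_b", stake=10.0)
print(registry.resolve(ds, poisoned=False))  # both stakes returned
```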


You may be interested in reading more on data poisoning attacks in general; it is an emerging area.

This is a helpful comment; thanks. 

I'm also somewhat skeptical about whether Chainlink and other oracle protocols can really maximize the reliability of data through their economic incentive models, but at least they seem to be taking the game-theoretic issues seriously. 

But then, I'm also very skeptical about the reliability of a lot of real-world data from institutions that also have incentives to misrepresent, overlook, or censor certain kinds of information (Google search results being a prime example).

I take your point about the difficulty of scaling any kind of data reliability checks that rely on a human judgment bottleneck, and the important role that AIs might play in helping with that.

Thanks for the suggestion about looking at data poisoning attacks!

Thanks for the reply!

But then, I'm also very skeptical about the reliability of a lot of real-world data from institutions that also have incentives to misrepresent, overlook, or censor certain kinds of information (Google search results being a prime example).

This is fair.

An association of AI orgs across the world (cryptographically?) attesting to datasets may be an improvement.
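To illustrate what that might look like (a toy sketch, not a proposal for any existing system): each org could sign a hash of the dataset with its own key, and a consumer accepts the dataset only if a quorum of known orgs has validly signed it. This uses the third-party Python 'cryptography' package; the quorum rule and all names here are invented.

```python
# Requires the third-party 'cryptography' package (pip install cryptography).
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey,
)

def dataset_digest(data: bytes) -> bytes:
    # Orgs sign the dataset's hash rather than the (large) dataset itself.
    return hashlib.sha256(data).digest()

def attest(org_key: Ed25519PrivateKey, data: bytes) -> bytes:
    return org_key.sign(dataset_digest(data))

def quorum_verified(data: bytes,
                    signatures: list[tuple[Ed25519PublicKey, bytes]],
                    quorum: int) -> bool:
    """True if at least `quorum` known orgs validly signed this dataset."""
    digest = dataset_digest(data)
    valid = 0
    for pub, sig in signatures:
        try:
            pub.verify(sig, digest)
            valid += 1
        except InvalidSignature:
            pass
    return valid >= quorum

orgs = [Ed25519PrivateKey.generate() for _ in range(3)]
data = b"training-data-v1"
sigs = [(k.public_key(), attest(k, data)) for k in orgs]
print(quorum_verified(data, sigs, quorum=2))         # True
print(quorum_verified(b"tampered", sigs, quorum=2))  # False -- digest changed
```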

I'm also very much interested in this topic, though I've not done any formal research and haven't come across any specific writing on the subject in the EA community. There are, however, a handful of AI-related projects in the crypto world that may be doing something like this. One of the most notable is Fetch.ai (more or less a blockchain with AI capabilities), which uses the Chainlink oracle to obtain real-world economic data that its AI algorithms use to drive processes on its blockchain. More about this here.

Thanks for this suggestion about Fetch.ai; I'd vaguely heard of them, but wasn't sure what they were up to. 

I know that SingularityNET (by Ben Goertzel) is building some kind of AI blockchainy thing on the Cardano protocol, but I haven't ever understood quite how it works.