Industries everywhere are asking, “What can AI do for us?”
But the blockchain industry, never shy about challenging conventional wisdom, is also asking the opposite question: “What can blockchain do for AI?”
While there are some compelling answers, three narratives have emerged around this question that are often misleading and, in one case, possibly even dangerous.
Story #1: Blockchain can combat misinformation caused by generative AI
A panel of experts at a recent Coinbase event concluded that “blockchain can counter disinformation with cryptographic digital signatures and timestamps, making it clear what is authentic and what has been manipulated.”
This only applies in a very narrow sense.
Blockchains can record the creation of digital media in a tamper-proof way, i.e., so that any modification of a specific image is detectable. But that is far from establishing authenticity.
Consider a photo of a flying saucer hovering over the Washington Monument. Suppose someone recorded its creation in, say, block 20,000,000 of the Ethereum blockchain. This fact tells you one thing: the flying saucer image was taken before block 20,000,000. Furthermore, whoever posted the image on the blockchain – let’s call her Alice – did so by digitally signing a transaction. Assuming Alice’s signature key isn’t stolen, it’s clear that Alice registered the photo on the blockchain.
However, none of this tells you how the image was created. It could be a photo that Alice took with her own camera. Or Alice might have gotten the image from Bob, who Photoshopped it. Or maybe Carol created it with a generative AI tool. In short, the blockchain doesn’t tell you anything about whether aliens are touring Washington, D.C., unless you trust Alice to begin with.
Some cameras can digitally sign photos to authenticate them (assuming their sensors can’t be fooled, which is a big ask), but this isn’t blockchain technology.
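The limitation above can be made concrete. A minimal sketch (a toy append-only ledger standing in for a real blockchain; all names here are illustrative) shows that registering an image’s hash proves only that the image existed before a given block, and that later tampering is detectable, while saying nothing about provenance:

```python
import hashlib

# Toy ledger standing in for a blockchain: an append-only list of
# (block_number, sha256_hex) entries. Illustrative only.
ledger = []

def register(image_bytes: bytes) -> int:
    """Record the image's hash on the toy ledger; return its 'block number'."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    block_number = len(ledger)
    ledger.append((block_number, digest))
    return block_number

def existed_before(image_bytes: bytes, block_number: int) -> bool:
    """Check that exactly these bytes were registered at or before block_number."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return any(n <= block_number and d == digest for n, d in ledger)

saucer_photo = b"...image bytes, origin unknown..."
n = register(saucer_photo)

# The ledger proves existence-before-a-point and detects modification...
assert existed_before(saucer_photo, n)
assert not existed_before(saucer_photo + b"edited", n)

# ...but nothing in it says whether the bytes came from a camera,
# Photoshop, or a generative AI model.
```

The verification succeeds for the exact registered bytes and fails for any altered copy, yet neither outcome reveals how the original image was produced.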
Story #2: Blockchain can add privacy to AI
Model training consumes vast amounts of data, much of it sensitive, and some proponents trumpet blockchain technologies as a privacy solution.
However, blockchains are designed for transparency – a property that is at odds with confidentiality.
Advocates point to privacy-enhancing technologies developed by the blockchain industry to address this tension – particularly “zero-knowledge proofs.” However, zero-knowledge proofs do not solve the privacy problem in AI model training. That’s because a zero-knowledge proof hides no secrets from the person constructing the proof. Zero-knowledge proofs are useful when I want to hide my transaction data from you. But they don’t make it possible for me to compute privately over your data.
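A toy hash commitment (not a real zero-knowledge proof, but it shares the relevant property) illustrates the point: the party producing the proof necessarily holds the secret in the clear, so this machinery cannot let someone else compute over that secret privately. All values here are made up for illustration:

```python
import hashlib
import secrets

# Alice commits to her transaction amount without revealing it to Bob.
# To build the commitment, she must of course know the amount herself.
amount = 1337                    # Alice's private data, plaintext on her side
nonce = secrets.token_bytes(16)  # blinding factor so the commitment hides the value

commitment = hashlib.sha256(str(amount).encode() + nonce).hexdigest()

def verify(claimed_amount: int, nonce: bytes, commitment: str) -> bool:
    """Bob checks a later 'opening' of the commitment."""
    return hashlib.sha256(str(claimed_amount).encode() + nonce).hexdigest() == commitment

assert verify(amount, nonce, commitment)           # honest opening checks out
assert not verify(amount + 1, nonce, commitment)   # a different value does not

# Note what this does NOT provide: any way for Bob to train a model on
# Alice's amount without Alice processing it in plaintext herself.
```

The commitment hides Alice’s data from Bob, exactly as zero-knowledge tools are meant to, but the proving side always sees its own data, which is why these tools don’t enable private training on someone else’s data.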
There are other, more relevant cryptographic and security tools with esoteric names, including fully homomorphic encryption (FHE), secure multiparty computation (MPC), and secure enclaves. These could in principle support privacy-preserving AI (particularly ‘federated learning’). However, they all have important caveats. And it would be a stretch to claim them as blockchain-specific technologies.
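By contrast, secure multiparty computation does let parties compute jointly on data none of them sees in full. A minimal sketch of two-party additive secret sharing (illustrative only, not a hardened protocol; the modulus and scenario are assumptions):

```python
import secrets

P = 2**61 - 1  # a prime modulus; the field choice here is illustrative

def share(value: int) -> tuple[int, int]:
    """Split a value into two additive shares, each random-looking on its own."""
    r = secrets.randbelow(P)
    return r, (value - r) % P

# Two hospitals each split a private patient count between two servers.
a1, a2 = share(120)  # hospital A's private input
b1, b2 = share(80)   # hospital B's private input

# Each server adds the shares it holds, seeing only random-looking numbers.
s1 = (a1 + b1) % P
s2 = (a2 + b2) % P

# Recombining the servers' partial results yields the joint sum -- and
# nothing else about either hospital's input.
assert (s1 + s2) % P == 200
```

Even this toy version hints at the caveats: real MPC protocols must also handle multiplication, malicious parties and communication costs, which is where the practical difficulties lie.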
Story #3: Blockchains can provide AI bots with funding – and that’s a good thing
Circle CEO Jeremy Allaire has noted that bots are already transacting using cryptocurrency and tweeted that “AI and Blockchains were made for each other.” This is true in the sense that cryptocurrency is a good match for the capabilities of AI agents. But it is also worrying.
Many people worry about AI agents escaping human control. Classic nightmare scenarios include autonomous vehicles killing people or AI-powered autonomous weapons going rogue. But there is another escape vector: the financial system. Money equals power. Give that power to an AI agent and it can do real damage.
This problem is the subject of a research paper I co-authored in 2015–16. My colleagues and I investigated the possibility that smart contracts – programs that autonomously mediate transactions on Ethereum – could be used to facilitate crime. Using the techniques in that paper plus a blockchain oracle system with access to large language models (LLMs) like ChatGPT, bad actors could in principle launch “rogue” smart contracts that automatically pay bounties for committing serious crimes.
Read more in our opinion section: How a Smart Contract Gets Away with Murder: A Review of ‘The Oracle’
Fortunately, these types of rogue smart contracts are not yet possible on current blockchains – but the blockchain industry and crypto enthusiasts will need to take AI safety seriously as a future concern. They will need to consider mitigations such as community-driven interventions or oracle guardrails to help enforce it.
The integration of blockchains and AI holds clear promise. AI can add unprecedented flexibility to blockchain systems by giving them natural language interfaces. Blockchains can provide new, transparent financial frameworks for model training and data sourcing, putting the power of AI in the hands of communities, not just companies.
This integration is still in its infancy, though. As we wax lyrical about AI and blockchain as a tantalizing mix of buzzwords and technologies, we need to think the combination through clearly – and then see it through.
Ari Juels is the Weill Family Foundation and Joan and Sanford I. Weill Professor at the Jacobs Technion-Cornell Institute at Cornell Tech and the Technion and a member of the Computer Science faculty at Cornell University. He is co-director of the Initiative for CryptoCurrencies and Contracts (IC3). He is also chief scientist at Chainlink Labs. He is the author of the crypto thriller novel The Oracle (Talos Press), which was released on February 20, 2024.