Fox Corp. created a stir in media circles on Tuesday when it announced it was launching ‘Verify’, a new blockchain-based tool for verifying the authenticity of digital media in the age of AI.
The project tackles a number of increasingly difficult problems: AI makes it easier for “deepfake” content to mislead readers, and publishers often find that their content has been used without permission to train AI models.
A cynical view might be that this is all just a big PR move. Merging “AI” and “Blockchain” into a buzzword stew to help build “trust in news” feels like great press fodder, especially if you’re an aging media conglomerate with credibility issues. We’ve all seen Succession, right?
But let’s put the irony aside for a moment and take Fox and its new tool seriously. On the deepfake side, Fox says people can load URLs and images into the Verify system to determine if they are authentic, meaning a publisher has added them to the Verify database. On the licensing side, AI companies can use the Verify database to access (and pay for) content in a compliant manner.
Blockchain Creative Labs, Fox’s in-house technology division, teamed up with Polygon, the low-cost, high-throughput blockchain that runs on top of the vast Ethereum network, to power things behind the scenes. Adding new content to Verify essentially means adding an item to a database on the Polygon blockchain, where its metadata and other information are stored.
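To make that concrete, here is a rough sketch of what one such record might contain. This is purely illustrative Python, not Fox’s actual schema or smart-contract interface (which the company hasn’t published in this level of detail), and every field name below is an assumption:

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class VerifyRecord:
    """Hypothetical shape of one Verify entry; field names are assumptions."""
    url: str
    content_hash: str    # fingerprint of the raw article or image bytes
    publisher: str
    license_terms: str   # pointer to the publisher's licensing conditions

def make_record(url: str, content_bytes: bytes, publisher: str, license_terms: str) -> VerifyRecord:
    # Hash the raw bytes so anyone can later check that a piece of content
    # matches the record written to the chain.
    digest = hashlib.sha256(content_bytes).hexdigest()
    return VerifyRecord(url, digest, publisher, license_terms)

record = make_record(
    url="https://www.foxnews.com/example-article",
    content_bytes=b"<html>...article body...</html>",
    publisher="Fox News",
    license_terms="see publisher for AI training terms",
)

# In the real system, something like this payload would be written to Polygon;
# here we just print it.
print(json.dumps(asdict(record), indent=2))
```

The important part is the fingerprint: once it’s on the chain, anyone can check whether the copy of the content they’re holding matches what the publisher registered.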
Unlike so many other crypto experiments, this time around, the blockchain tie-in may have a point: Polygon gives content on Verify an immutable audit trail, and it ensures that third-party publishers don’t have to trust Fox to manage their data.
In its current state, Verify feels a bit like a glorified database checker, a simple web app that uses Polygon’s technology to track images and URLs. But that doesn’t mean it’s useless – especially when it comes to helping legacy publishers navigate licensing deals in the world of large language models.
Verify for consumers
We went ahead and uploaded some content to Verify’s web app to see how well it works in everyday use, and it didn’t take long to notice the app’s limitations for consumer use.
The Verify app has a text input box for URLs. When we inserted Tuesday’s Fox News article about Elon Musk and deepfakes (which happened to be prominently featured on the site) and pressed enter, a bunch of information appeared confirming the article’s provenance. Along with a transaction hash and signature – data for the Polygon blockchain transaction that represents the piece of content – the Verify app also showed the article’s associated metadata, licensing information, and a series of images that appear within the content.
We then downloaded one of those images and re-uploaded it to the tool to see if it could be verified. When we did that, we were presented with similar data to what we saw when we entered the URL. (When we tried a different image, we could also click a link to see other Fox articles that used the image. Cool!)
While Verify accomplished these simple tasks as advertised, it’s hard to imagine that many people will need to “verify” the source of the content they pulled directly from the Fox News website.
In the documentation, Verify suggests that a potential user of the service could be someone who comes across an article on social media and wants to find out whether it actually comes from its purported source. When we ran Verify through this real-world scenario, we encountered issues.
We found an official Fox News post on X (the platform formerly known as Twitter) containing the same article we originally verified. We then grabbed the thumbnail image from that post, a screenshot of one of the same Fox images we uploaded earlier, and ran it through the tool. This time we were told the image could not be verified. It turns out that if an image is altered in any way (including slightly cropped thumbnails or screenshots that aren’t pixel-for-pixel identical), the Verify app can’t recognize it.
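That behavior is what you’d expect if Verify matches images by an exact cryptographic fingerprint rather than a perceptual one (an assumption on our part, since Fox hasn’t detailed its matching method): change even a single byte and the hash no longer lines up.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # An exact cryptographic hash: any change to the input yields an unrelated digest.
    return hashlib.sha256(data).hexdigest()

original = b"pretend these bytes are a Fox News image"
cropped = original[:-1]  # stand-in for a cropped, resized or re-encoded copy

print(fingerprint(original))
print(fingerprint(cropped))
print(fingerprint(original) == fingerprint(cropped))  # False: no match, so no verification
```

Matching near-duplicates such as crops, screenshots and recompressed thumbnails generally calls for perceptual hashing, which tolerates small changes; that is presumably the kind of improvement Fox would need before the consumer use case holds up.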
Some of these technical shortcomings will certainly be resolved, but there are even more complicated technical issues that Fox will face as it tries to help consumers sort through AI-generated content.
Even if Verify works as advertised, it can’t tell you whether content is AI-generated – only that it came from Fox (or whatever other source uploaded it, assuming other publishers use Verify in the future). If the goal is to help consumers distinguish AI-generated content from human-made content, this won’t help. Even trusted news outlets like Sports Illustrated have become embroiled in controversy over their use of AI-generated content.
Then there is the problem of user apathy. People tend not to worry much about whether what they read is true, as Fox undoubtedly knows. This is especially the case when people want something to be true.
For something like Verify to be useful to consumers, I think it would have to be built directly into the tools people use to view content, such as web browsers and social media platforms. You could imagine some kind of badge, à la Community Notes, that would appear on content added to the Verify database.
Verify for publishers
It feels unfair to judge Verify by this barebones version, as Fox was quite proactive in labeling it a beta. Fox also isn’t targeting only general media consumers, the role we played in our testing.
Fox’s partner, Polygon, said in a press release shared with CoinDesk that “Verify provides a technical bridge between media companies and AI platforms” and has additional features to “help create new commercial opportunities for content owners by using smart contracts to set programmatic terms and conditions to access the content.”
While the details here are somewhat vague, the idea seems to be that Verify will serve as a kind of global database for AI platforms that scour the internet for news content – giving AI platforms a way to source verified, authentic content, and giving publishers a way to ‘gate’ their content behind licensing restrictions and paywalls.
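Polygon’s press release doesn’t spell out what those “programmatic terms and conditions” look like, but the basic flow is easy to imagine. The sketch below is a hypothetical, simplified illustration in Python, not Verify’s actual contract logic; the registry, field names and prices are all made up:

```python
from dataclasses import dataclass

@dataclass
class LicenseTerms:
    """Hypothetical terms a publisher might attach to a piece of content."""
    price_per_access_usd: float
    ai_training_allowed: bool

# Stand-in for an on-chain registry mapping content fingerprints to terms.
REGISTRY = {
    "abc123": LicenseTerms(price_per_access_usd=0.05, ai_training_allowed=True),
    "def456": LicenseTerms(price_per_access_usd=0.00, ai_training_allowed=False),
}

def request_access(content_hash: str, purpose: str, payment_usd: float) -> bool:
    """Return True if an AI platform's request satisfies the publisher's terms."""
    terms = REGISTRY.get(content_hash)
    if terms is None:
        return False  # content not registered, so no programmatic license to grant
    if purpose == "ai_training" and not terms.ai_training_allowed:
        return False
    return payment_usd >= terms.price_per_access_usd

print(request_access("abc123", "ai_training", payment_usd=0.05))  # True
print(request_access("def456", "ai_training", payment_usd=1.00))  # False: training not allowed
```

In the real system, the registry and the access check would presumably live in smart contracts on Polygon and payments would settle on-chain, but the commercial idea is the same: publishers post machine-readable terms, and AI platforms pay to pull content that complies with them.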
Verify would probably need the support of a critical mass of publishers and AI companies to make this kind of thing work; for now, the database only contains about 90,000 articles from Fox-owned publishers, including Fox News and Fox Sports. The company says it has opened the door for other publishers to add content to the Verify database, and it has made its code open source for those looking to build new platforms on top of the technology.
Even in its current state, Verify’s licensing use case seems like a solid idea – especially in light of the thorny legal questions publishers and AI companies are currently grappling with.
In a recently filed lawsuit against OpenAI and Microsoft, the New York Times has alleged that its content was used without permission to train AI models. Verify could provide a standard framework for AI companies to access online content, giving news publishers more leverage in their negotiations with AI companies.