As AI continues to work its way into our daily lives, it’s hard not to see the impact it’s already having on almost every sector. Within the financial sector, for example, AI enables smarter investments, analyzes market trends and predicts stock performance, ultimately helping individuals and institutions make more informed business decisions.
While most developments in AI are exciting and continue to advance several industries, there are also bad actors misusing the technology for nefarious purposes. With generative AI, “deepfakes” have been identified as one of the biggest risks that individuals and organizations need to be aware of.
Deepfakes are highly realistic digital forgeries produced with AI to manipulate or generate visual and/or audio content. A deepfake could involve, for example, an AI-generated video in which a celebrity takes actions or makes statements that never actually happened, as when comedian Jordan Peele created a deepfake of Barack Obama to demonstrate the threat the technology could pose.
While we may default to believing what we see, this type of faked or misleading AI-generated content is becoming increasingly common. Between 2022 and the first half of 2023, the share of deepfakes among online content in the U.S. increased nearly thirteen-fold, from 0.2% to 2.6%, according to a recent report from Sumsub Research.
Experts are already concerned that deepfakes could be used to sway public opinion or influence major events such as elections, with bad actors trying to use AI to impersonate elected officials. Some are “completely terrified” that the upcoming presidential race will be a “tsunami of disinformation,” heavily driven by deepfakes and misleading AI-generated content, according to another recent report. Many see deepfakes’ ability to blur the lines between truth and fiction as a fundamental threat to democracies and fair elections around the world.
So how can we – as a society – limit the prevalence and risks of deepfakes, as well as similar risks that may arise as generative AI only becomes more sophisticated?
Blockchains could be the critical technology we need to tackle this problem. At their core, public blockchains, such as Ethereum, have several key features that make them uniquely positioned to establish the authenticity of content and information. These include blockchain’s inherent transparency, its decentralized nature, and its focus on network security and immutability.
For those unfamiliar, a public blockchain transparently records information in a timestamped, append-only manner, accessible to everyone, worldwide, and without gatekeepers. This allows anyone to verify the validity of information, such as its creator or a timestamp, making it a source of truth. Public blockchains are also decentralized, eliminating the need for a central decision maker and reducing the risk of manipulation. This decentralized structure also provides high network security by eliminating single points of failure and ensuring an immutable and tamper-proof record.
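The verification idea behind this can be sketched in a few lines of Python: a piece of content is fingerprinted with a cryptographic hash and registered alongside its creator and a timestamp, so any altered copy produces a different hash and fails the lookup. This is only a minimal illustration, with an in-memory list standing in for the role an actual public blockchain would play; the record fields and function names are assumptions for the sketch, not any real protocol's API.

```python
import hashlib
import time

# Hypothetical in-memory "ledger": an append-only list of records, standing in
# for a public blockchain in this sketch.
ledger = []

def register_content(data: bytes, creator: str) -> dict:
    """Record a content fingerprint (SHA-256 hash), creator, and timestamp."""
    record = {
        "sha256": hashlib.sha256(data).hexdigest(),
        "creator": creator,
        "timestamp": int(time.time()),
    }
    ledger.append(record)  # on a real chain this would be a signed transaction
    return record

def verify_content(data: bytes):
    """Return the earliest matching record, or None if the content is unregistered."""
    digest = hashlib.sha256(data).hexdigest()
    for record in ledger:
        if record["sha256"] == digest:
            return record
    return None

original = b"official campaign video bytes"
register_content(original, creator="campaign-press-office")

print(verify_content(original))                    # authentic copy: record found
print(verify_content(b"deepfaked video bytes"))    # altered content: None
```

Because even a one-bit change to the content changes the hash entirely, a manipulated video would never match the registered record, which is the same property that lets NFT marketplaces distinguish an original work from a copy.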
Furthermore, blockchains have already demonstrated their ability to authenticate content. For example, with digital art as non-fungible tokens (NFTs), blockchain technology allows anyone to verify the creator and owner of a work of art, allowing us to distinguish between the original and its potential replicas. This transparency and authentication potential extends to videos, images and text, providing important foundations for developers to create solutions and tools aimed at combating deepfakes, such as Worldcoin (co-founded by OpenAI CEO Sam Altman), Irys and Numbers Protocol.
As AI’s impact on society grows, AI-generated content and deepfakes will only become more prominent. Harvard experts are already predicting that more than 90% of online content will be generated by AI in the future. To protect ourselves from threats such as deepfakes, it is crucial that we get ahead of the problem and implement innovative solutions. Public blockchains, which are collectively owned and controlled by users, offer promising features such as network security, transparency and decentralization that could help combat the problems deepfakes pose.
However, much of the ongoing work is still in its early stages and challenges remain in the technical development and widespread adoption of blockchain-related protocols. While there is no quick fix, we must remain committed to shaping a future that maintains truth, integrity, and transparency as our society navigates these emerging technologies (and the risks they bring) together.
Note: The views expressed in this column are those of the author and do not necessarily reflect those of CoinDesk, Inc. or its owners and affiliates.