Deepfake videos featuring Apple CEO Tim Cook recently flooded YouTube, coinciding with the company’s ‘Glowtime’ event. These videos, designed to trick users into investing in cryptocurrencies, were created using advanced AI tools that replicated Cook’s likeness and voice.
The live streams appeared on a YouTube channel that looked almost identical to Apple’s official channel, complete with a fake verification badge, making it difficult for viewers to distinguish them from genuine content. After a flood of reports from concerned users, YouTube quickly removed the videos.
According to reports, the AI-generated Tim Cook promoted a “get rich quick” scheme during the livestream, saying,
“Once you complete your deposit, the system will automatically process it and return double the amount of cryptocurrency you deposited.”
Tim Cook is joining Elon Musk’s bandwagon
This is not the first case of deepfake technology being misused to promote crypto scams. AI-generated videos of Elon Musk, for example, have previously been used in a similar way, exploiting his public persona to convince unsuspecting viewers to invest in fraudulent crypto schemes.
It’s worth pointing out that Tesla’s CEO is no stranger to these concerns. He was recently named in a lawsuit alleging that he artificially inflated the value of Dogecoin, the largest memecoin on the market. Although the suit was quickly dismissed, AI-generated versions of Musk mean his likeness is regularly attached to real-world scams.
At the time, AMBCrypto’s report quoted Musk’s lawyers as saying:
“There is nothing illegal about tweeting messages of support for, or funny photos about, a legitimate cryptocurrency.”
What do social media platforms do?
Social media platforms such as YouTube and Twitter have been working to combat such scams. YouTube pairs machine-learning detection with manual review to find and remove fake content, while Twitter uses AI models to flag suspicious activity and suspend accounts that promote fraudulent schemes. Both platforms also lean heavily on user reports to surface fraudulent content that automated systems miss.
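Neither platform discloses how its systems work, but the triage described above resembles a simple signals-plus-thresholds pipeline. The Python sketch below is purely illustrative: the signal names (deepfake_score, user_reports) and thresholds are invented for the example, and real systems are far more sophisticated. It flags a stream for human review when either an automated model score or a surge of viewer reports crosses a threshold.

```python
# Illustrative sketch only: neither YouTube's nor Twitter's actual detection
# systems are public. This toy pipeline combines a hypothetical model score
# with user-report counts to decide whether a stream needs manual review.
from dataclasses import dataclass

@dataclass
class StreamSignals:
    deepfake_score: float   # hypothetical classifier output in [0, 1]
    user_reports: int       # number of viewer reports received
    channel_verified: bool  # whether the channel is genuinely verified

def needs_review(s: StreamSignals,
                 score_threshold: float = 0.8,
                 report_threshold: int = 10) -> bool:
    """Flag a stream for human review when automated or crowd signals fire."""
    if s.deepfake_score >= score_threshold:
        return True
    # Many reports against an unverified channel is a strong fraud signal.
    if s.user_reports >= report_threshold and not s.channel_verified:
        return True
    return False

# Example: a spoofed "Apple" stream with a high model score and many reports.
print(needs_review(StreamSignals(deepfake_score=0.92,
                                 user_reports=250,
                                 channel_verified=False)))  # True
```

In practice the decision logic would be a trained model rather than fixed thresholds, but the combination of automated scoring and user reports mirrors the two signal sources the platforms describe.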
The rise of deepfake fraud underlines the urgent need for improved digital literacy and security measures. As these fraudulent tactics become more sophisticated, users must remain vigilant and platforms must continue to develop better detection tools to protect their communities from financial harm.
At the same time, governments and regulators around the world are considering new policies and technologies to address the growing risks posed by deepfake media and crypto scams. Staying ahead of cybercriminals, who constantly adapt their tactics to exploit new vulnerabilities, will require a coordinated effort from technology companies, users, and regulators.