Tech giants pledge to combat election-related deepfakes as policymakers increase pressure.
Today at the Munich Security Conference, a group of leading tech companies, including Microsoft, Meta, Google, Amazon, Adobe, and IBM, signed an accord committing them to a common framework for responding to AI-generated deepfakes designed to deceive voters. Thirteen additional companies joined in signing, among them AI startups OpenAI, Anthropic, Inflection AI, ElevenLabs, and Stability AI, and social media platforms X (formerly Twitter), TikTok, and Snap, along with chipmaker Arm and security firms McAfee and TrendMicro.
The signatories agreed to develop methods to detect and label misleading political deepfakes on their platforms, to share best practices with one another, and to respond swiftly and proportionately when such deepfakes begin to spread. They also pledged to weigh context when acting on deepfakes, so as to preserve educational, documentary, artistic, satirical, and political expression, and to remain transparent with users about their policies on deceptive election content.
Critics may dismiss the accord as merely symbolic, since the measures it outlines are entirely voluntary. Still, the agreement underscores the tech sector's eagerness to address regulatory concerns around elections, especially in a year when 49% of the world's population is slated to vote in national elections.
Brad Smith, vice chair and president of Microsoft, emphasized that no single company can safeguard elections on its own and that protecting them demands a collaborative, multistakeholder effort.
There is no federal law in the U.S. that specifically prohibits deepfakes, but several states have criminalized them, with Minnesota's statute being the first to target deepfakes used in political campaigning.
Federal agencies, meanwhile, have taken enforcement action against the proliferation of deepfakes in other domains. The FTC is seeking to extend an existing rule that prohibits the impersonation of businesses and government agencies so that it also covers individuals, including politicians. The FCC, for its part, has moved to outlaw AI-voiced robocalls by reinterpreting a rule that bans artificial and prerecorded voice message spam.
Internationally, the European Union’s AI Act seeks to mandate clear labeling of all AI-generated content, and the Digital Services Act aims to compel the tech industry to curb deepfakes in various forms.
Despite these efforts, deepfakes continue to proliferate. Data from Clarity, a deepfake detection firm, shows a 900% year-over-year increase in the number of deepfakes created.
Public concern about misleading video and audio deepfakes is high: polls indicate that a majority of Americans worry AI tools will amplify the spread of false information during the upcoming election cycle.