Recently, AI-generated images have begun to dominate Google search results, making it harder for users to find relevant information. To address this, Google announced that it will begin labeling AI-generated and AI-edited images in search results in the coming months.
Google plans to flag such content through the “About this image” window in Search, Google Lens, and Android’s Circle to Search feature. The company is also applying the technology to its ad services and is considering extending it to YouTube videos in the future.
To identify AI-generated images, Google will use metadata from the Coalition for Content Provenance and Authenticity (C2PA), of which it is a steering committee member. This metadata records an image’s origin, creation details, and the tools used to generate it.
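To illustrate the kind of provenance data involved, the sketch below inspects an already-extracted C2PA manifest (as JSON) for the tool that produced an image and for actions flagged as AI-generated. This is a minimal sketch, not Google’s implementation: the manifest layout is simplified, the file path is hypothetical, and only the standard library is used.

```python
import json

# IPTC digital source type commonly used to mark AI-generated media in
# C2PA action assertions (assumption: the manifest follows this convention).
AI_SOURCE_TYPES = {
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
}


def summarize_manifest(path: str) -> None:
    """Print the claim generator and any AI-flagged actions from a C2PA manifest JSON."""
    with open(path, "r", encoding="utf-8") as f:
        manifest = json.load(f)

    # The claim generator records which tool produced or last edited the image.
    print("Claim generator:", manifest.get("claim_generator", "unknown"))

    # Assertions carry the provenance details; "c2pa.actions" lists what was done.
    for assertion in manifest.get("assertions", []):
        if assertion.get("label") != "c2pa.actions":
            continue
        for action in assertion.get("data", {}).get("actions", []):
            kind = action.get("action", "unknown")       # e.g. "c2pa.created", "c2pa.edited"
            source = action.get("digitalSourceType", "")
            flag = " (AI-generated)" if source in AI_SOURCE_TYPES else ""
            print(f"Action: {kind}{flag}")


if __name__ == "__main__":
    summarize_manifest("manifest.json")  # hypothetical path to an extracted manifest
```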
Several key industry players have joined C2PA, but adoption of the standard by hardware manufacturers remains limited. Meanwhile, the rise of AI-generated deepfakes has fueled a global surge in online scams and fraud.
David Fairman of Netskope highlighted how much more accessible deepfake technology has become, making it easy for cybercriminals to exploit it for fraud.