Google has announced changes to Search that will more clearly identify images created or modified with AI tools. Over the coming months, Google will begin flagging AI-generated and AI-edited images in the "About this image" section across Google Search, Google Lens, and the Android-exclusive Circle to Search feature. The disclosures could extend to other Google platforms, such as YouTube, with more details promised later this year.
The key point is that only images containing C2PA metadata will be marked as AI-altered in Search. The C2PA, or Coalition for Content Provenance and Authenticity, develops technical standards for tracing an image's history, including the devices and software used to capture or create it. The initiative is backed by major companies including Google, Amazon, Microsoft, OpenAI, and Adobe. However, as The Verge has noted, C2PA's standards face adoption and interoperability hurdles: only a handful of generative AI tools and select cameras from Leica and Sony currently support them.
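To make that concrete, here is a minimal Python sketch of how one might check whether a JPEG even carries a C2PA manifest. This is not Google's implementation and is no substitute for the official C2PA SDKs; it relies only on the fact that C2PA data is embedded in JPEG APP11 marker segments as a JUMBF box labeled "c2pa", and the file paths are placeholders.

```python
# Minimal sketch: scan a JPEG's marker segments for an embedded C2PA
# manifest. C2PA data lives in APP11 (0xFFEB) segments as a JUMBF box
# (ISO 19566-5). Presence of the box says nothing about authenticity;
# a real verifier must also validate the manifest's signatures.
import struct
import sys

def has_c2pa_manifest(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":           # missing SOI marker: not a JPEG
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:               # lost sync with the marker stream
            break
        marker = data[i + 1]
        if marker == 0xDA:                # SOS: compressed scan data follows
            break
        # Segment length is big-endian and includes the two length bytes.
        length = struct.unpack(">H", data[i + 2:i + 4])[0]
        segment = data[i + 4:i + 2 + length]
        # APP11 segments carry JUMBF boxes; C2PA labels its box "c2pa".
        if marker == 0xEB and b"c2pa" in segment:
            return True
        i += 2 + length
    return False

if __name__ == "__main__":
    for path in sys.argv[1:]:
        status = "C2PA manifest found" if has_c2pa_manifest(path) else "no C2PA metadata"
        print(f"{path}: {status}")
```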
Furthermore, C2PA metadata can be removed, corrupted, or rendered unreadable, making it an imperfect solution. Some widely used AI tools, such as Flux (used by xAI's Grok chatbot for image generation), do not attach C2PA metadata, in part because their developers have not endorsed the standard.
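That fragility is easy to demonstrate. Assuming Pillow is installed and the input file actually contains a manifest (both file names below are hypothetical), a plain re-encode is enough to shed the provenance data, because common image libraries do not carry APP11 segments through a save by default:

```python
# Hedged illustration: re-encoding strips C2PA provenance, since Pillow
# does not preserve APP11/JUMBF segments when writing a new JPEG.
from PIL import Image

img = Image.open("credentialed.jpg")  # hypothetical input with a C2PA manifest
img.save("stripped.jpg", quality=95)  # re-encoded copy; the manifest is gone
```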
Despite these limitations, the measures are a step in the right direction amid the growing prevalence of deepfakes. One report estimates a 245% increase in scams using AI-generated content between 2023 and 2024. According to Deloitte, deepfake-related losses are expected to skyrocket from $12.3 billion in 2023 to $40 billion by 2027. Public surveys also indicate that most people are worried about being misled by deepfakes and about AI's potential to spread propaganda.
Source: https://techcrunch.com/2024/09/17/g...i-generated-images-in-search-later-this-year/