Google will begin flagging AI-generated images in Search later this year

Cpvr

Google has announced plans to introduce changes to its Search feature, aiming to provide clearer identification of images that have been created or modified using AI tools. Over the coming months, Google will start flagging AI-generated and AI-edited images in the “About this image” section on platforms like Google Search, Google Lens, and the Android-exclusive Circle to Search feature. These disclosures could extend to other Google platforms, such as YouTube, though more details on this will be revealed later this year.

The key point is that only images containing “C2PA metadata” will be marked as AI-altered in Search. C2PA, or the Coalition for Content Provenance and Authenticity, is working on standards to track an image's origin, including the devices and software used to capture or create it. This initiative is supported by major companies like Google, Amazon, Microsoft, OpenAI, and Adobe. However, as highlighted by The Verge, C2PA’s standards face adoption and compatibility issues, with only a few generative AI tools and select cameras from Leica and Sony currently supporting them.
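For the curious, C2PA manifests in JPEGs are carried in APP11 segments as JUMBF boxes labeled "c2pa". Below is a minimal, hedged sketch of a presence check: it only walks the JPEG segment list and looks for that label, it does not validate the signed manifest (real verification requires a C2PA-aware library such as the open-source c2pa tooling).

```python
import struct

def has_c2pa_hint(jpeg_bytes: bytes) -> bool:
    """Rough heuristic: walk JPEG marker segments and look for an
    APP11 (0xFFEB) segment whose payload mentions 'c2pa', the JUMBF
    label C2PA uses. This is NOT verification -- the metadata can be
    stripped or forged; only checking the signed manifest proves
    provenance."""
    if not jpeg_bytes.startswith(b"\xff\xd8"):  # must begin with SOI
        return False
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:  # lost sync with the marker stream
            break
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:  # SOS: entropy-coded data follows, stop
            break
        # segment length is big-endian and includes its own 2 bytes
        length = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])[0]
        payload = jpeg_bytes[i + 4:i + 2 + length]
        if marker == 0xEB and b"c2pa" in payload:  # APP11 segment
            return True
        i += 2 + length
    return False
```

As the article notes, a negative result here means very little: tools like Flux never write the metadata, and a positive hit can be faked, which is why Search will only treat intact C2PA data as a signal.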

Furthermore, C2PA metadata can be removed, corrupted, or rendered unreadable, making it a less than perfect solution. Some widely-used AI tools, such as Flux (utilized by xAI's Grok chatbot for image generation), do not incorporate C2PA metadata, partly because their developers have not endorsed the standard.

Despite these challenges, these measures are a step in the right direction amid the growing prevalence of deepfakes. One report estimates a 245% increase in scams using AI-generated content between 2023 and 2024. According to Deloitte, losses related to deepfakes are expected to skyrocket from $12.3 billion in 2023 to $40 billion by 2027. Public surveys also indicate that most people are worried about being misled by deepfakes and the potential of AI to spread propaganda.

Source: https://techcrunch.com/2024/09/17/g...i-generated-images-in-search-later-this-year/
 
This is actually really interesting! I can understand why they would want to flag AI-generated or modified images, especially when people use them to scam others by passing the images off as their own work.
 
I support the fight against misinformation and deepfakes, and I think Google has made the right choice. That said, its effectiveness relies heavily on industry-wide C2PA adoption, so I'm keeping my fingers crossed for the rollout.
 
This is great for numerous reasons. Scams come to the top of my mind, but this move will also allow actual photographers and designers to thrive again. It may take time, but things will start looking up for them soon.
 
I use AI-generated images on my blog posts as well as on print-on-demand products. I have also tried selling AI-generated art, but I have not been successful. I think it is very important to let people know whether an image is the creative work of an individual or the output of a machine.
 
This type of flagging really needs to come to Facebook. My older friends and relatives haven't caught on that a lot of what they're sharing was made by AI, and the things they've shared have mostly been images designed to tug at their heartstrings. I've tried to tell them that these images are AI, but they still aren't very careful. Maybe I need to run a little training class to help them detect those things?
 