Generative artificial intelligence (AI) has made identifying synthetic content and protecting user privacy much more challenging. In an effort to improve information literacy and increase data protections, Google has made changes to Search that combat deepfakes and make taking control of your information a little easier.
On Wednesday, the company detailed its improvements to how it handles explicit fake content, or non-consensual deepfakes, in Search. While you have been able to request that Google remove this content from search results for years, Google will now filter out all duplicates of that image, as well as explicit results that surface from similar searches about you, rather than only the specific result named in the original removal request.
In theory, this should cover more ground, removing harmful content from the corners it may be hiding in even after someone has successfully requested a removal. The process applies to both real non-consensual images and fake explicit imagery.
Also: 7 ways to supercharge your Google searches with AI
Google also updated its ranking systems “for queries where there’s a higher risk of explicit fake content appearing in Search,” according to a company blog post. Where such results are available, these updates prioritize surfacing high-quality, non-explicit content for queries that include people’s names.
The company says its updates have already reduced exposure to explicit content by more than 70%. The changes aim to surface content that educates users about deepfakes, rather than the deepfakes themselves. Google will also demote sites that have drawn a high volume of removal requests.
As part of its Search improvements, Google is also adding its “About this image” contextualizing feature to both Circle to Search and Google Lens, so users can pull up background on an image directly from either tool.
Say, for example, that a friend texts you an outrageous-looking image. You can circle it on your Android device and open the “About this image” tab in Google Search, which will show information about the photo’s origins based on what the search engine can find. If you’re using Google Lens, you can screenshot or download the image in question, open it in the Google app, and tap the Lens icon. This capability is available to both iOS and Android users.
“About this image” surfaces information from other sites, including news and fact-checking platforms, that describe the image. That background can help debunk a photo that is being used outside its original context, for example, or one that has been altered to misrepresent information.
Google also checks the image’s metadata for clues about its history and how or when it was created, though metadata can be added or removed when someone posts an image online. Some of that data can indicate whether an image is synthetic, meaning it was generated or edited using AI.
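For readers curious what that metadata looks like in practice, here is a minimal sketch in Python (assuming the Pillow imaging library and a hypothetical file named example.jpg) that reads whatever EXIF tags an image carries. Fields such as “Software” sometimes name the tool that produced a file, but none of this is authoritative: the tags are optional and easily stripped or rewritten before an image is shared.

```python
# Minimal sketch: inspect an image's EXIF metadata for clues about its origin.
# Assumes the Pillow library is installed and "example.jpg" is a placeholder path.
from PIL import Image
from PIL.ExifTags import TAGS


def describe_metadata(path: str) -> dict:
    """Return a dict of human-readable EXIF tag names to values for the image."""
    with Image.open(path) as img:
        exif = img.getexif()
        return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}


if __name__ == "__main__":
    info = describe_metadata("example.jpg")  # hypothetical file
    # Tags such as "Software" or camera "Make"/"Model" may hint at an AI tool
    # or a real camera, but their absence proves nothing about authenticity.
    for key in ("Software", "Make", "Model", "DateTime"):
        print(key, "->", info.get(key, "<not present>"))
```

Richer provenance standards such as Content Credentials (C2PA) embed cryptographically signed history in a file, but reading and verifying those manifests requires dedicated tooling beyond a simple EXIF dump.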
At a briefing and demo attended by ZDNET, Google didn’t specify how it verifies the AI origins of an image, but noted that the technology is still in its rudimentary stages. The “About this image” feature can detect whether an image was generated with AI if it contains Google DeepMind’s SynthID watermark, which is embedded in the pixels of any image created using Google’s AI tools.
Also: The best AI search engines of 2024: Google, Perplexity, and more
These moves, part of Google’s information literacy initiative, aim to make it easier to find context for what you see online. If embraced, media tools like this can help voters navigate an election cycle rife with synthetic political content and misinformation.
Available in 40 languages, “About this image” can now be accessed through Circle to Search on the latest Samsung and Pixel phones, foldables, and tablets, and through Google Lens in the Google app for both Android and iOS.