
44% of people report believing election-related misinformation – Adobe study

Mininyx Doodle/Getty Images

Believing what you see is more difficult than ever because synthetic content is easy to generate and even easier to spread online. As a result, many people struggle to trust what they read, hear, and see in the media, especially during politically contentious periods like the upcoming US presidential election.

On Tuesday, Adobe released its Authenticity in the Age of AI Study, which surveyed 2,000 US consumers regarding their thoughts on misinformation online ahead of the 2024 presidential election. 

Also: Is that photo real or AI? Google’s ‘About this image’ aims to help you tell the difference

Unsurprisingly, a whopping 94% of respondents reported being concerned about the spread of misinformation impacting the upcoming election, and nearly half (44%) said they had been misled by or believed election-related misinformation in the past three months. 

“Without a way for the public to verify the authenticity of digital content, we are approaching a breaking point where the public will no longer believe the things they see and hear online, even when they are true,” said Jace Johnson, VP of Global Public Policy at Adobe.

Also: Amazon joins C2PA steering committee to identify AI-generated content

The emergence of generative AI (gen AI) has been a major factor, with 87% of respondents saying the technology makes it more challenging to distinguish real content from fake online, according to the survey. 


Misinformation worries users so much that many are taking matters into their own hands, changing their habits to avoid consuming more of it. 

For example, 48% of respondents said they had stopped or curtailed their use of a specific social media platform because of the amount of misinformation found on it. Eighty-nine percent of respondents believe social media platforms should enforce stricter measures to prevent misinformation. 

Also: Google’s DataGemma is the first large-scale Gen AI with RAG – why it matters

“This concern about disinformation, especially around elections, isn’t just a latent concern – people are actually doing things about it,” said Andy Parsons, Senior Director of the Content Authenticity Initiative at Adobe, in an interview with ZDNET. “There’s not much they can do except stop using social media or curtail their use because they’re concerned that there’s just too much disinformation.” 

In response, 95% of respondents said it is important to see attribution details next to election-related content so they can verify the information for themselves. Adobe positions its Content Credentials, "nutrition labels" for digital content that show users how a piece of media was created, as part of the solution. 

Also: Global telcos pledge to adopt responsible AI guidelines

Users can visit the Content Credentials site and drop in an image to check whether it was AI-generated. The site reads the image's metadata and flags whether it was created with an AI image generator that automatically attaches Content Credentials to its output, such as Adobe Firefly or Microsoft Image Generator. 

Even if the image was made with a tool that didn't embed metadata, Content Credentials will match the image against similar images on the internet and indicate whether those images were AI-generated. 
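To illustrate the general idea behind metadata-based provenance checks, here is a minimal sketch in Python. The `looks_like_c2pa` helper is hypothetical, not Adobe's actual verifier: C2PA provenance data is embedded in the file as a JUMBF box whose labels contain the string "c2pa", and this sketch only checks for that marker's presence rather than parsing and cryptographically validating the manifest, which is what a real tool such as the open-source C2PA SDK does.

```python
def looks_like_c2pa(data: bytes) -> bool:
    """Crude heuristic check for embedded C2PA provenance metadata.

    This is NOT a real verifier: it only scans the raw bytes for the
    "c2pa" JUMBF label. A production tool parses the manifest store and
    validates its cryptographic signatures before trusting anything.
    """
    return b"c2pa" in data

# Synthetic bytes standing in for an image file with an embedded manifest:
fake_image = b"\xff\xd8\xff\xe0...jumb...c2pa...\xff\xd9"
print(looks_like_c2pa(fake_image))   # marker bytes are present

# An image produced by a tool that embeds no metadata would fail this
# check, which is why the site falls back to matching similar images.
plain_image = b"\xff\xd8\xff\xe0plainjpegdata\xff\xd9"
print(looks_like_c2pa(plain_image))
```

The fallback matching described above matters precisely because a byte-level check like this finds nothing when the generating tool never wrote provenance data in the first place.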
