Google Australia believes long-term success in mitigating disinformation and foreign influence through social media rests on developing a culture of online safety across society, including ongoing "collaboration" between industry, the technical community, and government.
According to Google, such work must be paired with efforts to educate users and organisations, from school students through to senior citizens and company employees, on how to secure their online presence and "apply critical thinking to the information they see and consume".
The remarks were made in the company’s submission [PDF] to the Select Committee on Foreign Interference through Social Media, which also contained an overview of the work its parent company has done to counter coordinated influence operations and other government-backed attacks.
The local arm of the search giant said it takes its responsibility to address the risks posed by foreign interference "very seriously".
“How companies like Google address these concerns has an impact on society and on the trust users place in our services,” it wrote.
“We believe that meeting it begins with providing transparency into our policies, inviting feedback, enabling users to understand and control their online engagement, and collaborating with policymakers, civil society, and academics around the world in the development of sensible, effective policies, and processes.”
In its submission, Google said algorithms cannot determine whether a piece of content on current events is true or false, nor can they assess the intent of its creator just by reading what’s on a page. It said, however, there are clear cases of intent to manipulate or deceive users.
“For instance, a news website that alleges it contains ‘Reporting from Canberra, Australia’ but whose account activity indicates that it is operated out of Eastern Europe is likely not being transparent with users about its operations or what they can trust it to know firsthand,” Google wrote.
It said its policies across Google Search, Google News, YouTube, and its advertising products outline prohibited behaviours to address such situations.
Google said its Threat Analysis Group (TAG) reported disabling influence campaigns originating from groups in Iran, Egypt, India, Serbia, and Indonesia in the first quarter of 2020. It also removed more than 1,000 YouTube channels that were "behaving in a coordinated manner" as part of what appeared to be a large campaign.
“On any given day, Google’s Threat Analysis Group is tracking more than 270 targeted or government-backed attacker groups from more than 50 countries,” it wrote.
Since the beginning of 2020, Google said it had seen a rising number of attackers, including those from Iran and North Korea, impersonating news outlets or journalists. In April this year, Google sent 1,755 warnings to users whose accounts were targets of government-backed attackers.
“We intentionally send warnings in timed batches to all users who may be at risk, rather than at the moment we detect the threat itself, so that attackers cannot track some of our defence strategies,” the submission said. “We also notify law enforcement about what we’re seeing, as they have additional tools to investigate these attacks.”
The search giant also said it was detecting 18 million COVID-19-related malware and phishing messages in Gmail each day, in addition to more than 240 million COVID-related daily spam messages.
“Our machine learning models have evolved to understand and filter these threats, and we continue to block more than 99.9% of spam, phishing, and malware from reaching our users.
“Google’s TAG has specifically identified over a dozen government-backed attacker groups using COVID-19 themes as lure for phishing and malware attempts — trying to get their targets to click malicious links and download files, including in Australia,” it added.
“We have an important responsibility to our users and to the societies in which we operate to curb the efforts of those who aim to propagate false information on our platforms.”