Meta, formerly Facebook, has reiterated fact-checking of politician claims will not be part of its measures for preventing the spread of misinformation in this year’s Australian federal election.
“The speech of politicians is already very highly scrutinised,” Meta Australia policy head Josh Machin told reporters at a press briefing.
“It’s scrutinised by [journalists], but also by academics, experts, and their political opponents who are pretty well-positioned to push back or indicate they don’t believe something’s right if they think they’re being mischaracterised.”
Political misinformation from people who are not politicians will, however, be eligible to be fact-checked.
In clarifying Meta’s stance about fact-checking politicians, the company said its election integrity measures for Australia’s upcoming federal election are its “most comprehensive” yet.
“This is by far the most comprehensive package of election integrity measures we have ever had in Australia,” Machin said.
The Australian Electoral Commission (AEC) last month said it had received assurances from large social media platforms that they would allocate more resources to monitoring election disinformation and misinformation ahead of the upcoming Australian federal election.
As part of these measures, Meta has expanded its third-party fact-checking program in Australia to include RMIT FactLab, which joins Agence France Presse and Australian Associated Press (AAP) to review and rate content on the company’s platforms.
The company has also provided one-off grants to these fact-checking organisations, intended to bolster misinformation-detection capabilities during the Australian federal election, although the organisations are not required to use the funds for that purpose.
RMIT FactLab’s services are already being used by Australian media organisations, such as the ABC, but Machin clarified that the services used by Meta are separate from those.
The tech giant is also working with the AAP to re-run the “Check the Facts” media literacy campaign in three additional languages — Vietnamese, Simplified Chinese, and Arabic — as part of efforts to help people recognise and avoid misinformation.
Meta said the campaign was expanded into these languages because they are spoken by the three largest non-English speaking communities in Australia.
Meta has also partnered with the online transparency organisation First Draft, which will publish analysis and reporting on its website about online trends to help creators and influencers track what online misinformation might look like during the election campaign.
These measures are in addition to Meta’s LiveDisplay tool, its Ad Library, which launched last year, and its updated political ad policies, which require advertisers to go through an authorisation process using government-issued photo ID to confirm they are located in Australia. All of these ads must also carry a publicly visible disclaimer indicating who paid for them.
Meta’s announcement of its election integrity measures comes in the face of heavy scrutiny from the federal government, which is looking to enact various new laws aimed at making tech giants more accountable for the content on their platforms. Australian parliamentarians are also undertaking a probe scrutinising major technology companies and the “toxic material” that resides on their online platforms.
As part of the social media probe, Liberal MP Lucy Wicks last week criticised digital platforms for touting “very strong community standards policies” despite various instances of users not being protected by those standards.
“My concern is that I see very strong community standard policies, or hateful content policies or ‘insert name of keep the community safe’ policies from various platforms. I almost can’t fault them but I find a very big gap with the application of them,” she told Meta during a social media and online safety parliamentary committee hearing.
Wicks’ comments were made in light of 15 female Australian politicians, including Wicks herself, being targeted by abusive online comments that were only taken down following law enforcement intervention.