Over the past 24 hours, users have reported that their Facebook posts relating to the novel coronavirus have been vanishing.
In these cases, users have been notified that their posts violate community standards, a boilerplate message sent when content is removed for being deemed fake, fraudulent, or steeped in misinformation.
Facebook’s community standards “outline what is and is not allowed on Facebook […] based on feedback from our community and the advice of experts in fields such as technology, public safety, and human rights.” Posts reported for breaking these rules are often linked to violence, extremist content, or scams.
With confusion and anxiety surrounding the COVID-19 outbreak, it is more important than ever to prevent the spread of misleading or fake information, such as posts claiming that specific products are cures or that the respiratory illness was developed as a bioweapon. However, many legitimate articles from reputable sources have also been removed from the social media platform.
As complaints surged on both Facebook and Twitter, prompting a flurry of censorship accusations, Facebook’s former chief security officer (CSO), Alex Stamos, surmised that the issue was likely caused by an “anti-spam rule going haywire.”
“Facebook sent home content moderators yesterday, who generally can’t WFH [work from home] due to privacy commitments the company has made,” the security expert tweeted. “We might be seeing the start of the ML going nuts with less human oversight.”
Guy Rosen, the VP of Integrity at Facebook, confirmed there was a problem with an anti-spam rule, adding that the issue was “unrelated to any changes in our content moderator workforce.”
While restoration of the removed posts is underway, the incident highlights that sending content moderators home reduces human oversight and can degrade moderation, leading to the scenario every social network wants to avoid: a rampant spread of fraudulent content.
As noted by Stamos, this scenario is a tough one for Facebook and other companies. The executive said you can have consistent content moderation, you can protect privacy by keeping content moderators in controlled IT environments, or you can protect your staff by sending them home, but not all three at once.
YouTube has already reached this point, warning users on Monday that automated systems will be taking over from human moderators unable to work due to social distancing — and as a result, errors in takedowns are anticipated.
Furthermore, the video streaming platform said in a blog post that while content creators will be able to appeal removals, “our workforce precautions will also result in delayed appeal reviews.”
“We’ll also be more cautious about what content gets promoted, including live streams,” YouTube added. “In some cases, unreviewed content may not be available via search, on the homepage, or in recommendations.”
Earlier this week, Facebook, Google, LinkedIn, Microsoft, Reddit, Twitter, and YouTube published a joint statement, promising that each tech giant would fight disinformation surrounding COVID-19. The firms are now working together, with input from government healthcare agencies, to fight fake news and fraud.