Controversial facial recognition service Clearview AI has been marketing directly to individual Australian police officers, encouraging them to make 100+ searches during free trials, and to refer the service to their colleagues.
Emails to Victoria Police [PDF], obtained under freedom of information laws by analyst Justin Warren last week, show that “Team Clearview” signed up at least six of the state’s officers between November 2019 and March 2020.
Clearview’s promotional language seems more suited to a consumer social media app than a law enforcement tool.
“Clearview is like Google Search for faces. Just upload a photo to the app and instantly get results from mug shots, social media, and other publicly available sources,” said one email to a police intelligence analyst.
“Search a lot. Your Clearview account has unlimited searches,” encouraged a follow-up email.
Image: Victoria Police
“Don’t stop at one search. See if you can reach 100 searches. It’s a numbers game. Our database is always expanding and you never know when a photo will turn up a lead,” it said.
“Take a selfie with Clearview or search a celebrity to see how powerful the technology can be.”
Another email was even more enthusiastic. “Feel free to run wild with your searches. Test Clearview to the limit and see what it can do,” it said.
Victoria Police is distancing itself from Clearview, telling The Guardian that only a small number of email addresses were registered, that the tool was not used in any investigations, and that officers have since stopped using it.
“Victoria Police uploaded a small number of publicly available stock images to Clearview AI to test the technology. No images linked to any investigation by Victoria Police were uploaded as part of this testing process,” a police spokeswoman said.
Clearview AI is a controversy magnet with far-right links
Clearview, founded by Australian hacker and entrepreneur Hoan Ton-That, is no stranger to controversy. When its customer database was leaked in February this year, its response was cavalier.
“Security is Clearview’s top priority,” the company said through its lawyer.
“Unfortunately, data breaches are part of life in the 21st century. Our servers were never accessed. We patched the flaw and continue to work to strengthen our security.”
This data breach led to the revelation that the Australian Federal Police had also used Clearview.
Seven officers from the Australian Centre to Counter Child Exploitation (ACCCE) had conducted searches, yet no one outside the ACCCE Operational Command knew this trial had commenced.
Police services in Queensland, Victoria, and South Australia were also on the list.
Clearview’s strategy seems clear, at least in your correspondent’s view: Get individual cops hooked, so investigations become dependent on the service before higher-ups have had the chance to consider the legal, privacy, and ethical issues.
No wonder police forces have been cagey about whether they’re using the technology.
An even greater cause for concern, however, is Clearview’s connection to far-right political forces in the US, a connection the company seems eager to hide.
A detailed investigation by the Huffington Post published in April joined the dots connecting Ton-That with self-proclaimed neo-Nazi hacker Andrew ‘weev’ Auernheimer, pro-Trump propagandist Mike Cernovich of Pizzagate conspiracy theory fame, and many others.
“In this far-right clique, two of Ton-That’s associates loomed larger than most thanks to their close connection to billionaire Peter Thiel, a Facebook board member and Trump adviser: Jeff Giesea, a Thiel protégé and secret funder of alt-right causes, and Charles ‘Chuck’ Johnson, a former Breitbart writer and far-right extremist who reportedly coordinated lawfare against media organizations with Thiel,” HuffPost wrote.
Johnson reportedly introduced Ton-That to someone as “a gifted coder he’d hired to build the facial recognition tool”.
“Around the same time, Johnson stated on Facebook that he was ‘building algorithms to ID all the illegal immigrants for the deportation squads’,” HuffPost wrote.
If the 3600-word report can be summarised at all, it’s this: Clearview is linked, somehow, both to far-right racist politics and increasingly to law enforcement agencies around the world, and it wants to hide those links.
Clearview also seems to show little concern for playing by the rules.
Where do these photographs come from exactly?
As detailed in The New York Times in February, Clearview had a database of 3 billion photos, collected from websites such as YouTube, Facebook, Venmo, and LinkedIn.
As reported by sister site CNET, tech giants like Google, Facebook, and Microsoft have sent Clearview AI cease-and-desist letters for scraping images hosted on their platforms.
Australian privacy commissioner Angelene Falk wants to know whether data on Australians has been collected.
In the wake of the US protests against racist law enforcement practices, major players including IBM, Amazon, and Microsoft have halted sales of facial recognition tech to American police forces. For those companies, the technology is only a small part of their overall offerings.
But for Clearview, cops and photos are the main game.
Clearview AI is clearly a company to watch, but not in a good way.