More stories

  • in

    New Google cloud service aims to bring zero trust security to the web

    Google has announced general availability of BeyondCorp Enterprise, a new security service from Google Cloud based on the principle of designing networks with zero trust. 

    As US security companies come to terms with the SolarWinds supply chain hack, Google and Microsoft are talking up their capabilities in the cloud around zero trust. 
    Microsoft last week urged customers to adopt a “zero trust mentality” and abandon the assumption that everything inside an IT network is safe, and now Google has launched the BeyondCorp Enterprise service based on the same concept. 
    “Zero trust assumes there is no implicit trust granted to assets or user accounts based solely on their physical or network location (i.e., local area networks versus the internet) or based on asset ownership (enterprise or personally owned),” explains the National Institute of Standards and Technology (NIST).  
    “Authentication and authorization (both subject and device) are discrete functions performed before a session to an enterprise resource is established.”
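    NIST's definition boils down to a per-request policy decision that ignores network location: subject and device are each verified before any session is granted. A minimal sketch of such a check, with invented field names:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_authenticated: bool    # subject authentication (e.g. MFA passed)
    device_verified: bool       # device identity/posture check passed
    on_corporate_network: bool  # deliberately ignored by the policy below

def authorize(req: AccessRequest) -> bool:
    """Zero trust: network location grants nothing; the subject and
    the device must each be verified before the session is allowed."""
    return req.user_authenticated and req.device_verified

# A request from inside the LAN is still denied without device proof.
assert not authorize(AccessRequest(True, False, on_corporate_network=True))
# A verified user on a verified device is allowed, even from the internet.
assert authorize(AccessRequest(True, True, on_corporate_network=False))
```

    The point of the sketch is what is absent: `on_corporate_network` never appears in the decision.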
    BeyondCorp Enterprise replaces BeyondCorp Remote Access, a cloud service Google announced in April in response to remote working due to the COVID-19 pandemic and the heightened need for virtual private network (VPN) apps. 
    The service allowed employees to securely access their company’s internal web apps from any device and location. Google has been using BeyondCorp for several years internally to protect employee access to apps, data, and other users. 

    “BeyondCorp Enterprise brings this modern, proven technology to organizations so they can get started on their own zero trust journey. Living and breathing zero trust for this long, we know that organizations need a solution that will not only improve their security posture, but also deliver a simple experience for users and administrators,” said Sunil Potti, VP of Google Cloud Security. 
    As Microsoft highlighted last week, the three main attack vectors in the SolarWinds attack were compromised user accounts, compromised vendor accounts, and compromised vendor software. These can be significantly mitigated by zero trust principles, such as restricting privileged access to the accounts that need it and enabling multi-factor authentication. Microsoft is encouraging organizations to use Azure Active Directory for identity and access management rather than on-premises identity management systems. 
    Google’s main weapon in the fight against sophisticated attackers is Chrome, through which it’s promising easy “agentless support”. Chrome has over two billion users, so it has scale too. 
    Then there’s Google’s network with 144 network edge locations across 200 countries and territories, which helps back up its distributed denial of service (DDoS) protection service. 
    Google is encouraging organizations to use the Google Identity-Aware Proxy (IAP) to manage access to apps running in Google Cloud. 
    The pandemic and the SolarWinds hack have made security a bigger value proposition for companies like Microsoft and Google. Google parent Alphabet will, for the first time, break out cloud revenue as a separate reporting segment on February 2, starting with its Q4 2020 results.
    Other key security highlights for Chrome under the BeyondCorp Enterprise service include threat protection to prevent data loss and exfiltration and malware infections from the network to the browser; phishing protection; continuous authorization; segmentation between users and apps and between apps and other apps; and management of digital certificates. 
    BeyondCorp Enterprise lets admins check URLs in real-time and scan files for malware; create rules for what types of data can be uploaded, downloaded or copied and pasted across sites; and track malicious downloads on company-issued devices and monitor whether employees enter passwords on known phishing sites. 


  • in

    National Crime Agency warns novice and veteran traders alike of rise in clone company scams

    A warning has been issued by UK watchdogs of a rise in clone company scams targeting those looking for investment opportunities to recover financially from COVID-19.

    On Wednesday, the UK’s National Crime Agency (NCA) and Financial Conduct Authority (FCA) issued an alert to the public concerning “clone company” scams which appear to be claiming not only novice investors but also veteran players in the market.
    The FCA says that these forms of scams are on the rise, with increased rates reported since the UK went into its first lockdown in March 2020. 
    In total, investors have lost over £78 million ($107m), a figure which is likely to continue to rise. Average losses are reported as £45,242 per victim, according to Action Fraud research.
    Clone company investment scams go beyond typical phishing emails or dubious social media links promising an immediate return on your cash. Fraudsters use the same name, address, and Firm Reference Number (FRN) issued to authorized investment companies by the FCA, and then, in phishing emails, social media posts, and cold calls, send sales materials containing links to legitimate company websites. 
    However, the masquerade only goes so far: once trust is established, investors are hoodwinked into parting with funds intended for the legitimate company, only for their money to go straight into the coffers of scam artists. 
    It may not seem all that different from typical phishing campaigns, but this form of investment fraud technique is not as well-known as it should be. In an FCA survey, 75% of investors said they felt confident enough to spot a scam — but 77% did not know or were unsure of what a clone investment company was. 
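    Because a clone reuses a genuine firm's name and FRN, the practical defence is the one the FCA recommends: compare the contact details you were given against those listed on the FCA Register. A minimal sketch of that comparison, using a hypothetical register record and invented field names:

```python
# Hypothetical local copy of register data. In practice you would look the
# FRN up on the FCA Register yourself rather than trust supplied details.
REGISTER = {
    "123456": {"name": "Example Asset Mgmt Ltd",
               "phone": "+44 20 7946 0000",
               "domain": "example-asset.co.uk"},
}

def looks_cloned(frn: str, offered_phone: str, offered_domain: str) -> bool:
    """A clone keeps the real firm's FRN but swaps in its own contact
    details, so any mismatch against the register entry is a red flag."""
    firm = REGISTER.get(frn)
    if firm is None:
        return True  # unknown FRN: treat as unauthorized
    return (offered_phone != firm["phone"]
            or offered_domain != firm["domain"])

# Details match the register: no clone indicator.
assert not looks_cloned("123456", "+44 20 7946 0000", "example-asset.co.uk")
# Same FRN, different website: classic clone pattern.
assert looks_cloned("123456", "+44 20 7946 0000", "example-asset.net")
```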

    “A clone firm scam can target anyone, they are usually smart fraudsters who often present opportunities which look very tempting indeed,” commented Watchdog presenter Matt Allwright. “When considering your next investment, make sure you only ever use the details listed on the FCA Register, and think about getting impartial advice before going ahead.”
    The NCA recommends that traders reject all unsolicited investment offers whether made online, through social media, or over the phone, and to check both the FCA Register and warning list — as well as any telephone numbers associated with entities — before signing up for financial products. It is also worth seeking independent advice before taking the plunge in a new investment opportunity. 
    Clone company scams that dupe even seasoned investors can be difficult to detect, but this is not the only form of financial fraud that has exploded online since the start of the pandemic. 
    Earlier this month, Interpol warned of a flurry of investment scams taking over dating applications. “Matches” work to obtain a potential victim’s trust and then begin to peddle a fake investment opportunity, encouraging them to join and promising to help them on their way to make a fortune. 
    Once the victim has parted with their cash, the match vanishes and they are locked out of their fake ‘investment’ account. 
    Previous and related coverage
    Have a tip? Get in touch securely via WhatsApp | Signal at +447713 025 499, or over at Keybase: charlie0

  • in

    Emotet: The world's most dangerous malware botnet was just disrupted by a major police operation

    The world’s most prolific and dangerous malware botnet has been taken down following a global law enforcement operation that was two years in planning.
    Europol, the FBI, the UK’s National Crime Agency and others coordinated action which has resulted in investigators taking control of the infrastructure controlling Emotet in one of the most significant disruptions of cyber-criminal operations in recent years.


    Emotet first emerged as a banking trojan in 2014 but evolved into one of the most powerful forms of malware used by cyber criminals.
    Emotet establishes a backdoor onto Windows computer systems via automated phishing emails that distribute Word documents compromised with malware. Subjects of emails and documents in Emotet campaigns are regularly altered to provide the best chance of luring victims into opening emails and installing malware – regular themes include invoices, shipping notices and information about COVID-19.
    Those behind Emotet lease their army of infected machines out to other cyber criminals as a gateway for additional malware attacks, including remote access tools (RATs) and ransomware.
    It resulted in Emotet becoming what Europol describes as “the world’s most dangerous malware” and “one of the most significant botnets of the past decade”, with operations like Ryuk ransomware and TrickBot banking trojan hiring access to machines compromised by Emotet in order to install their own malware.

    The takedown of Emotet, therefore, represents one of the most significant actions against a malware operation and cyber criminals in recent years.
    “This is probably one of the biggest operations in terms of impact that we have had recently and we expect it will have an important impact,” Fernando Ruiz, head of operations at Europol’s European Cybercrime Centre (EC3) told ZDNet. “We are very satisfied.”
    A week of action by law enforcement agencies around the world gained control of Emotet’s infrastructure of hundreds of servers around the world and disrupted it from the inside.
    Machines infected by Emotet are now directed to infrastructure controlled by law enforcement, meaning cyber criminals can no longer exploit compromised machines and the malware can no longer spread to new targets, something which will cause significant disruption to cyber-criminal operations.
    “Emotet was our number one threat for a long period and taking this down will have an important impact. Emotet is involved in 30% of malware attacks; a successful takedown will have an important impact on the criminal landscape,” said Ruiz.
    “We expect it will have an impact because we’re removing one of the main droppers in the market – for sure there will be a gap that other criminals will try to fill, but for a bit of time this will have a positive impact for cybersecurity,” he added.
    The investigation into Emotet also uncovered a database of stolen email addresses, usernames and passwords. People can check if their email address has been compromised by Emotet by visiting the Dutch National Police website.
    Europol is also working with Computer Emergency Response Teams (CERTs) around the world to help those known to be infected with Emotet.
    In order to help protect against malware threats like Emotet, Europol recommends using anti-virus tools along with fully updated operating systems and software – so cyber criminals can’t exploit known vulnerabilities to help deliver malware. It’s also recommended that users are trained in cybersecurity awareness to help identify phishing emails.
    The Emotet takedown is the result of over two years of coordinated work by law enforcement operations around the world, including the Dutch National Police, Germany’s Federal Crime Police, France’s National Police, the Lithuanian Criminal Police Bureau, the Royal Canadian Mounted Police, the US Federal Bureau of Investigation, the UK’s National Crime Agency, and the National Police of Ukraine.
    The investigation into Emotet, and identifying the cyber criminals responsible for running it, is still ongoing.


  • in

    Fake ICO consultant sentenced for embezzling cryptocurrency now worth $20 million

    A US resident who masqueraded as a cryptocurrency consultant has been sentenced for embezzling cryptocurrency and cash fraudulently obtained from investors. 

    The US Department of Justice (DoJ) said on Tuesday that Jerry Ji Guo, a resident of San Francisco, will spend six months behind bars and has been ordered to pay $4.4 million in restitution for his activities.
    The 33-year-old former journalist admitted to reshaping himself as an expert and consultant on cryptocurrency and Initial Coin Offerings (ICOs). 
    ICOs are investor events that originally formed to give emerging projects an alternative funding route to angel investment or loans. Participants in legitimate ICOs receive project-branded tokens for their contribution, and should the project succeed, this could allow investors to reap substantial profits. However, ICOs are risky and have paved the way for exit scams and fraud.  
    In Guo’s case, he conned investors by promising he would perform “consultancy, marketing, and publicity services,” according to US prosecutors. However, instead of keeping his promise, investor cash and cryptocurrency, including Bitcoin (BTC) and Ethereum (ETH), ended up being drained from wallets used by companies to deposit funds up-front in order to secure his ‘services.’  
    The cryptocurrencies taken from investors have surged in value over the past few years and the combined funds, with cash, are now worth an estimated $20 million. 
    A federal grand jury indicted Guo in 2018 and he pleaded guilty to one count of wire fraud a year later. Seven other counts of wire fraud were dismissed. At the time of the indictment, Guo faced up to 20 years behind bars.

    Alongside the prison sentence and reparation, Guo will also have to submit to three years of supervised release.
    The DoJ’s Money Laundering and Asset Recovery Section obtained warrants in February 2020 to seize the stolen funds and says that the government “is [now] in a position to return the stolen property to the victims.”
    Earlier this month, US prosecutors sentenced the former owner of RG Coins, Rossen Iossifov, to 10 years in prison after he was found guilty of laundering funds from online auction scams through his cryptocurrency exchange. 
    The DoJ and FBI are constantly hunting down the perpetrators of cryptocurrency-related fraud and schemes, and now, the US Securities and Exchange Commission (SEC) maintains a list of both fiat investment and crypto businesses that consumers should be wary of. 
    In January, the SEC added a further eight cryptocurrency organizations to its watch list, which tout everything from unrealistic returns to ICO legal protection and risk-free cryptocurrency trading.

  • in

    Chromebooks will now let you sign into websites with your fingerprint

    Chromebook users can sign in to websites with a PIN or fingerprint.
    Google has finally brought Web Authentication (WebAuthn) passwordless authentication to Chrome OS to allow users to sign in to websites with a PIN or fingerprint used to unlock a Chromebook.
    WebAuthn allows people to register and authenticate on websites or apps using an “authenticator” – such as a fingerprint or PIN – instead of a password. The World Wide Web Consortium (W3C) made WebAuthn an official web standard in 2019.

    Of course, to take advantage of the Chrome OS version 88 update, people need to have a Chromebook with a fingerprint reader. But the feature also supports a device PIN, which is still easier to remember than passwords for every website. 
    “Websites that support WebAuthn will let you use your Chromebook PIN or fingerprint ID – if your Chromebook has a fingerprint reader – instead of the password you’ve set for the website,” says Alexander Kuscher, director of Chrome OS.
    Additionally, people who use Google’s two-step verification to sign in to a Google account don’t need to use a security key or phone to authenticate since the Chromebook PIN or fingerprint ID can be used as the second factor. 
    Sites that support WebAuthn include Google, Dropbox, GitHub, Okta, Twitter and Microsoft. Google last year rolled out an update so people with iPhones could use WebAuthn with more types of security keys as the second factor to sign into a Google account.
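    Under the hood, a WebAuthn ceremony is a challenge-response exchange: the site issues a fresh random challenge, the authenticator (unlocked by the PIN or fingerprint) signs it with a key registered for that user, and the server verifies the signature and rejects any replay. A much-simplified sketch of the server side, with the public-key signature check omitted and only challenge freshness enforced (class and method names are invented):

```python
import secrets

class RelyingParty:
    """Simplified server side of a WebAuthn-style ceremony: issue a
    random challenge and accept it at most once.  Real WebAuthn also
    verifies the authenticator's signature over the challenge using
    the public key registered for the user, which is omitted here."""
    def __init__(self):
        self._pending = set()

    def issue_challenge(self) -> str:
        c = secrets.token_hex(16)  # 128 bits of randomness
        self._pending.add(c)
        return c

    def verify(self, challenge: str) -> bool:
        if challenge in self._pending:
            self._pending.discard(challenge)  # single use
            return True
        return False

rp = RelyingParty()
c = rp.issue_challenge()
assert rp.verify(c)        # fresh challenge accepted once
assert not rp.verify(c)    # replayed challenge rejected
```

    The single-use challenge is what keeps an intercepted response useless to an attacker, which is one reason WebAuthn resists phishing where passwords do not.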

    As an added bonus, Google has rolled out a feature with Chrome OS 88 that lets students and workers personalize the lock screen with photos from Google Photos or art gallery images. Chrome OS also lets users check the weather and see what music is playing, as well as pause, skip, and play tracks while the device is locked.  
    WebAuthn on Chrome OS devices is likely to be a welcome addition for students who use Chromebooks for remote learning as the COVID-19 pandemic rolls on. These days, around-the-clock demand for laptops has forced many parents to buy a cheap device, and Chromebooks are a popular option compared to more expensive Windows and macOS laptops. 
    Acer in January unveiled the Chromebook Spin 514 convertible laptop with a 14-inch full HD touchscreen, protected by Gorilla Glass, and powered by AMD’s new Ryzen 3000 C-Series mobile processors. 
    At the higher end, Samsung trimmed some features to bring down the cost of its 2-in-1 Galaxy Chromebook. The Galaxy Chromebook 2 features a 13.3-inch QLED display with 1,920×1,080-pixel resolution and comes with an Intel 10th-gen Core i3-10110U or Celeron 5205U processor. The previous model featured a 4K AMOLED display and an Intel Core i5 processor.

  • in

    UK association defends ransomware payments in cyber insurance policies

    The Association of British Insurers (ABI) has been accused of “funding” organized crime by including ransomware blackmail payments in cyber insurance policies. 

    As reported by the BBC, ABI said that the inclusion in first-party policies was not “an alternative” to organizations doing everything else possible to mitigate the damage and operational risk caused by cyberattacks, but without it, victims could face “financial ruin.”
    Oxford University’s Prof. Ciaran Martin said that insurers taking this approach were “funding organized crime” and as it remains legal to do so, there are “incentives” to pay up. 
    Ransomware can be one of the most devastating forms of malware to land on corporate networks. Once ransomware executes on a vulnerable system, it will usually encrypt resources, files, and backups, and will then lock users out. 
    A blackmail payment is then demanded in return for a decryption key — which may or may not work — often in cryptocurrency such as Bitcoin (BTC) or Ethereum (ETH).
    Popular and well-known ransomware strains include WannaCry, Cerber, and Locky.
    Businesses and organizations without viable backups or with an urgent need to restore their systems — such as hospitals and energy utilities — are then under extreme pressure to pay up. 

    It is not illegal to pay up in the UK, and if victims have previously taken out cyber insurance policies covering ransomware, this is when their protection comes into play. 
    A spokesperson for the ABI told the publication that in order for claims to be processed, “reasonable precautions” in terms of security have to be met. This is comparable to filing a claim for burglary, where the insurer considers whether or not your home had reasonable measures — such as locked doors and windows — in place to prevent theft in the first place. 
    According to US cyber insurance provider Coalition, ransomware incidents accounted for 41% of claims filed during the first half of 2020. 
    This week, credit rating service Moody’s released its 2021 outlook for cybersecurity and cyber-related risks. The agency predicts that the “continued proliferation” of ransomware attacks will force insurers to re-examine their cyber insurance policies and coverage over the coming year. 
    Moody’s predicts that as more claims are made, policies covering ransomware will surge in price in what is a “small, but growing line of business.”
    “Insurers have responded to rising financial losses by raising premium rates and narrowing terms and conditions, including raising deductibles or lowering policy limits, or both,” the company says. “Higher insurance costs, in turn, could weigh on the finances of some organizations, causing them to rethink the purchases of these products.”

  • in

    10-year-old Sudo bug lets Linux users gain root-level access

    A major vulnerability impacting a large chunk of the Linux ecosystem has been patched today in Sudo, an app that allows admins to delegate limited root access to other users.

    The vulnerability, tracked as CVE-2021-3156 but more commonly known as “Baron Samedit,” was discovered by security auditing firm Qualys two weeks ago and was patched earlier today with the release of Sudo v1.9.5p2.
    In a simple explanation provided by the Sudo team today, the Baron Samedit bug can be exploited by an attacker who has gained access to a low-privileged account to gain root access, even if the account isn’t listed in /etc/sudoers — a config file that controls which users are allowed access to su or sudo commands in the first place.
    For the technical details behind this bug, please refer to the Qualys report.
    While there have been two other Sudo security flaws disclosed over the past two years, the bug disclosed today is the one considered the most dangerous of all three.
    The two previous bugs, CVE-2019-14287 (known as the -1 UID bug) and CVE-2019-18634 (known as the pwfeedback bug), were hard to exploit because they required complex and non-standard sudo setups.
    Things are different for the bug disclosed today, which Qualys said impacts all Sudo installs where the sudoers file (/etc/sudoers) is present — which is usually found in most default Linux+Sudo installs.

    CVE-2021-3156 basically means free root on any setup that has sudo installed, omfg
    — Alba 🌸 (@mild_sunrise) January 26, 2021

    Making matters worse, the bug also has a long tail. Qualys said the bug was introduced in the Sudo code back in July 2011, effectively impacting all Sudo versions released over the past ten years.
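    Qualys reported the vulnerable ranges as legacy versions 1.8.2 through 1.8.31p2 and stable versions 1.9.0 through 1.9.5p1. A quick triage sketch that parses a `sudo --version` string against those ranges (illustrative only; many distributions backport the fix without bumping the version, so the package changelog is the authoritative check):

```python
import re

def parse(v: str):
    """'1.8.31p2' -> (1, 8, 31, 2); missing components default to 0."""
    m = re.fullmatch(r"(\d+)\.(\d+)\.?(\d+)?(?:p(\d+))?", v)
    if not m:
        raise ValueError(f"unrecognised sudo version: {v}")
    return tuple(int(g or 0) for g in m.groups())

def vulnerable_to_baron_samedit(v: str) -> bool:
    """Ranges Qualys reported for CVE-2021-3156:
    legacy 1.8.2 - 1.8.31p2 and stable 1.9.0 - 1.9.5p1."""
    t = parse(v)
    return ((1, 8, 2, 0) <= t <= (1, 8, 31, 2)) or \
           ((1, 9, 0, 0) <= t <= (1, 9, 5, 1))

assert vulnerable_to_baron_samedit("1.8.31")
assert vulnerable_to_baron_samedit("1.9.5p1")
assert not vulnerable_to_baron_samedit("1.9.5p2")  # the patched release
```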
    The Qualys team said they were able to independently verify the vulnerability and develop multiple exploit variants for Ubuntu 20.04 (Sudo 1.8.31), Debian 10 (Sudo 1.8.27), and Fedora 33 (Sudo 1.9.2).
    “Other operating systems and distributions are also likely to be exploitable,” the security firm said.
    All in all, the Baron Samedit vulnerability is one of the rare Sudo security flaws that can be successfully weaponized in the real world, unlike the two bugs disclosed in years prior.
    Qualys told ZDNet that if botnet operators brute-force low-level service accounts, the vulnerability could be abused in the second stage of an attack to help intruders easily gain root access and full control over a hacked server.
    And as ZDNet reported on Monday, these types of botnets targeting Linux systems through brute-force attacks are quite common these days.
    Today’s Sudo update should be applied as soon as possible to avoid unwanted surprises from botnet operators and malicious insiders (rogue employees) alike.

  • in

    Predictive policing is just racist 21st century cyberphrenology

    In 1836, the Scottish geologist, chemist, and “agricultural improver” Sir George Stewart Mackenzie was concerned about what he called the “recent atrocities” of violent crime in the British penal colony of New South Wales, Australia.
    The root cause, he thought, was a failure to manage which criminals were transported to work in the colony — especially the two-thirds of convicts who worked for private masters.
    “At present they are shipped off, and distributed to the settlers, without the least regard to their characters or history,” Mackenzie wrote in a representation [PDF] to Britain’s Secretary for the Colonies, Lord Glenelg.
    For Mackenzie it was a moral question. It was about rehabilitating a criminal regardless of “whether the individual have [sic] spent previous life in crime, or has been driven by hard necessity unwillingly to commit it”.
    Only convicts with the correct moral character should be sent to the colonies, to be brought back to “a course of industrious and honest habits”, he wrote.
    The rest could just rot in British prisons.
    So how did Mackenzie propose to identify these convicts with the correct moral character? By measuring the shape of their heads.

    “In the hands of enlightened governors, Phrenology will be an engine of unlimited improving power in perfecting human institutions, and bringing about universal good order, peace, prosperity, and happiness,” he wrote.
    Yes, in 1836, phrenology was promoted as a cutting-edge science that could predict, among many other things, a person’s likelihood of criminality. Now, of course, we know that it’s complete rubbish.
    Here in the 21st century, predictive policing, or algorithmic policing, makes similarly bold claims about its ability to spot career criminals before they commit their crimes.
    How predictive policing can entrench racist law enforcement
    At its core, predictive policing is simply about using the magic of big data to predict when, where, and by whom crime is likely to be committed.
    The payoff is meant to be a more efficient allocation of police resources, and less crime overall.
    Increasingly, it’s also about ubiquitous facial recognition technology.
    An important player here is the secretive company Clearview AI, a controversy magnet with far-right political links.
    Clearview’s tools have already been used by Australian Federal Police and police forces in Queensland, Victoria, and South Australia, though it took journalists’ investigations and a massive data breach to find that out.
    The Royal Canadian Mounted Police even denied using Clearview’s technology three months after they’d signed the contract.
    The potential payoff to all this isn’t just identifying and prosecuting criminals more efficiently after the fact.
    Increasingly, it’s also the idea that individuals who have been predicted to be potential criminals, or whose behaviour matches some predicted pattern for criminal behaviour, can be identified and tracked.
    At one level, predictive policing simply provides some science-ish rigour to the work of the cops’ own in-house intelligence teams.
    “Looking at crimes like burglary, one can create quite a useful predictive model because some areas have higher rates of burglary than others and there are patterns,” said Professor Lyria Bennett Moses, director of the Allens Hub for Technology, Law and Innovation at the University of New South Wales, last year.
    Cops also know, for example, that drunken violence is more likely in hot weather. An algorithm could help them predict just when and where it’s likely to kick off based on past experience.
    According to Roderick Graham, an associate professor of sociology at Old Dominion University in Virginia, there are more innovative ways of using data.
    Suppose the cops are trying to identify the local gang leaders. They’ve arrested or surveilled several gang members, and through “either interrogation, social media accounts, or personal observation”, they now have a list of their friends, family, and associates.
    “If they see that a person is connected to many gang members, this gives police a clue that they are important and maybe a leader,” Graham wrote.
    “Police have always done this. But now with computer analyses, they can build more precise, statistically sound social network models.”
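    The kind of social network model Graham describes can be as simple as degree centrality: count each person's ties and flag the best-connected node as a likely leader. A toy sketch with invented data:

```python
from collections import Counter

# Hypothetical surveillance data: undirected ties between individuals.
ties = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "C"), ("E", "D")]

def degree_centrality(edges):
    """Count ties per person; in this simplest network model, the
    node with the most connections is flagged as a likely leader."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return deg

deg = degree_centrality(ties)
assert deg.most_common(1)[0][0] == "A"  # "A" has the most connections
```

    Note that the model is only as good as the tie data fed into it, which is exactly where the bias problems discussed below enter.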
    But this is where it all starts to get wobbly.
    As American researchers William Isaac and Andi Dixon pointed out in 2017, while police data is often described as representing “crime”, that’s not quite what’s going on.
    “Crime itself is a largely hidden social phenomenon that happens anywhere a person violates a law. What are called ‘crime data’ usually tabulate specific events that aren’t necessarily lawbreaking — like a 911 call — or that are influenced by existing police priorities,” they wrote.
    “Neighbourhoods with lots of police calls aren’t necessarily the same places the most crime is happening. They are, rather, where the most police attention is — though where that attention focuses can often be biased by gender and racial factors.”
    Or as Graham puts it: “Because racist police practices overpoliced black and brown neighbourhoods in the past, this appears to mean these are high crime areas, and even more police are placed there.”
    Bennett Moses gave a distinctly Australian example.
    “If you go to police databases in Australia and look at offensive language crimes, it looks like it is only Indigenous people who swear because there isn’t anyone else who gets charged for it,” she wrote.
    “So you have a bias there to start within the data, and any predictive system is going to be based on historical data, and then that feeds back into the system.”
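    The feedback loop Bennett Moses describes is easy to demonstrate in a toy simulation: if recorded "crime" tracks patrol presence rather than actual offending, and patrols are then reallocated according to the records, an initial skew in attention never corrects, even when true offending rates are identical everywhere.

```python
def simulate(initial_patrols, true_rate=1.0, rounds=5):
    """Two areas with identical true crime rates.  Recorded 'crime' is
    proportional to patrol presence, and each round's patrols are
    reallocated in proportion to the records, so an initial skew in
    attention persists with no difference in actual offending."""
    patrols = list(initial_patrols)
    for _ in range(rounds):
        recorded = [p * true_rate for p in patrols]  # data mirrors attention
        total = sum(recorded)
        patrols = [10 * r / total for r in recorded]  # reallocate 10 units
    return patrols

final = simulate([6, 4])    # area 0 starts with slightly more patrols
assert final[0] > final[1]  # the data forever "confirms" area 0 is worse
```

    Even in this idealised model the bias never washes out; with any superlinear response (e.g. saturation patrols after a spike in records) it would amplify instead.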
    Cops don’t want to talk about predictive policing
    In 2017, NSW Police’s Suspect Targeting Management Plan (STMP) singled out children as young as 10 for stop-and-search and move-on directions whenever police encountered them.
    The cops haven’t really explained how or why that happens.
    According to the Youth Justice Coalition (YJC) at the time, however, the data they’ve managed to obtain shows that STMP “disproportionately targets young people, particularly Aboriginal and Torres Strait Islander people”.
    According to an evaluation of STMP in 2020 by the respected NSW Bureau of Crime Statistics and Research, “STMP continues to be one of the key elements of the NSW Police Force’s strategy to reduce crime”.
    The roughly 10,100 individuals subject to STMP-II since 2005, and the more than 1,020 subjected to an equivalent system for domestic violence cases (DV-STMP), were “predominately male and (disproportionately) Aboriginal”, they wrote.
    Yet when compared with non-Aboriginal people, the Aboriginal cohort in the sample saw a “smaller crime reduction benefit”.
    Victoria Police has thrown the veil of secrecy over their own predictive policing tool. They haven’t even released its name.
    The trial of this system only became public knowledge in 2020 when Monash University associate professor of criminology Leanne Weber published her report on community policing in Greater Dandenong and Casey.
    In interviews with young people of South Sudanese and Pacifika background, she heard how, at least in your correspondent’s view, racism is being built into the data from the very start.
    “Many experiences reported by community participants that appeared to be related to risk-based policing were found to damage feelings of acceptance and secure belonging,” she wrote.
    “This included being prevented from gathering in groups, being stopped and questioned without reason, and being closely monitored on the basis of past offending.”
    One participant seemed to nail what was going on: “The police don’t give a reason why they are accusing them. It’s so that the police can check and put it in their system.”
    Victoria Police told Guardian Australia that further details about the tool could not be released because of “methodological sensitivities”, whatever they are.
    It’s telling, however, that this secret tool was only used in Dandenong and surrounding Melbourne suburbs, one of the most disadvantaged and “culturally diverse” regions in Australia.
    More detailed explorations of predictive policing tools put it bluntly, like this headline at MIT Technology Review: “Predictive policing algorithms are racist. They need to be dismantled.”
    Or as John Lorinc wrote in his lengthy feature for the Toronto Star, “big data policing is rife with technical, ethical, and political landmines”.
    The pushback against predictive policing is underway
    At the global level, the United Nations Committee on the Elimination of Racial Discrimination has warned [PDF] that predictive policing systems that rely on historical data “can easily produce discriminatory outcomes”.
    “Both artificial intelligence experts and officials who interpret data must have a clear understanding of fundamental rights in order to avoid the entry of data that may contain or result in racial bias,” the committee wrote.
    In the UK, the Centre for Data Ethics and Innovation has said that police forces need to “ensure high levels of transparency and explainability of any algorithmic tools they develop or procure”.
    In Europe, the EU Commission’s vice president Margrethe Vestager said predictive policing is “not acceptable”.
    Individual cities have been banning facial recognition for policing, including Portland, Minneapolis, Boston and Somerville in Massachusetts, Oakland, and even tech hub San Francisco.
    At least the phrenologists were open and transparent
    Back in 1836, Mackenzie’s proposal went nowhere, despite his hard sell and offer to prove his plan with an experiment.
    “I now put into your hands a number of certificates from eminent men, confirming my former assertion, that it is possible to classify convicts destined for our penal settlements, so that the colonists may be freed from the risk of having atrocious and incorrigible characters allotted to them, and the colonial public from the evils arising out of the escape of such characters,” he wrote.
    Lord Glenelg, it turns out, wasn’t convinced that phrenology was a thing, and, in any event, he didn’t have the funding for it.
    The irate skull-fondlers expressed their dismay in The Phrenological Journal and Magazine of Moral Science for the year 1838 [PDF], even blaming the colonial governors for the violent crimes.
    “As phrenologists, we must assume (and we assume this, because we speak on the strength of undeniable facts,) that the occurrence of such outrages might be much diminished, if not wholly prevented; and consequently, we must regard those to whom the power of prevention is given, but who refuse to exert that power, as morally guilty of conniving at the most deadly crimes,” they wrote.
    The cops keep drinking the Kool-Aid
    There are three key differences between predictive policing in 2021 and 1836.
    First, the secrecy.
    Mackenzie “unhesitatingly” offered a public test of phrenology in front of Lord Glenelg and “such friends as you may wish to be present”. Today, it’s all confidential proprietary algorithms and police secrecy.
    Second, the gullibility.
    Even in a time of great faith in science and reason, Lord Glenelg was sceptical. These days the cops seem to drink the Kool-Aid as soon as it’s offered.
    And third, the morality, or rather, the lack of it.
    Whatever you may think of Mackenzie’s promotion of what we now know to be quackery, his overall aim was the moral improvement of society.
    He spoke out against the “ignorance of the human constitution” which led rulers to think that “degradation is… the fitting means to restore a human being to self-respect, and to inspire an inclination towards good conduct”.
    Among cops and technologists alike, a coherent discussion of ethics and human rights seems to be lacking. That must be fixed, and fixed soon.