
Apple to refuse government demands to expand scanning beyond child abuse


Apple has produced an FAQ [PDF] in response to criticism levelled at it after announcing plans to have devices scan for child abuse material in images uploaded to iCloud.

The child sexual abuse material (CSAM) detection system will have devices running iOS 15, iPadOS 15, watchOS 8, and macOS Monterey match images against a list of known CSAM image hashes provided by the US National Center for Missing and Exploited Children (NCMEC) and other child safety organisations before an image is stored in iCloud.

If a hash match is made, metadata that Apple is calling “safety vouchers” will be uploaded along with the image. Once an unspecified threshold of matches is reached, Apple will manually inspect the metadata and, if it regards the material as CSAM, the account will be disabled and a report sent to NCMEC.
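
As an illustration of that two-step flow, the sketch below models it in Swift. Every name in it is invented for this example, and the string hash and readable boolean flag are stand-ins: Apple's published design describes its NeuralHash perceptual hash and cryptographically protected safety vouchers that the server cannot interpret below the threshold, none of which is modelled here.

```swift
import Foundation

// Illustrative sketch only: names, the string hash stand-in, and the readable
// boolean flag are assumptions, not Apple's actual implementation.

typealias ImageHash = String

// Hashes of known CSAM shipped with the OS. In Apple's design the same set,
// sourced from NCMEC and other child safety organisations, is stored on every device.
let knownCSAMHashes: Set<ImageHash> = ["hash-of-known-image-1", "hash-of-known-image-2"]

// Metadata uploaded alongside each photo, what Apple calls a "safety voucher".
struct SafetyVoucher {
    let photoID: UUID
    let matchedKnownHash: Bool
}

// On-device step: match the photo's hash before it is stored in iCloud Photos.
func makeVoucher(for photoID: UUID, perceptualHash: ImageHash) -> SafetyVoucher {
    SafetyVoucher(photoID: photoID,
                  matchedKnownHash: knownCSAMHashes.contains(perceptualHash))
}

// Server-side step: only once an account crosses the (unspecified) threshold of
// matches does Apple perform human review and, if the material is confirmed as
// CSAM, disable the account and report to NCMEC.
func accountCrossesThreshold(_ vouchers: [SafetyVoucher], threshold: Int) -> Bool {
    vouchers.filter { $0.matchedKnownHash }.count >= threshold
}
```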

Much of the criticism has revolved around the idea that, however well-intentioned and currently limited, the system could be expanded by Apple alone, or under a court order, to hunt for other types of material.

Apple said its processes were designed to prevent that from happening.

“CSAM detection for iCloud Photos is built so that the system only works with CSAM image hashes provided by NCMEC and other child safety organizations,” Apple said.

“There is no automated reporting to law enforcement, and Apple conducts human review before making a report to NCMEC. As a result, the system is only designed to report photos that are known CSAM in iCloud Photos.

“In most countries, including the United States, simply possessing these images is a crime and Apple is obligated to report any instances we learn of to the appropriate authorities.”

On the prospect of being forced to add other hashes to its dataset, Apple referred to its past refusals to help US law enforcement.

“Apple will refuse any such demands,” it said. “We have faced demands to build and deploy government-mandated changes that degrade the privacy of users before, and have steadfastly refused those demands. We will continue to refuse them in the future.

“Let us be clear, this technology is limited to detecting CSAM stored in iCloud and we will not accede to any government’s request to expand it. Furthermore, Apple conducts human review before making a report to NCMEC. In a case where the system flags photos that do not match known CSAM images, the account would not be disabled and no report would be filed to NCMEC.”

Apple claimed its system would prevent non-CSAM images from being injected and flagged, since the company does not add to the set of hashes used for matching, and humans are involved in the verification process.

“The same set of hashes is stored in the operating system of every iPhone and iPad user, so targeted attacks against only specific individuals are not possible under our design,” Apple said.

“As a result, system errors or attacks will not result in innocent people being reported to NCMEC.”

The iPhone maker reiterated its claim that the solution has privacy benefits over scanning all images uploaded to the cloud.

“Existing techniques as implemented by other companies scan all user photos stored in the cloud,” it said.

“This creates privacy risk for all users. CSAM detection in iCloud Photos provides significant privacy benefits over those techniques by preventing Apple from learning about photos unless they both match to known CSAM images and are included in an iCloud Photos account that includes a collection of known CSAM.”

Apple also said the feature would not run if users have iCloud Photos disabled and would not work on the “private iPhone photo library on the device”.

On the scanning of images in iMessage, Apple expanded on the requirements for parents to be alerted: a family group must be created and parents must opt in.

“For child accounts age 12 and younger, each instance of a sexually explicit image sent or received will warn the child that if they continue to view or send the image, their parents will be sent a notification. Only if the child proceeds with sending or viewing an image after this warning will the notification be sent,” it said.

“For child accounts age 13-17, the child is still warned and asked if they wish to view or share a sexually explicit image, but parents are not notified.”
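
The age split described in those two paragraphs amounts to a simple decision rule, sketched below. The type and function names are invented for illustration and are not Apple's Messages implementation.

```swift
// Illustrative only: enum, function, and parameter names are assumptions.
enum ExplicitImageResponse {
    case warnChildAndNotifyParentsIfTheyProceed  // child accounts aged 12 and younger
    case warnChildOnly                           // child accounts aged 13 to 17
    case noIntervention                          // parents have not opted in
}

func responseToExplicitImage(childAge: Int, parentsOptedIn: Bool) -> ExplicitImageResponse {
    // The feature only applies once a family group exists and parents opt in.
    guard parentsOptedIn else { return .noIntervention }
    switch childAge {
    case 0...12:  return .warnChildAndNotifyParentsIfTheyProceed
    case 13...17: return .warnChildOnly
    default:      return .noIntervention
    }
}
```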

Apple said it was looking at adding “additional support to Siri and Search to provide victims — and people who know victims — more guidance on how to seek help”.

Although the CSAM system is currently limited to the US, Cupertino could soon be facing pressure from Canberra to bring it to Australia.

On Monday, the government unveiled a set of rules for online safety that will cover social media, messaging platforms, and any relevant electronic service of any kind.

The provider is expected to minimise the availability of cyberbullying material targeted at an Australian child, cyber abuse material targeted at an Australian adult, a non-consensual intimate image of a person, class 1 material, material that promotes abhorrent violent conduct, material that incites abhorrent violent conduct, material that instructs in abhorrent violent conduct, and material that depicts abhorrent violent conduct.

The rules also set out additional expectations, such as that the provider of the service will take reasonable steps to proactively minimise the extent to which material or activity on the service is or may be unlawful or harmful.

Australia’s eSafety Commissioner will have the power to order tech companies to report on how they are responding to these harms and issue fines of up to AU$555,000 for companies and AU$111,000 for individuals if they don’t respond.
