
New Zealand introduces Bill to block violent extremist content


New Zealand ‘Beehive’

Image: Chris Duckett/ZDNet

The New Zealand government has introduced a Bill that proposes to block violent extremist content, create criminal offences, allow the issuing of take-down notices, and hand the chief censor the power to make immediate decisions on what material should be blocked.

The objective of the Films, Videos, and Publications Classification (Urgent Interim Classification of Publications and Prevention of Online Harm) Amendment Bill is to update the Films, Videos, and Publications Classification Act 1993 to allow for urgent prevention and mitigation of harms caused by objectionable publications.

The Bill mostly applies to online publications and provides additional regulatory tools to “manage harms caused by content that is livestreamed or hosted by online content hosts”.

Once the Bill is passed, livestreaming objectionable content will be a criminal offence. As any digital reproduction of a livestream is considered a recording, non-real-time copies of such a video are already subject to existing provisions in the Act.

The criminal offence of livestreaming objectionable content applies only to the individual or group livestreaming the content. The Bill notes it does not apply to the online content hosts that provide the infrastructure or platform for the livestream.

Under the Bill, the chief censor will be given powers to make immediate interim classification assessments of any publication in situations where the sudden appearance and viral distribution of objectionable content is injurious to the public good.

The interim decision will remain in place for a maximum of 20 business days, during which the chief censor can roll the classification back.

The interim decision will also apply to all publications covered by the Act, not just online ones.

The Bill also authorises an Inspector of Publications to issue a take-down notice for objectionable online content. Notices will be issued to an online content host and will direct the removal of a specific link so that the content is no longer viewable in New Zealand.

Failure to comply could see an online content host subject to civil pecuniary penalties.

“It is intended (but not required by the Bill) that the authority to issue a take-down notice will only be exercised in situations where other options for seeking the removal of objectionable content online have proven ineffective,” the Bill explains.

“The current collaborative practice of requesting online content hosts to voluntarily remove identified objectionable content will continue to be the first and preferred approach.

“Under the Bill, a civil pecuniary penalty will be imposed on online content hosts that do not comply with an issued take-down notice in relation to objectionable online content. This change will bring online content hosts in line with the expectations of businesses operating in New Zealand as they relate to physical analogue content classified as objectionable.”

Under the Bill, the “safe harbour” provisions present in the Harmful Digital Communications Act 2015 (HDC Act) will not apply to objectionable online content.

As detailed in the Bill, section 24 of the HDC Act states that online content hosts cannot be charged under New Zealand law for hosting harmful content on their platforms if they follow certain steps when a complaint is made.

“This creates the potential for exemption for online content hosts from any criminal or civil liability if they break the law under the Act, (which is concerned with more serious content), but follow steps outlined in the HDC Act,” it states.

As a result, under the Bill, section 24 of the HDC Act would not apply to the operation of the Classification Act.

This means the enforcement of any new or modified offences under the Act would not be limited by the HDC Act's safe harbour provisions for online content hosts.

“It would ensure online content hosts can be prosecuted for hosting objectionable content if they are liable for doing so,” the Bill reads.

The Bill also allows for future content-blocking mechanisms to be introduced, providing the government with explicit statutory authority to explore and implement such mechanisms through regulations.

Currently, the only government-backed web filter is a voluntary mechanism operating at the internet service provider (ISP) level and is designed to block child sexual exploitation material.

The move follows the devastating terrorist attack in Christchurch in March 2019.

In the immediate wake of the attack, a video of what had occurred was viewed around 4,000 times online and was live for 29 minutes before it was finally reported and taken down.

Despite its removal, approximately 1.5 million copies of the video sprang up on the social network in the first 24 hours after the attack. However, only around 300,000 copies were actually published, as over 1.2 million were blocked at upload.

Over on YouTube, a copy of the video was uploaded once every second during the first 24 hours following the terrorist attack.

Speaking previously, New Zealand Privacy Commissioner John Edwards said Facebook knew of the potential for its service to be used in this way before it launched that service.

“It knew, and failed to take steps to prevent its platform, and audience and technology from being used in that way,” he said.

“It was predictable and predicted. And the company responsible was silent — confident that the protection afforded by its home jurisdiction would shield it from liability everywhere.”

Edwards believes digital platforms need to adapt to the jurisdictions in which they operate, not the other way around.

He has also previously labelled the social network "morally bankrupt pathological liars".

RELATED COVERAGE

NZ Privacy Commissioner: Facebook knew what could be streamed on its platform

John Edwards said digital platforms need to adapt to the jurisdictions in which they operate, and take steps to prevent their platform, and audience and technology, from being used in such a way as was seen in Christchurch.

Facebook, Microsoft, Twitter, YouTube overhaul counter-terrorism efforts for Christchurch Call

A new crisis response protocol has also been launched for the 48 countries, three international organisations, and eight tech firms that are members of the Call.

ISPs to continue blocking graphic violent content in Australia

The new protocol positions ISPs to block websites that host graphic material, such as a terrorist act or violent crime, as part of efforts to ‘stem the risk of its rapid spread as an online crisis event unfolds’.

