Google joins call for clarification on much of Australia's 'rushed' Online Safety Bill

Communications Minister Paul Fletcher last week put forward Australia’s new Online Safety Bill, which the government touted as further empowering the eSafety Commissioner to request the removal of harmful material from websites and social media platforms, as well as introducing minimum standards for service providers to comply with.

The Online Safety Bill 2021 entered Parliament on Wednesday, eight business days after consultation on the draft legislation closed. Submissions made to the draft consultation are yet to be released, but Fletcher said 370 were received.

The Bill is before the House of Representatives and was referred to the Senate Standing Committees on Environment and Communications last Thursday. Submissions to the committee close on Tuesday — three business days after it was referred — with a report from the committee due on March 11, which is two weeks after the Bill was introduced.

The Bill contains six key priority areas:

- A cyberbullying scheme to remove material that is harmful to children;
- An adult cyber abuse scheme to remove material that seriously harms adults;
- An image-based abuse scheme to remove intimate images that have been shared without consent;
- Basic online safety expectations for the eSafety Commissioner to hold services accountable;
- An online content scheme for the removal of “harmful” material through take-down powers; and
- An abhorrent violent material blocking scheme to block websites hosting abhorrent violent material.

The committee has made a handful of submissions to its speedy inquiry available, including one from Google Australia [PDF], which, citing the “abbreviated timetable for this inquiry”, re-submitted the submission it made to the draft consultation.

Google raised concerns that the schemes appear to apply to services well beyond social media platforms, such as messaging services, email, application stores, and business-to-business services that act as providers for other hosting services.

“Therefore, compliance with certain obligations contained within the Bill will be challenging if not impossible for Google’s Cloud business due to technical limitations on how Google can and should moderate business client content,” it wrote. “Similar challenges would exist within, for instance, app distribution platforms like Google Play. There, too, the app platform operator does not have the ability to remove individual pieces of content from within an app.”

Among many other concerns, Google also took issue with the Bill’s defined takedown period, which the Bill proposes to halve from the current 48 hours to 24 hours.

It said specifying an exact turnaround time, regardless of case complexity, would provide an incentive for companies to over-remove, thereby silencing political speech and user expression.

Electronic Frontiers Australia (EFA) is similarly concerned about the Bill. It said it was deeply troubled by the rush to accumulate new powers, concentrated in few hands and subject to little oversight or review.

“Authorities’ failure to enforce existing laws is frequently used to justify new powers that can be used ‘more efficiently’ which in practice means it will be done with less oversight and with fewer safeguards against abuse,” a submission penned by EFA board member and PivotNine founder and chief analyst Justin Warren said.

“Power over others should be difficult to use. This difficulty provides an inbuilt safeguard against abuse which is necessary because all power is abused, sooner or later.

“Australia is rushing to construct a system of authoritarian control over the population that should not be welcomed by a liberal democracy. It is leading Australia down a very dark path.”

Among other recommendations, the EFA asked that the Bill’s introduction be delayed until an enforceable federal human rights framework is introduced into Australian law.

Part of the Bill provides that the eSafety Commissioner may obtain information about the identity of an end-user of a social media service, a relevant electronic service, or a designated internet service; another part provides the commissioner with investigative powers, including a requirement that a person provide “any documents in the possession of the person that may contain information relevant”.

As a result, Digital Rights Watch is concerned that the commissioner’s information-gathering and investigative powers could extend to encrypted services.

It has asked for clarification of the scope of these powers, along with a clear indication that providers are not expected to comply with a notice if doing so would require them to decrypt private communications channels or build systemic weaknesses.

Making its views on the Bill public via its own website, Digital Rights Watch said the Bill introduces powers that are likely to undermine digital rights and exacerbate harm for vulnerable groups.

The online content scheme, Digital Rights Watch said, is likely to cause significant harm to those who work in the sex industry, including sex workers, pornography creators, online sex-positive educators, and activists.

The abhorrent violent material blocking scheme, which comes in direct response to the Christchurch terrorist attack, is considered overly simplistic by the group.

“In some circumstances, violence captured and shared online can be of vital importance to hold those in power accountable, to shine the light on otherwise hidden human rights violations, and be the catalyst for social change,” it wrote, pointing specifically to the video of George Floyd’s death.

“Simply blocking people from seeing violent material does not solve the underlying issues causing the violence in the first place and it can also lead to the continuation of violence behind closed doors, out of sight from those who might seek accountability. It is essential that this scheme not be used to hide state use of violence and abuses of human rights.”

The organisation said that when automated processes such as AI are used to determine which content is or isn’t harmful, they have been shown to disproportionately remove some content over other content, penalising Black, Indigenous, fat, and LGBTQ+ people.

“While the goal of minimising online harm for children is vital to our communities, we must acknowledge that policing the internet in such broad and simplistic ways will not guarantee us safety and will have overbroad and lasting impacts across many different spaces,” Digital Rights Watch said.

Submissions close today, and the committee has scheduled a hearing for Friday.
