The Australian Department of Defence has said it began assessing the network of Defence Force Recruiting (DFR) on December 24, a week after Citrix put out a vulnerability notice impacting its Application Delivery Controller (ADC). The possibility of the vulnerability having been exploited led to the network being quarantined for ten days in February.

Meta, the parent company of Facebook, has pushed back its plans to enable end-to-end encryption (E2EE) as the default on Facebook Messenger and Instagram until 2023.
Messenger and Instagram chats now run on the same platform, reflecting the company's push to unify its messaging products and align them with WhatsApp, where E2EE is the default, based on Signal's E2EE protocol. In April, Facebook said that Messenger and Instagram direct messages wouldn't be "fully end-to-end encrypted until sometime in 2022 at the earliest".

E2EE should mean that even Facebook employees with physical access to its hardware in data centers can't access the content of messages, preventing the firm and its employees from producing some evidence even when ordered by a court to do so. Facebook rolled out E2EE for WhatsApp in 2016 using the protocol developed by messaging platform Signal, which gained users after Facebook announced plans to share user data between WhatsApp and Facebook to expand its offering for businesses on both platforms.

Antigone Davis, Meta's global head of safety, detailed Meta's encryption challenges in an article for the UK's The Telegraph. "There's an ongoing debate about how tech companies can continue to combat abuse and support the vital work of law enforcement if we can't access your messages," wrote Davis.
"We believe people shouldn't have to choose between privacy and safety, which is why we are building strong safety measures into our plans and engaging with privacy and safety experts, civil society and governments to make sure we get this right."

Davis said Meta has three approaches to the question of safety. The first is detecting suspicious patterns, such as someone setting up multiple new profiles and messaging strangers; she said this system is in place and that "we're working to improve its effectiveness". The second is giving Instagram users the ability to filter direct messages based on offensive words. The third is encouraging people to report harmful messages.

She goes on to point out that law enforcement still has access to metadata for criminal investigations. "Even with billions of people already benefiting from end-to-end encryption, there is more data than ever for the police to use to investigate and prosecute criminals, including phone numbers, email addresses, and location data," she notes.

"Our recent review of some historic cases showed that we would still have been able to provide critical information to the authorities, even if those services had been end-to-end encrypted," wrote Davis. "While no systems are perfect, this shows that we can continue to stop criminals and support law enforcement."

"We're taking our time to get this right, and we don't plan to finish the global rollout of end-to-end encryption by default across all our messaging services until sometime in 2023," Davis said.

The US, UK, and Australia in 2019 called on Facebook to create a backdoor to access encrypted messages. Facebook has resisted these calls. Facebook CEO and co-founder Mark Zuckerberg announced the name change to Meta in November, a month after former employee Frances Haugen went public with allegations that the company's algorithms are used to spread harmful content.
Meta and its brands are facing new laws in the UK that could require them to protect users from harmful content.
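As background, the guarantee E2EE aims for — that the relay server only ever handles ciphertext — can be illustrated with a deliberately simplified sketch. This is a toy one-time-pad construction for illustration only, not Signal's actual protocol (which uses the Double Ratchet with X3DH key agreement); every name below is hypothetical:

```python
import secrets

def xor_bytes(data: bytes, pad: bytes) -> bytes:
    # One-time-pad style XOR of two equal-length byte strings.
    return bytes(a ^ b for a, b in zip(data, pad))

# The two endpoints share a secret key out of band; the server never sees it.
shared_key = secrets.token_bytes(64)

plaintext = b"meet at noon"
pad = shared_key[:len(plaintext)]   # toy only: a real scheme never reuses key material
ciphertext = xor_bytes(plaintext, pad)

# The server only relays ciphertext, so operators (or a court order served on
# them) cannot recover message content without the endpoints' key.
assert ciphertext != plaintext
assert xor_bytes(ciphertext, pad) == plaintext
```

The point of the sketch is simply that decryption requires key material held only at the endpoints, which is why E2EE providers can offer metadata but not content in response to legal requests.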

The Russian-backed group Nobelium, which gained notoriety for the SolarWinds supply chain hack — an attack that saw a backdoor planted in thousands of organisations, with nine US federal agencies and about 100 US companies cherry-picked for actual compromise and data theft — has now hit Microsoft itself.

In an update on Friday, Microsoft said it found "information-stealing malware" on the machine of one of its support agents, which had access to "basic account information for a small number of our customers".

"The actor used this information in some cases to launch highly-targeted attacks as part of their broader campaign. We responded quickly, removed the access and secured the device," the company said. "The investigation is ongoing, but we can confirm that our support agents are configured with the minimal set of permissions required as part of our Zero Trust 'least privileged access' approach to customer information. We are notifying all impacted customers and are supporting them to ensure their accounts remain secure."

Microsoft recommended using multi-factor authentication and zero trust architectures to help protect environments. Redmond recently warned that Nobelium was conducting a phishing campaign impersonating USAID after it managed to take control of a USAID account on the email marketing platform Constant Contact. The phishing campaign targeted around 3,000 accounts linked to government agencies, think tanks, consultants, and non-governmental organisations, Microsoft said.

In its Friday update, Microsoft said it has continued to see "password spray and brute-force attacks". "This recent activity was mostly unsuccessful, and the majority of targets were not successfully compromised — we are aware of three compromised entities to date," it said.
"All customers that were compromised or targeted are being contacted through our nation-state notification process."

Malware made its way through Microsoft's normal driver signing process

In a second Friday post, Microsoft admitted a malicious driver had managed to get signed by the software giant. "The actor's activity is limited to the gaming sector specifically in China and does not appear to target enterprise environments. We are not attributing this to a nation-state actor at this time," the company said.

"The actor's goal is to use the driver to spoof their geo-location to cheat the system and play from anywhere. The malware enables them to gain an advantage in games and possibly exploit other players by compromising their accounts through common tools like keyloggers."

As a result of the incident, Microsoft said it would be "refining" its policies, validation, and signing processes. Microsoft added that the drivers would be blocked through its Defender applications.

While Microsoft called the malware a driver, Karsten Hahn of G Data, which discovered the Netfilter malware, labelled it a rootkit. "At the time of writing it is still unknown how the driver could pass the signing process," he wrote. Hahn said a search of VirusTotal produced sample signatures going back to March. Netfilter contacts a particular IP address for updates, installs a root certificate, and updates proxy settings, Hahn said.

Microsoft said that for the attack to work, the attackers must have admin privileges for the installer to update registry keys and install the driver, or must convince the user to do it themselves.
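The "least privileged access" model Microsoft describes — support agents granted only the minimal permission set their job requires — can be sketched as a default-deny role allow-list. The role and permission names below are hypothetical, not Microsoft's actual scheme:

```python
# Hypothetical role -> permission allow-lists; anything not granted is denied.
ROLE_PERMISSIONS = {
    "support_agent": {"read_basic_account_info"},
    "account_admin": {"read_basic_account_info", "read_billing", "reset_credentials"},
}

def is_allowed(role: str, permission: str) -> bool:
    # Default deny: unknown roles and ungranted permissions are both refused.
    return permission in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("support_agent", "read_basic_account_info")
assert not is_allowed("support_agent", "read_billing")            # no escalation
assert not is_allowed("unknown_role", "read_basic_account_info")  # unknown role denied
```

The design point is that a compromised support-agent machine exposes only the data that role was explicitly granted, which is why Microsoft could scope the breach to "basic account information".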

In a submission to the Senate Select Committee and its inquiry into Foreign Interference Through Social Media, controversial video-sharing app TikTok has taken the opportunity to address what it has labelled misinformation in regards to itself.
TikTok, owned by China’s ByteDance Ltd, is currently offered in “all major markets” except China, where the company offers a different short-form video app called Douyin, and Hong Kong, following the introduction of its new security law.
It is currently banned in India and was previously on the US’ chopping block when President Donald Trump issued executive orders to ban the app. TikTok received approval to operate in the US, however, when the app’s US footprint was sold to Oracle and Walmart.
Read more: What TikTok’s big deal means for cloud, e-commerce: TikTok Global created with Oracle, Walmart owning 20%
The app launched in May 2017, with its official Australian launch following in May 2019.
TikTok said the personal data it collects from Australian users is stored on servers located in the United States and Singapore.
“We have strict controls around security and data access. As noted in our transparency reports, TikTok has never shared Australian user data with the Chinese government, nor censored Australian content at its request,” it wrote [PDF].
“We apply HTTPS encryption to user data transmitted to our data centres and we also apply key encryption to the most sensitive personal data elements. User data is only accessible by employees within the scope of their jobs and subject to internal policies and controls.”
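The layered approach TikTok describes — TLS for data in transit plus additional encryption applied to the most sensitive fields — can be sketched as field-level encryption at rest. The sketch below is a toy stream cipher built from SHA-256 in counter mode, for illustration only; a production system would use a vetted AEAD such as AES-GCM and a managed key service, and all names here are hypothetical:

```python
import hashlib
import secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Derive a pseudo-random keystream by hashing key + nonce + counter blocks.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_field(key: bytes, value: bytes) -> bytes:
    nonce = secrets.token_bytes(16)  # fresh nonce per field keeps keystreams unique
    ks = _keystream(key, nonce, len(value))
    return nonce + bytes(a ^ b for a, b in zip(value, ks))

def decrypt_field(key: bytes, blob: bytes) -> bytes:
    nonce, ct = blob[:16], blob[16:]
    ks = _keystream(key, nonce, len(ct))
    return bytes(a ^ b for a, b in zip(ct, ks))

# Only the sensitive field is encrypted before the record is stored; less
# sensitive fields remain queryable in plaintext.
key = secrets.token_bytes(32)
record = {
    "display_name": "casual_user_42",
    "phone_number": encrypt_field(key, b"+61 400 000 000"),
}
assert decrypt_field(key, record["phone_number"]) == b"+61 400 000 000"
```

Encrypting only selected fields, rather than whole rows, is what lets access controls like "only accessible by employees within the scope of their jobs" be enforced at the granularity of individual data elements.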
The company said any legal requests from the Chinese government relating to Australian TikTok user data would need to go through the Mutual Legal Assistance Treaty (MLAT) process.
“The Chinese government or law enforcement would need to send the evidence disclosure request through the relevant MLAT process.”
If the data was stored in the United States, the US Department of Justice (DoJ) would be the appropriate body to consider the MLAT request.
“If the US DoJ approved the evidence request, the US DoJ would send the request on to us at TikTok. If the request from the US DoJ complied with our processes and legal requirements, we would provide the user data information to the US DoJ, who would in turn pass the data on to the Chinese government or law enforcement,” it said.
"To date, we have not received any MLAT requests in respect of Australian user data, nor have we received requests to censor Australian content from the Chinese government."
Prime Minister Scott Morrison in August said that he had a “good look” at TikTok and there was no evidence to suggest the misuse of any person’s data.
“We have had a look, a good look at this, and there is no evidence for us to suggest, having done that, that there is any misuse of any people’s data that has occurred, at least from an Australian perspective, in relation to these applications,” he told the Aspen Security Forum.
“You know, there’s plenty of things that are on TikTok which are embarrassing enough in public. So that’s sort of a social media device.”
Morrison said the same issues are present with other social media companies, such as Facebook.
“Enormous amounts of information is being provided that goes back into systems. Now, it is true that with applications like TikTok, those data, that data, that information can be accessed at a sovereign state level. That is not the case in relation to the applications that are coming out of the United States. But I think people should understand and there’s a sort of a buyer beware process,” the prime minister added.
“There’s nothing at this point that would suggest to us that security interests have been compromised or Australian citizens have been compromised because of what’s happening with those applications.”
TikTok said it understands that with “[its] success comes responsibility and accountability”.
“The entire industry has received scrutiny, and rightly so. Yet, we have received even more scrutiny due to the company’s origins,” it said.
“Whilst we don’t want TikTok to be a political football, we accept this scrutiny and embrace the challenge of giving peace of mind by providing even more transparency and accountability.”
See also: Countering foreign interference and social media misinformation in Australia
In its submission, TikTok outlined the steps it has taken in relation to COVID-19, such as removing content containing medical misinformation and also content that included false information that was “likely to stoke panic and consequently result in real world harm”.
The company added that it understood it has a responsibility to protect users from misleading information, educate on why it is inappropriate to post and spread misinformation, as well as encourage users to think twice about the information provided in any given post.
TikTok said it has also limited the distribution of conspiratorial content that may allege COVID-19 was intentionally developed by a person, group, or institution for nefarious purposes, and has removed content suggesting that a certain race, ethnicity, gender, or any member of a protected group is more susceptible to contracting and/or spreading coronavirus.
“In light of the pandemic and the serious risk it poses to public health, we are erring on the side of caution when reviewing reports related to misinformation that could cause harm to our community or to the larger public. This may lead to the removal of some borderline content,” it wrote.
TikTok said it is also continuing to invest in efforts to actively identify misinformation and to prevent inauthentic behaviour. It boasts a TikTok Transparency and Accountability Centre in Los Angeles, with another being built in Washington DC.
The app's community guidelines also state that TikTok is not the place to post, share, or promote: harmful or dangerous content; graphic or shocking content; discrimination or hate speech; nudity or sexual activity; child safety infringement; harassment or cyberbullying; intellectual property infringement; impersonation; or spam, scams, and other misleading content.
“We continue to consult with a wide range of industry experts, academics and civil society organisations to seek guidance on improving our policies,” it said.
“We welcome collaboration with Australian industry players and regulators. This includes working with the Australian Communications and Media Authority (ACMA), towards the development of a draft industry code of conduct on misinformation, which is due for release later this year.”
TikTok is due to appear before the committee on Friday. Labor previously said it wanted to ask TikTok how it approaches Australian privacy laws.