More stories


    Oxfam Australia supporters embroiled in new data breach

    Oxfam Australia has confirmed a data breach after a database belonging to the organization was leaked on an underground forum. 

    After being made aware of a suspected security incident by Bleeping Computer, the charity’s Australian arm has now confirmed that supporters of the charity have been impacted. 
    A threat actor was attempting to sell a database containing Oxfam Australia records on an underground forum and this information appears to have subsequently been leaked in February. 
    The records have been added to Have I Been Pwned, a search engine for users to see if their information has been leaked in data breaches. According to HIBP, 1.8 million unique email addresses, names, phone numbers, physical addresses, genders, and dates of birth were included — alongside partial credit card data in a small number of cases. 
    Donation histories may have also been exposed. 
    In a statement concerning the data breach, Oxfam Australia said a database was compromised on January 20, 2021, and the organization was made aware of the issue on January 27. 
    “The database includes information about supporters who may have signed a petition, taken part in a campaign, or made donations or purchases through our former shops,” the charity said. 

    The group, however, will not say exactly how many individuals have been affected. 
    Oxfam Australia has notified the Office of the Australian Information Commissioner (OAIC) and Australian Cyber Security Centre (ACSC). Impacted supporters will also be contacted. 
    No account passwords are thought to have been compromised and so the charity says it will “not be asking supporters to change their password.” 
    However, as is the case with any data breach, it is recommended that users do so anyway in the interest of their personal security. If the same password is in use elsewhere, these account credentials should also be changed. 
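Have I Been Pwned, mentioned above, also exposes a Pwned Passwords "range" API that lets users check whether a password appears in known breaches without ever sending the password itself. A minimal sketch of the k-anonymity scheme it uses follows; the endpoint URL is HIBP's documented one, but the network call is left out, so `breach_count` takes the response body as an argument:

```python
import hashlib

HIBP_RANGE_URL = "https://api.pwnedpasswords.com/range/"  # documented k-anonymity endpoint

def hibp_hash_parts(password: str) -> tuple:
    """Split the password's SHA-1 hash into the 5-character prefix that is
    sent to HIBP and the 35-character suffix that is matched locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def breach_count(password: str, range_response: str) -> int:
    """Given the plain-text body returned by GET HIBP_RANGE_URL + prefix
    (one 'SUFFIX:COUNT' pair per line), return how many breaches held it."""
    _, suffix = hibp_hash_parts(password)
    for line in range_response.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0
```

Only the first five characters of the hash ever leave the machine; matching the suffix against the returned candidates happens locally, which is why the scheme is safe to use even with a live password.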
    Previous and related coverage
    Have a tip? Get in touch securely via WhatsApp | Signal at +447713 025 499, or over at Keybase: charlie0


    Google addresses customer data protection, security in Workspace

    Google has outlined how the company handles customer data in response to a Dutch data protection assessment. 

    Launched in October, Google Workspace is an enterprise suite for applications including Gmail, Meet, Drive, and Sheets, software that can be useful for businesses currently adopting work from home or hybrid workplace models. 
    A Data Protection Impact Assessment (DPIA) was recently published by Dutch data protection authorities outlining concerns over data handling in Google Workspace. 
    The DPIA identified ten original ‘risk’ factors for government agencies adopting Google Workspace, citing issues including a lack of transparency concerning the purposes behind processing both customer and diagnostic data; potential legal gray areas surrounding both the tech giant and government bodies acting as data controllers or processors; “privacy-unfriendly” default settings; and potential spill-overs between ‘one-account’ users in personal and enterprise settings. 
    On Monday, Google Cloud VP of EMEA South, Samuel Bonamigo, said that in response to the DPIA and a separate assessment of Google Workspace for Education delivered to the Dutch government, Google “welcomes the opportunity to demonstrate our commitment to privacy and security.”
    Google is in discussion with the Dutch government over the concerns highlighted, but wants to emphasize that Workspace solutions have been designed “to secure and protect the privacy of our customers’ data.”
    “Our cloud is designed to empower European organizations’ strict security and privacy requirements and expectations,” Google says. “We adhere to regulatory and compliance requirements to protect our customers’ data. And we believe that it is deeply important for us to be transparent about our products and our practices.”

    Google says that user or service data is not used for targeted ads or creating ad profiles, and ads are not shown in Workspace and Workspace for Education Core Services, which are the premium versions of existing tools. Cloud customer data is also only processed based on customer agreements and is kept in the control of the user, the company says. 
    Google has also created the Google Cloud Privacy Notice to outline how service data is processed, alongside a new Google Workspace for Education data protection implementation guide (.PDF). 
    “Our goal in addressing the DPIA is complete transparency for our customers, regulators, and policymakers on the open issues,” Google said. “We will continue to discuss the findings with the Dutch government in the next few months, with the goal of reaching an agreement that will lead to more choice for public sector organizations in the Netherlands and beyond.”
    In related news, Google has also updated Google Workspace with new features including new security access controls, the “Workspace Frontline” function for key workers that need to use their own devices to access corporate resources, improved endpoint management, and support for Google Assistant in Workspace. 
    On Monday, Google also warned of an increase in bots targeting businesses, not only to perform Distributed Denial-of-Service (DDoS) attacks but also to carry out content scraping and other forms of attack.


    What hacking attacks can teach us about defending networks

    A water treatment plant fell victim to a hacker to the extent that the intruder was able to tamper with chemical levels and attempt to poison the drinking water supply.
    Nobody was harmed when the intruder interfered with the system at the water treatment facility in Oldsmar, Florida because the changes were spotted and the chemical levels reverted to normal, but the incident is a reminder to all organisations that networks must be secured against cyberattacks, especially if systems that manage physical capabilities can be remotely accessed and manipulated.


    “What we can learn from this from a defender and an operator perspective as the utility is making sure that we’re securing credentials and, wherever possible, limiting the exposure of authentication portals to external entities and implementing multi-factor authentication wherever possible to really minimize the impact of credential guessing,” Joe Slowik, senior security researcher at DomainTools, told ZDNet Security Update.
    Additional security capabilities, such as multi-factor authentication, can also provide a further barrier against an attacker gaining access.
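As an illustration of the kind of multi-factor barrier Slowik describes, time-based one-time passwords (TOTP, RFC 6238) derive a short-lived code from a shared secret and the clock, so a stolen or guessed password alone is not enough to log in. This is a minimal sketch of the standard algorithm, not production code:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, digits: int = 6, step: int = 30) -> str:
    """Minimal RFC 6238 TOTP: HMAC-SHA1 over a 30-second time counter,
    dynamically truncated (per RFC 4226) to a short one-time code."""
    now = int(time.time()) if for_time is None else int(for_time)
    counter = struct.pack(">Q", now // step)   # 8-byte big-endian time counter
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                    # dynamic truncation offset
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)
```

With the RFC 6238 test secret `b"12345678901234567890"` and timestamp 59, this produces the specification's published value, which is a quick way to sanity-check any TOTP implementation.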
    In this instance, the attack was only spotted after the intruder had attempted to manipulate industrial control systems. To fully secure an industrial network, protections should be in place to detect suspicious activity before attackers can attempt anything at all.
    That starts with knowing what’s on your network and being able to identify unexpected or unusual activity.

    “First and foremost, it’s just understanding your own attack surface; what do we have exposed? What are the possibilities for third parties or unwanted entities for accessing our environments? Knowing what those avenues are and, after they’ve been identified, securing them,” said Slowik.
    “So that combination of understanding our own networks, hardening our networks, where possible, and then looking for attempts to subvert or break into these environments. It sounds fairly basic but that’s, at least where we need to get started for defending these environments,” he added.


    Singapore issues FSI guidelines on managing remote work risks

    Singapore has released guidelines on heightened risks businesses in the financial services industry (FSI) now face as remote work practices take hold and how they can mitigate such risks. These include implementing safeguards in their outsourcing arrangements as well as security controls to combat data leaks and fraud. 
    The document aimed to outline key risks associated with a remote workforce for FSI companies and drive the adoption of good practices to manage these risks, said the Monetary Authority of Singapore (MAS) and Association of Banks in Singapore (ABS) in a joint statement Tuesday. 
    A non-profit group representing interests of the FSI, ABS currently has a membership base of 154 local and overseas banks and financial institutions with local operations. Members of its Return to Onsite Operations Taskforce (ROOTS) — specifically, its Workstream 8 team that focused on remote work — had participated in the establishment of the document, including DBS Bank, Standard Chartered Bank, Barclays Bank, Bank of China, and Bank of America. 


    “Remote working requires changes to policies and operational processes, some of which could lead to new risks and risk management challenges,” they said. With organisations expected to extend remote work arrangements and adopt hybrid work models in future, financial institutions would have to remain vigilant and take preemptive steps to manage the risks arising from this work environment.
    In particular, the document highlighted 10 key areas financial institutions should review, such as assessing changes to outsourcing and third-party vendors’ risk profiles amidst the new work environment including their remote working controls and operational resiliency. 
    “Vendors’ infrastructure and controls, including business continuity plans, may not be as robust as the financial institutions’ to allow them to fully manage remote working risks [and] this translates to heightened risks for financial institutions, especially if vendors have access to sensitive information, client data, or connectivity to the financial institutions’ systems, or provide critical services to financial institutions,” the report noted. 
    In addition, vendor services previously provided on-site at the financial institutions’ premises, such as IT development and support, would no longer be under close supervision with remote working. This could lead to higher error rates or delays in service delivery. In its place, financial institutions might conduct alternative procedures such as desktop or virtual reviews, which generally relied more on vendors’ attestations. These were less effective in detecting risk issues, including weaknesses in vendors’ infrastructure, controls, and operational resiliency. 

    Financial institutions should assess such changes and roll out safeguards and contingency plans to ensure service continuity, the document recommended. 
    Organisations also should review the risks and implications of data loss when identifying activities that could be carried out remotely, and put in preventive and detection controls to address these risks. In addition, cybersecurity controls should be in place to ensure employees’ remote working infrastructure, including personal devices, were secured. 
    “To facilitate remote working, financial institutions may have amended information governance policies to allow staff to access customer and other sensitive information when they are working remotely, [where] staff could previously only access such information within the office premises,” the report stated.
    Enabling employees to access customer and other sensitive data remotely heightened inherent risks of data leaks, for instance, through eavesdropping amongst family members, employees browsing online on corporate devices while bypassing corporate proxy or gateway, and staff forwarding sensitive data to personal devices. 
    Financial institutions should also continue to have robust technology risk management practices to manage hardware and software deployed to support large-scale remote working, MAS said. 
    Furthermore, financial institutions would need to keep updated on fraud typologies from remote work environments and roll out the necessary countermeasures, as well as implement guidelines to identify situations where in-person meetings, site visits, and verification against original documents were needed. 
    MAS’ deputy managing director of financial supervision Ong Chong Tee said: “Financial institutions in Singapore have swiftly adapted to remote working and split-team arrangements in response to COVID-19. The operational resilience of our financial institutions during this period reflects the soundness of their business continuity management plans. It also underscores the importance of regular tests through internal drills and industry-wide exercises jointly organised by the MAS and the financial industry.”


    Twitter’s new strike system will target prolific COVID-19 fake information spreaders

    Twitter is set to introduce a strike system to remove repeat spreaders of COVID-19 vaccine misinformation from the platform. 

    On Monday, Twitter said that alongside removing thousands of tweets and examining over 11.5 million accounts linked to fake information on the microblogging platform, the company will now start applying labels to tweets “that may contain misleading information about COVID-19 vaccines.”
    This system is similar to one already imposed by Facebook, which has also adopted a targeted misinformation approach based on user locations and measuring attitudes to topics including vaccinations and mask-wearing worldwide. 
    Twitter will initially rely on human reviewers to decide whether tweets violate company policy, and these assessments will then be used to train automated tools and algorithms to detect misinformation. 
    The firm intends to eventually use “both automated and human review to address content that violates our COVID-19 vaccine misinformation rules.”
    Persistent spreaders of fake COVID-19 vaccine content will receive a ‘strike’. While this won’t deter bots, Twitter hopes the system will “educate” users “on why certain content breaks our rules so they have the opportunity to further consider their behavior and their impact on the public conversation.”
    Twitter will alert users when they receive a strike, and after two, a 12-hour account lock will be applied. After three strikes, another 12-hour ban will be imposed, and after four, users will be unable to access their account for a week. 

    Five strikes or more will be punished by permanent suspension. However, users do retain the right to appeal. 
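The escalation Twitter describes can be summarised as a simple lookup. The following is an illustrative sketch of the published thresholds; the penalty labels are paraphrased from the article, not drawn from any Twitter API:

```python
def strike_penalty(strikes: int) -> str:
    """Penalty at each COVID-19 misinformation strike count, per Twitter's
    described escalation: locks at two to four strikes, suspension at five."""
    if strikes <= 1:
        return "no account-level action"  # first strike: the tweet itself is labelled
    if strikes in (2, 3):
        return "12-hour account lock"
    if strikes == 4:
        return "7-day account lock"
    return "permanent suspension"        # five strikes or more
```

For example, `strike_penalty(4)` returns `"7-day account lock"`, matching the week-long lockout described above.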
    In addition to introducing the strike system, Twitter has debuted a COVID-19 search prompt to push up results from official sources including health organizations; free non-profit advertising, and ongoing collaboration with the World Health Organization (WHO). 


    Australia's new 'hacking' powers considered too wide-ranging and coercive by OAIC

    The Office of the Australian Information Commissioner (OAIC) has labelled the powers given to two law enforcement bodies within three new computer warrants as “wide-ranging and coercive in nature”.
    The Surveillance Legislation Amendment (Identify and Disrupt) Bill 2020, if passed, would hand the Australian Federal Police (AFP) and the Australian Criminal Intelligence Commission (ACIC) the new warrants for dealing with online crime.
    The first of the warrants is a data disruption one, which according to the Bill’s explanatory memorandum, is intended to be used to prevent “continuation of criminal activity by participants, and be the safest and most expedient option where those participants are in unknown locations or acting under anonymous or false identities”.
    The second is a network activity warrant that would allow the AFP and ACIC to collect intelligence from devices that are used, or likely to be used, by those subject to the warrant.
    The last warrant is an account takeover warrant that would allow the agencies to take control of an account for the purposes of locking a person out of the account.
    “The OAIC acknowledges the importance of law enforcement agencies being authorised to respond to cyber-enabled and serious crime. However, the Bill’s proposed powers are wide-ranging and coercive in nature,” it wrote [PDF].

    It said, for example, data disruption and network activity warrants may authorise entering specified premises, removing computers or data, and intercepting communications. Network activity warrants, OAIC said, can authorise the use of surveillance devices, and both data disruption and network activity warrants may authorise the concealment of certain activities done under these warrants.
    “These powers may adversely impact the privacy of a large number of individuals, including individuals not suspected of involvement in criminal activity, and must therefore be subject to a careful and critical assessment of their necessity, reasonableness, and proportionality,” its submission to the Parliamentary Joint Committee on Intelligence and Security (PJCIS) continued.
    “Further, given the privacy impact of these law enforcement powers on a broad range of individuals and networks, they should be accompanied by appropriate privacy safeguards.”
    The OAIC believes the Bill requires further consideration to better ensure that any adverse effects on the privacy of individuals which result from these coercive powers are minimised, and that additional privacy protections are included in the primary legislation.
    It also wants the Bill amended to require issuing authorities to consider the impact of the warrants on the privacy of any individual when determining applications for data disruption warrants and network activity warrants, in addition to account takeover warrants.
    Likewise, the OAIC has asked for a limit to the number of warrant extensions that can be sought in respect of the same or substantially the same circumstances and that the issuing authority be required to consider the privacy impact on any individual arising from the extension of the warrant to ensure that the potential law enforcement benefits are necessary and proportionate to this impact.
    Elsewhere, the commissioner has asked the Bill be amended to only allow for judicial oversight and authorisation of warrants issued under it.
    The chief officer of the AFP or ACIC may apply for a network activity warrant if that officer suspects on reasonable grounds that a group of individuals constitutes a “criminal network of individuals”. The OAIC believes the Bill’s definition of a criminal network of individuals has the potential to include a significant number of individuals, including third parties not the subject or subjects of the warrant who are only incidentally connected to the subject or subjects of the warrant.
    “The seriousness of this impact upon privacy requires further mitigation with commensurate safeguards,” it said. “The OAIC recommends amending the Bill to narrow the definition of ‘criminal network of individuals’.”
    Among its recommendations is the mandate for the information within denied warrants to be destroyed, as well as a requirement on agencies to consider the utility of the collected information and take active steps to destroy it when it is no longer necessary for the purposes of criminal investigations.


    Google joins call for clarification on much of Australia's 'rushed' Online Safety Bill

    Communications Minister Paul Fletcher last week put forward Australia’s new Online Safety Bill, which the government touted would further empower the eSafety Commissioner to request the removal of harmful material from websites and social media platforms, as well as introduce minimum standards for service providers to comply with.

    The Online Safety Bill 2021 entered Parliament on Wednesday, eight business days after consultation on the draft legislation closed. Submissions made to the draft consultation are yet to be released, but Fletcher said it had received 370 submissions.
    The Bill is before the House of Representatives and was referred to the Senate Standing Committees on Environment and Communications last Thursday. Submissions to the committee close on Tuesday — three business days after it was referred — with a report from the committee due on March 11, which is two weeks after the Bill was introduced.
    The Bill contains six key priority areas: A cyberbullying scheme to remove material that is harmful to children; an adult cyber abuse scheme to remove material that seriously harms adults; an image-based abuse scheme to remove intimate images that have been shared without consent; basic online safety expectations for the eSafety Commissioner to hold services accountable; an online content scheme for the removal of “harmful” material through take-down powers; and an abhorrent violent material blocking scheme to block websites hosting abhorrent violent material.  
    The committee has made a handful of submissions to its speedy inquiry available, including from Google Australia [PDF], which re-submitted the latest copy it sent to the draft consultation, given the “abbreviated timetable for this inquiry”.
    Google raised concerns that the schemes would appear to apply to other sorts of services, such as messaging services, email, application stores, and business-to-business services that serve as providers for other hosting services.
    “Therefore, compliance with certain obligations contained within the Bill will be challenging if not impossible for Google’s Cloud business due to technical limitations on how Google can and should moderate business client content,” it wrote. “Similar challenges would exist within, for instance, app distribution platforms like Google Play. There, too, the app platform operator does not have the ability to remove individual pieces of content from within an app.”

    Among many other concerns, it has also taken issue with the Bill’s defined takedown period, which proposes to halve the current 48-hour period to 24 hours.
    It said specifying an exact turnaround time, regardless of case complexity, would provide an incentive for companies to over-remove, thereby silencing political speech and user expression.
    Electronic Frontiers Australia (EFA) is similarly concerned with the Bill, saying it was deeply troubled by the rush to accumulate new power, concentrated in few hands and subject to little oversight or review.
    “Authorities’ failure to enforce existing laws is frequently used to justify new powers that can be used ‘more efficiently’ which in practice means it will be done with less oversight and with fewer safeguards against abuse,” a submission penned by EFA board member and PivotNine founder and chief analyst Justin Warren said.
    “Power over others should be difficult to use. This difficulty provides an inbuilt safeguard against abuse which is necessary because all power is abused, sooner or later.
    “Australia is rushing to construct a system of authoritarian control over the population that should not be welcomed by a liberal democracy. It is leading Australia down a very dark path.”
    Among other recommendations, the EFA asked the Bill’s introduction be delayed until after a federal enforceable human rights framework is introduced into Australian law.
    Part of the Bill provides that the eSafety Commissioner may obtain information about the identity of an end-user of a social media service, a relevant electronic service, or designated internet service; another part also provides the commissioner with investigative powers, which include a requirement that a person provide “any documents in the possession of the person that may contain information relevant”.
    As a result, Digital Rights Watch is concerned that the commissioner’s information-gathering and investigative powers could extend to encrypted services. 
    It has asked for additional clarification of the scope of these powers, along with a clear indication that providers are not expected to comply with a notice if it would require them to decrypt private communications channels or build systemic weaknesses to comply.
    Making its views on the Bill public via its own website, Digital Rights Watch said the Bill introduces provisions for powers that are likely to undermine digital rights and exacerbate harm for vulnerable groups.
    The online content scheme, Digital Rights Watch said, is likely to cause significant harm to those who work in the sex industry, including sex workers, pornography creators, online sex-positive educators, and activists.
    The abhorrent content blocking scheme, which comes in direct response to the Christchurch terrorist attack, is considered overly simplistic by the group.
    “In some circumstances, violence captured and shared online can be of vital importance to hold those in power accountable, to shine the light on otherwise hidden human rights violations, and be the catalyst for social change,” it wrote, pointing specifically to the video of George Floyd’s death.
    “Simply blocking people from seeing violent material does not solve the underlying issues causing the violence in the first place and it can also lead to the continuation of violence behind closed doors, out of sight from those who might seek accountability. It is essential that this scheme not be used to hide state use of violence and abuses of human rights.”
    The organisation said that when automated processes such as AI are used to determine which content is or isn’t harmful, they have been shown to disproportionately remove some content over others, penalising Black, Indigenous, fat, and LGBTQ+ people. 
    “While the goal of minimising online harm for children is vital to our communities, we must acknowledge that policing the internet in such broad and simplistic ways will not guarantee us safety and will have overbroad and lasting impacts across many different spaces,” Digital Rights Watch said.
    Submissions close today and a hearing is scheduled for the committee on Friday.


    SolarWinds security fiasco may have started with simple password blunders

    We still don’t know just how bad the SolarWinds security breach is. We do know over a hundred US government agencies and companies were cracked. Microsoft president Brad Smith said, with no exaggeration, that it’s “the largest and most sophisticated attack the world has ever seen,” with more than a thousand hackers behind it. But former SolarWinds CEO Kevin Thompson says it may have all started when an intern first set an important password to “solarwinds123”. Then, adding insult to injury, the intern shared the password on GitHub.

    You can’t make this stuff up.
    Thompson told a joint US House of Representatives Oversight and Homeland Security Committees hearing that the password was “a mistake that an intern made. They violated our password policies and they posted that password on an internal, on their own private Github account. As soon as it was identified and brought to the attention of my security team, they took that down.”
    Rep. Katie Porter, Democrat from California, rejoined, “I’ve got a stronger password than ‘solarwinds123′ to stop my kids from watching too much YouTube on their iPad.”
    How long did it actually take SolarWinds to replace the lousy password? Too long. 
    While SolarWinds executives said it was fixed within days of its discovery, current SolarWinds CEO Sudhakar Ramakrishna confessed that the password had been in use since 2017. Vinoth Kumar, the security researcher who discovered the leaked password, said SolarWinds didn’t fix the issue until November 2019. 

    Almost two years is too long to leave an important password to go stale. You also have to wonder what an intern was doing setting a significant password in the first place.  
    While SolarWinds isn’t sure that this password is the hole in the dyke that Russian hackers used to flood into American systems, it’s a safe bet that a security culture that enabled such a basic mistake couldn’t have helped.
    Looking ahead, Smith suggested to the US Senate that in the future the Federal government should impose a “notification obligation on entities in the private sector.” All too often, no one knows about corporate security breaches until they’ve blown up the way SolarWinds’ failure did. Smith agreed that isn’t “a typical step when somebody comes and says, ‘Place a new law on me,'” but “I think it’s the only way we are going to protect the country.”

    In the meantime, as security company FireEye CEO Kevin Mandia said at the House hearing, “The bottom line: We may never know the full range and extent of the damage, and we may never know the full range and extent as to how the stolen information is benefiting an adversary.”
    That said, Mandia added, “I’m not convinced compliance in any standards regulation or legislation would stop Russian Foreign Intelligence Service from successfully breaching the organization.” 