More stories

  • Google issues Chrome update patching seven security vulnerabilities

    Google on Wednesday released version 90.0.4430.85 of the Chrome browser for Windows, Mac, and Linux. The release contains seven security fixes, including one for a zero-day vulnerability that was exploited in the wild. The zero-day, assigned the identifier CVE-2021-21224, was described as a “type confusion in V8”. In an advisory penned by Chrome technical program manager Srinivas Sista, five vulnerabilities were detailed: CVE-2021-21222, a heap buffer overflow in V8; CVE-2021-21223, an integer overflow in Mojo; CVE-2021-21225, an out-of-bounds memory access in V8; CVE-2021-21226, a use-after-free in navigation; and CVE-2021-21224, the type confusion in V8. “Google is aware of reports that exploits for CVE-2021-21224 exist in the wild,” he wrote. The advisory thanked five researchers for their contributions and added that Google’s own ongoing security work was responsible for a wide range of fixes.

  • Internal Facebook email reveals intent to frame data scraping as ‘normalized, broad industry issue’

    An internal email accidentally leaked by Facebook to a journalist has revealed the firm’s intentions to frame a recent data scraping incident as “normalized” and a “broad industry issue.”

    Facebook has recently been at the center of a data scraping controversy. Earlier this month, Hudson Rock researchers revealed that information belonging to roughly 533 million users had been posted online, including phone numbers, Facebook IDs, full names, and dates of birth. The social media giant confirmed the leak of the “old” data, which had been scraped in 2019. A functionality issue in the platform’s contact importer, since fixed, allowed the automated scraping to take place. The scraping and subsequent online posting of user data drew widespread criticism, and on April 14, the Irish Data Protection Commission (DPC) said it planned to launch an inquiry to ascertain whether GDPR regulations and/or the Data Protection Act 2018 had been “infringed by Facebook.” Now, an internal email leaked to the media (Dutch article, translated) has potentially revealed how Facebook wishes to handle the blowback. This month, Data News editor Pieterjan Van Leemputten sent several queries to Facebook requesting an update on the data scraping incident and further clarity concerning the breach timeline. However, Facebook accidentally included the journalist in an internal email discussion thread.

    In the original emails sent to EMEA region PR staff, viewed by ZDNet and dated from April 8, Facebook’s team outlined an overall “long-term strategy” for dealing with coverage of data scraping incidents. “Assuming press volume continues to decline, we’re not planning additional statements on this issue,” the email reads. “Longer term, though, we expect more scraping incidents and think it’s important to both frame this as a broad industry issue and normalize the fact that this activity happens regularly.”  “To do this, the team is proposing a follow-up post in the next several weeks that talks more broadly about our anti-scraping work and provides more transparency around the amount of work we’re doing in this area,” the message continues. “While this may reflect a significant volume of scraping activity, we hope this will help to normalize the fact that this activity is ongoing and avoid criticism that we aren’t being transparent about particular incidents.”  A redacted portion of the email thread is shown below. 
    [Image: a redacted portion of the internal email thread, via Pieterjan Van Leemputten]
    The thread also includes lists of existing global coverage of the story, including pieces by ZDNet, CNET, Graham Cluley, Reuters, The Guardian, and The Wall Street Journal, as well as broadcast coverage, tweets considered “notable,” and statistics on social conversation and mentions on Twitter. While describing overall coverage, the email says that publications “have offered more critical takes of Facebook’s response framing it as evasive, a deflection of blame and absent of an apology for the users impacted.” “These pieces are often driven by quotes from data experts or regulators, keen on criticizing the company’s response as insufficient or framing the company’s assertion that the information was already public as misleading,” the team added. “With regulators fully zeroed in on the issue, expect the steady drumbeat of criticism to continue in the press.” Update 13.52 BST: A Facebook spokesperson told ZDNet: “We are committed to continuing to educate users about data scraping. We understand people’s concerns, which is why we continue to strengthen our systems to make scraping from Facebook without our permission more difficult and go after the people behind it. That’s why we devote substantial resources to combat it and will continue to build out our capabilities to help stay ahead of this challenge.”

  • Lazarus hacking group now hides payloads in BMP image files

    The Lazarus group has tweaked its loader obfuscation techniques by abusing image files in a recent phishing campaign. 

    Lazarus is a state-sponsored advanced persistent threat (APT) group from North Korea. Known as one of the most prolific and sophisticated APTs out there, Lazarus has been in operation for over a decade and is considered responsible for worldwide attacks including the WannaCry ransomware outbreak, bank thefts, and assaults against cryptocurrency exchanges. South Korean organizations are consistent targets for Lazarus, although the APT has also been traced back to cyberattacks in the US and, more recently, South Africa. In a campaign documented by Malwarebytes on April 13, a phishing document attributed to Lazarus revealed the use of an interesting technique designed to obfuscate payloads in image files. The attack chain begins with a phishing Microsoft Office document (참가신청서양식.doc) and a lure in the Korean language. Intended victims are asked to enable macros in order to view the file’s content, which, in turn, triggers a malicious payload. The macro brings up a pop-up message claiming the file was created in an older version of Office; in the background, it calls an executable HTA file that has been zlib-compressed and embedded within a PNG image file.

    During decompression, the PNG is converted to the BMP format, and once triggered, the HTA drops a loader for a Remote Access Trojan (RAT), stored as “AppStore.exe” on the target machine.   “This is a clever method used by the actor to bypass security mechanisms that can detect embedded objects within images,” the researchers say. “The reason is because the document contains a PNG image that has a compressed zlib malicious object and since it’s compressed it can not be detected by static detections. Then the threat actor just used a simple conversion mechanism to decompress the malicious content.” The RAT is able to link up to a command-and-control (C2) server, receive commands, and drop shellcode. Communication between the malware and C2 is base64 encoded and encrypted using a custom encryption algorithm that has previously been linked to Lazarus’ Bistromath RAT. In related news, Google’s Threat Analysis Group (TAG) warned earlier this month that North Korean threat actors are targeting security researchers across social media. First spotted in January, the scheme now includes a web of sham profiles, browser exploits, and a fake offensive security company.
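
    Malwarebytes notes the payload evaded static detection because it sat inside the image as zlib-compressed data, only becoming readable once the macro converted the PNG into an uncompressed BMP. The researchers did not publish detection code; purely as an illustrative sketch of one way an analyst might hunt for this kind of hidden object, the snippet below brute-force scans a byte buffer for decompressible zlib streams. The file name, size threshold, and marker list are assumptions, and legitimate PNG image data is itself zlib-compressed, so any hits still need manual triage.

```python
import zlib

# Markers suggesting the decompressed blob is script or executable content rather than pixel data.
SUSPICIOUS_MARKERS = (b"<script", b"hta:application", b"CreateObject", b"MZ")

def find_hidden_zlib_payloads(data: bytes, min_size: int = 64):
    """Scan a byte buffer (e.g. an image extracted from an Office document)
    for embedded zlib streams whose decompressed content looks executable."""
    hits = []
    for i in range(len(data) - 1):
        # Common zlib stream headers: 0x78 followed by 0x01, 0x9C, or 0xDA.
        if data[i] == 0x78 and data[i + 1] in (0x01, 0x9C, 0xDA):
            try:
                blob = zlib.decompressobj().decompress(data[i:])
            except zlib.error:
                continue
            if len(blob) >= min_size and any(m in blob for m in SUSPICIOUS_MARKERS):
                hits.append((i, len(blob)))
    return hits

# Usage sketch with a hypothetical file name:
# image = open("embedded_image.png", "rb").read()
# for offset, size in find_hidden_zlib_payloads(image):
#     print(f"possible hidden payload at offset {offset} ({size} bytes decompressed)")
```

    From the attacker’s side, the PNG-to-BMP conversion described above serves the same purpose in reverse: BMP is an uncompressed format, so the conversion step itself decompresses the hidden zlib object.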

  • Multi-factor authentication: Use it for all the people that access your network, all the time

    The most common way cyber criminals break into enterprise networks is by stealing or guessing usernames and passwords. These attacks, whether the goal is stealing information, executing a ransomware attack, or any other form of cybercrime, represent a major risk to organisations of all kinds, but there’s one thing that information security teams can do to dramatically help protect the network and its users from cyber criminals. “You want to be using strong authentication for anyone that accesses your environment,” Ann Johnson, corporate vice president of security, compliance & identity business development at Microsoft, told ZDNet Security Update.

    “We know that 99% of hacks have some type of password element, however that password was stolen. Using strong authentication will at least give you a first line of defence against that,” she said, adding: “Use multi-factor authentication for 100% of the people that access your environment 100% of the time”. Providing employees with multi-factor authentication, which requires the user to confirm that it really was them who just tried to log in to their account, helps boost cybersecurity in two ways. First, it makes it a lot more difficult for a cyber criminal to break into an account, even if they know the correct username and password. Second, if multi-factor authentication blocks a login attempt that wasn’t made by the user, it’s an indication of potentially suspicious activity that can serve as an alert that cyber criminals are attempting to breach the network.
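
    The second factor in many MFA rollouts is a time-based one-time password (TOTP) generated on a phone or hardware token. As a minimal, self-contained sketch of why a stolen password alone fails that check, the snippet below derives a six-digit TOTP code following RFC 6238; this is a generic illustration, not Microsoft’s implementation, and the function name and example secret are invented for the example.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    """Derive the current time-based one-time password (RFC 6238)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // step               # 30-second time window
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Hypothetical enrolment secret; real deployments provision one per user and device.
EXAMPLE_SECRET = "JBSWY3DPEHPK3PXP"

if __name__ == "__main__":
    # Without access to the enrolled secret, a stolen password cannot produce this code.
    print("Current code:", totp(EXAMPLE_SECRET))
```

    Verification on the server side simply recomputes the same value from its copy of the secret and compares, usually tolerating one time step of clock drift, which is why knowing the username and password alone is not enough to pass the login.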

    Microsoft has previously said that multi-factor authentication works to such an extent that it prevents 99.9% of cyberattacks from breaching accounts. But cybersecurity isn’t something that should be passed on to end users: it’s important for organisations to have information security policies in place that protect people from cyberattacks in the first place. One way of doing this is by applying a least-privilege, zero-trust model to the network, providing people with the access they need to do their jobs and nothing more. That stops an attacker who takes control of a standard account from leveraging it to gain administrator privileges or move laterally into areas of the network that the employee doesn’t need for their job, but that cyber criminals could exploit. This has proved a difficult issue for many organisations over the past year as they suddenly had to adapt to employees being forced to work remotely. Many employees have found themselves in difficult circumstances, sharing networks or devices with family members, which could allow attackers onto their device without them even knowing. “Employees may be sharing their device with their child who’s doing schooling and then malware could come in that way,” said Johnson. “So having least privilege on that device and having that device not be able to do anything but the minimum for the job is incredibly important. Your end users do not need admin privilege,” she added.

  • Facebook cracks down on posts urging violence, mockery ahead of Chauvin verdict in George Floyd case

    As the verdict looms in the trial of Derek Chauvin, Facebook has outlined steps to restrict content that could lead to violence.

    Chauvin, a former Minneapolis police officer, has stood trial after being accused of having a part to play in George Floyd’s death by kneeling on his neck on May 25, 2020, an event which sparked protests worldwide. Many who participated in the protests did so as part of Black Lives Matter, a movement against police brutality and racial inequality. Closing arguments have concluded in Chauvin’s trial and the jury is now deliberating charges of third-degree murder, second-degree unintentional murder, and second-degree manslaughter. As the United States awaits the verdict, Facebook says the company is “preparing” for the aftermath, whatever the conclusion may be. “This means preventing online content from being linked to offline harm and doing our part to keep our community safe,” Monika Bickert, Vice President of Content Policy at Facebook, said in a blog post. The social media giant says its team is “working around the clock” to monitor for “potential threats” on both Facebook and Instagram, “so we can protect peaceful protests and limit content that could lead to civil unrest or violence.”

    Given how emotive this case is and the global degree of attention, Facebook has to walk a fine line between protecting free speech and not being used as a conduit for hate, incitement, or the promotion of violence. A particular area the company is focused on is any call to arms in Minneapolis, now considered a temporary “high-risk” location. If any content is found on the platform that urges violence in the area, Facebook will take it down; this includes pages, groups, events, and accounts. In addition, the social media giant says that it aims to protect Floyd’s family and Floyd’s memory by preventing abuse, including the deletion of any content that “praises, celebrates, or mocks George Floyd’s death.” Facebook has also highlighted the different levels of protection for those involved in the trial. The firm considers Chauvin a public figure for “voluntarily placing himself in the public eye,” and so “severe” attacks against him will also be removed. However, Floyd is considered an “involuntary” public figure, and so the network’s level of protection and moderation efforts are higher. “Given the risk of violence following the announcement of the verdict, regardless of what it is, we remain in close contact with local, state, and federal law enforcement,” Facebook added. “We will respond to valid legal requests and support any investigations that are in line with our policies. We know this trial has been difficult for many people. But we also realize that being able to discuss what is happening and what it means with friends and loved ones is important.”

  • Remote code execution vulnerabilities uncovered in smart air fryer

    In another example of how connectivity can impact our home security, researchers have disclosed two remote code execution (RCE) vulnerabilities in a smart air fryer.

    RCEs are often considered among the most severe types of vulnerabilities, as they allow attackers to remotely deploy code, potentially leading to the hijacking of a system, remote tampering, and the execution of additional malware payloads. While executing an RCE against a consumer product may not have the same immediate impact as doing so on a corporate network, it is still worth highlighting that just because a product in our home is considered ‘smart,’ it does not mean it is safe. On Monday, researchers from Cisco Talos revealed the discovery of two RCEs in the Cosori Smart Air Fryer, a Wi-Fi-connected kitchen product that leverages the internet to give users remote control over cooking temperature, times, and settings. However, it is the same connectivity, when coupled with security flaws, that also allows others to take control of the device. The team tested the Cosori Smart 5.8-Quart Air Fryer CS158-AF (v.1.1.0) and discovered CVE-2020-28592 and CVE-2020-28593. The first vulnerability is caused by an unauthenticated backdoor and the second by a heap-based overflow issue, both of which could be exploited via crafted traffic packets, although local access may be required for easier exploitation. The vulnerabilities have now been disclosed without any fix. According to Talos researchers, Cosori did not “respond appropriately” within the typical 90-day vulnerability disclosure period, and so, perhaps, the vendor will consider issuing a patch now that the issues are public.
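
    Talos has not published exploit details, so purely as a generic sketch of the first flaw class: an “unauthenticated backdoor” in a connected appliance usually amounts to a network command handler that never checks who is talking to it. The toy handler below, in which the device protocol, port, and command names are all invented for illustration, shows the kind of token check whose absence turns a convenience feature into a backdoor.

```python
import socket

# Hypothetical shared token; a real device should use per-device credentials over an encrypted channel.
AUTH_TOKEN = "example-device-token"

def handle_command(payload: bytes) -> str:
    """Toy command handler for a smart appliance listening on the local network.

    The unauthenticated-backdoor pattern is this same handler with the token
    check removed: any host that can reach the port can change device settings.
    """
    fields = payload.decode(errors="replace").strip().split("|")
    if len(fields) != 3 or fields[0] != AUTH_TOKEN:
        return "ERR not authorised"
    command, argument = fields[1], fields[2]
    if command == "SET_TEMP" and argument.isdigit() and 40 <= int(argument) <= 205:
        return f"OK temperature set to {argument}C"
    return "ERR unsupported command"

def serve(port: int = 41230) -> None:
    """Minimal UDP listener for the sketch above (the port number is illustrative)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(("0.0.0.0", port))
        while True:
            data, addr = sock.recvfrom(1024)
            sock.sendto(handle_command(data).encode(), addr)
```

    Talos notes that local access may make exploitation easier, which in a home setting means anything else on the same Wi-Fi network, whether a laptop, a phone, or another compromised gadget, can reach a handler like this.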

    While the idea of your cooking utensils being held to ransom by threat actors may be far-fetched, the vulnerabilities point to a far wider problem: the generally vulnerable state of Internet of Things (IoT) devices in our homes. Last week, researchers disclosed nine vulnerabilities in four TCP/IP stacks commonly used by smart devices for communication purposes that could be weaponized to remotely hijack them. The security flaws, thought to impact over 100 million consumer, enterprise, and industrial devices, may be exploited to add vulnerable products to botnets or to obtain entry into linked networks. Cosori had not responded to ZDNet’s request for comment at the time of publication.

  • Critics label data-sharing Bill as 'eroding privacy in favour of bureaucratic convenience'

    Australia’s pending data-sharing Act has been touted by the government as allowing the public service to make better use of the data it already holds, but Dr Bruce Baer Arnold from the Australian Privacy Foundation argues it does so at the cost of privacy protections.

    “The Honourable Stuart Robert has promoted the legislation as providing, ‘Strong privacy and security foundations for sharing within government’. It’s both deeply regrettable and very unsurprising that the Bills do not provide those foundations,” he told the Senate Committee probing the Data Availability and Transparency Bill 2020. “The Bill reflects the ongoing erosion of Australian privacy law in favour of bureaucratic convenience.”

    He added that he believed the Bill would obfuscate recurrent civil society requests for privacy protections.

    Also facing the committee was Jonathan Gadir from the NSW Council for Civil Liberties, who highlighted the discrepancy between the goals of the Bill and what it actually allows to occur. “The term ‘public sector data’ is really giving the impression that data contemplated by the Bill is aggregated statistics of some kind — the definition in the Bill is far broader than the goals would require, encompassing ‘all data collected, created, or held by the Commonwealth or on its behalf’,” he said. “This obviously includes detailed personal information. And this kind of information is often intimate and sensitive.”

    Such information, Gadir explained, includes details about relationships and finances, which are disclosed to Centrelink to receive a pension, or disclosed to Immigration as part of a visa application. “People are revealing most intensely intimate parts of their lives right now to Border Force as they beg for permission to be allowed to leave the country,” he said. “So the broad definition of public sector data is not really the right one for this Bill.”

    He said that if the Bill was really just about improving service delivery, informing policymaking, and allowing for research, then the definition of public sector data should reflect that. “Let’s exclude personal information from the definition of public sector data and say that it must be anonymous. Let’s also say the permitted purposes should not include making administrative decisions that will affect individuals,” he continued. “Basic fairness and civil liberties are really under threat when personal information we’re compelled to disclose to a government agency is then spread silently behind the scenes to other agencies or private companies, and is able to be used in surprising and unexpected ways.”

    Chadwick Wong, senior solicitor at the Public Interest Advocacy Centre, similarly said a fundamental reconsideration of the intention of the legislation was needed. He said the Bill seemed to be “cutting both ways”: it covered the provision of government services through the sharing of personal information to enable the “tell us once” idea, while simultaneously covering research and development, which interim National Data commissioner Deborah Anton declared would largely involve de-identified data. “That’s two entirely different purposes and you can’t, I would submit, that you can’t really capture them both in the same piece of legislation, especially if one of the proposals is de-identified data,” Wong said.

    Gadir also raised concerns that the Bill could pass before the Attorney-General completes the review of the Privacy Act 1988. “This Bill is a really big carve out from the protections of the Privacy Act, applying to a very high risk activity of data-sharing. And this is happening at the same time that another arm of the government is telling us that they want to strengthen the Privacy Act,” he said. Anton earlier stated the Privacy Act would continue to apply, saying that the scheme would not override or change any elements of that. But Gadir said Anton’s characterisation was “not correct”.

    “I think the Bill should not be passed until we’ve looked at, and ultimately, we’ve fixed, the existing weak regime,” Baer Arnold said of holding off until the Privacy Act review is complete. “This Bill is being driven by institutional imperatives, political convenience, without any regard for human rights.”

    Baer Arnold said the legislation, as currently drafted, provides very little transparency. “We’re very much relying on individual agencies doing the right thing; individual agencies may well have very different views about what’s appropriate and what’s not,” he said. “We have nice language that government agencies will be custodians.”

    Baer Arnold is fearful the current Bill, much like previous legislation he has witnessed, could become weakened even if it started out as promising. “What we see as we start off with sort of lovely motherhood statements from people like Stuart Robert, ‘it will be good, it’s in the national interest, you don’t need to worry, trust us’, and over time, we see a creep, we see an erosion,” he said. “It’s opened up to a range of bodies that we would consider to be inappropriate, it’s opened up to uses that we would consider to be inappropriate, but administratively convenient, and possibly punitive.”

    He said trust would be misplaced if people believed entities such as the Office of the Australian Information Commissioner would somehow “come to the rescue” if a breach occurs.

    Wong also shared his concern that it is unknown exactly what particularly sensitive data would be excluded from the regime. Anton earlier testified that COVIDSafe data, electoral roll data, and My Health Record data would be prohibited from sharing under the regime. Wong said that without knowing what sort of data would be excluded from the Bill, nor seeing the full suite of regulations and guidelines, it would be hard to determine whether the Bill is at odds with human rights privacy obligations. “I think what we need is the full package of proposed reforms before we’re able to comment on some of these privacy issues,” he said.

  • Commissioner content transparency measures are enough to deter data-sharing Act breaches

    The Office of the National Data Commissioner considers the measures presented in Australia’s pending Data Availability and Transparency Bill 2020, such as the requirement for transparency, to be enough to deter breaches of data.

    The data-sharing Bill is touted by the government as an opportunity to establish a new framework that can proactively assist in designing better services and policies.

    “The Bill will create a data-sharing scheme overseen by a new and independent National Data Commissioner to allow sharing for the right reasons with the right people, with appropriate controls to manage risk,” interim National Data commissioner Deborah Anton told the Senate Finance and Public Administration Legislation Committee on Tuesday. “The Bill seeks to progress a necessary set of reforms to modernise APS data-sharing practices, to set higher and consistent standards, and to add additional transparency to ensure the public know what is being done with their data.”

    The purpose test embedded in the Bill states that data can only be shared for the delivery of government services, informing policy, and progressing research. The Bill provides what the government is referring to as “layers of safeguards”, including the data sharing principles. The principles guide how risks are assessed and managed, and must be applied to each data sharing project across five dimensions: projects, people, data, settings, and outputs.

    “One of the challenges with principles-based legislation is the Bill provides signposts, not a direct roadmap,” Anton said.

    “So I think what’s always important in these circumstances is to understand, ‘What’s the scenario?’, then going through the flow chart, ‘Well, for what purpose?’, you can only do one of those three purposes and you’ve still got to then explain why that’s in the public interest to do that.”

    “You then have to go through who are we sharing with, why are we sharing them, are we sharing the minimum amount of data for the job that they’re contemplating, at the end of the day, what’s the output — a lot of this is going to be about research.”

    In order to share data, the “data custodian”, the Commonwealth body that holds the data, must be satisfied the data will be used for an appropriate reason and that there are appropriate safeguards in place. Anton said the onus is ultimately on data custodians.

    “They don’t have to share … if they don’t think this is a sensible thing to do, and they cannot manage the risks, then they can make a decision not to share and that cannot be overturned,” she continued. “I think the research sector is a little unhappy with us on that design point.”

    The purpose for which the information can be used must be set out in a publicly available data-sharing agreement. “The data-sharing agreement will provide that it cannot be used for any other purpose,” Assistant Secretary Paul Menzies-McVey added. “So there’s no real capacity for there to be a slippery slope that it was obtained for one purpose and then used for another, because it will be clear to the public that the data can’t be used for that purpose and that will be backed up by the penalties in the legislation.”

    Senators, however, are concerned that the safeguards and rules in place would only work right up until the moment there is a breach. Anton and Menzies-McVey pointed again to the penalties. “In order to use the Act, you have to meet the requirements of the Act; if you’re not meeting the requirements of the Act, then the penalties actually rebound to the original legislation under which the data was collected,” Anton explained. “The Bill itself then provides for additional penalties or gap coverage where people are simply not complying with, for example, provision of information to the commissioner.”

    There are a series of enforcement actions which Anton said could ultimately lead to suspension or cancellation of accreditation, injunctions placed on the sharing of data, as well as the seeking of civil or criminal penalties. “There is a stick to go with the permissive ‘yes, we want to share’, but there are controls at the other end,” she said.

    Menzies-McVey said the penalty for breaching the mandatory terms of the data-sharing agreement, which include the requirement to use data only for the agreed purpose, is a civil penalty of 300 penalty units, currently AU$66,600 at AU$222 per unit. There are also general penalties, including imprisonment for two years for “intentional reckless breaches”.

    The Bill and the Data Availability and Transparency (Consequential Amendments) Bill were both introduced to Parliament in December, after two years of consultation.