More stories

  • It's time to improve Linux's security

    Is Linux more secure than Windows? Sure. But that’s a very low bar. Kees Cook, a Linux security expert, Debian Linux developer, and Google Security Engineer, is well aware that Linux could be more secure. As Cook tweeted, “We need more investment in bug fixers, reviewers, testers, infrastructure builders, toolchain devs, and security devs.”


    Cook details what he means in his Google Security Blog post, “Linux Kernel Security Done Right.” Cook wrote, “the Linux kernel runs well: when driving down the highway, you’re not sprayed in the face with oil and gasoline, and you quickly get where you want to go. However, in the face of failure, the car may end up on fire, flying off a cliff.” This is true. With great power comes great responsibility. You can do almost anything with Linux, but you can also completely ruin your Linux system with a single command. And that’s just the ultra-powerful commands, which you should use only with the greatest caution. Cook is referring to the other, far less visible security problems buried deep in Linux. As Cook said, while Linux enables us to do amazing things, “What’s still missing, though, is sufficient focus to make sure that Linux fails well too. There’s a strong link between code robustness and security: making it harder for any bugs to manifest makes it harder for security flaws to manifest. But that’s not the end of the story. When flaws do manifest, it’s important to handle them effectively.” That isn’t easy, as Cook points out. Linux is written in C, which means “it will continue to have a long tail of associated problems. Linux must be designed to take proactive steps to defend itself from its own risks. Cars have seat belts not because we want to crash, but because it is guaranteed to happen sometimes.” And while, moving forward, some of Linux will be written in the far safer Rust, C will remain the foundation of Linux for at least another generation. That means, Cook continued, that “though everyone wants a safe kernel running on their computer, phone, car, or interplanetary helicopter, not everyone is in a position to do something about it. Upstream kernel developers can fix bugs but have no control over what downstream vendors incorporate into their products. 
End-users choose their products but don’t usually have control over what bugs are fixed or what kernel is used. Ultimately, vendors are responsible for keeping their product’s kernels safe.” This is difficult. Cook observed, “the stable kernel releases (“bug fixes only”) each contain close to 100 new fixes per week. Faced with this high rate of change, a vendor can choose to ignore all the fixes, pick out only ‘important’ fixes, or face the daunting task of taking everything.”

    Believe it or not, many vendors, especially in the Internet of Things (IoT), choose not to fix anything. Sure, they could do it. Several years ago, Linus Torvalds, Linux’s creator, pointed out that “in theory, open-source [IoT devices] can be patched. In practice, vendors get in the way.” With malware here, botnets there, and state attackers everywhere, vendors certainly should protect their devices, but, all too often, they don’t. As Cook remarked, “Unfortunately, this is the very common stance of vendors who see their devices as just a physical product instead of a hybrid product/service that must be regularly updated.” Linux distributors, however, aren’t as neglectful. They tend to “cherry-pick only the ‘important’ fixes. But what constitutes ‘important’ or even relevant? Just determining whether to implement a fix takes developer time.” It hasn’t helped any that Torvalds has sometimes made light of security issues. For example, in 2017, he dismissed some security developers as “f-cking morons.” He didn’t mean to put all security developers in the same basket, but his colorful language set the tone for too many Linux developers. So it was that David A. Wheeler, The Linux Foundation’s director of open-source supply chain security, said in the Report on the 2020 FOSS Contributor Survey that “it is clear from the 2020 findings that we need to take steps to improve security without overburdening contributors.” 

    In Linux distributor circles, Cook continued, “The prevailing wisdom has been to choose vulnerabilities to fix based on the Mitre Common Vulnerabilities and Exposures (CVE) list.” But this is based on the faulty assumption that “all important flaws (and therefore fixes) would have an associated CVE.” They don’t. Given the volume of flaws and their applicability to a particular system, not all security flaws have CVEs assigned, nor are they assigned in a timely manner. Evidence shows that for Linux CVEs, more than 40% had been fixed before the CVE was even assigned, with the average delay being over three months after the fix. Some fixes went years without having their security impact recognized. On top of this, product-relevant bugs may not even qualify for a CVE. Finally, upstream developers aren’t actually interested in CVE assignments; they spend their limited time actually fixing bugs. In short, if you rely on cherry-picking CVEs, you’re “all but guaranteed to miss important vulnerabilities that others are actively fixing, which is almost worse than doing nothing since it creates the illusion that security updates are being appropriately handled.” Cook continued: So what is a vendor to do? The answer is simple, if painful: continuously update to the latest kernel release, either major or stable. Tracking major releases means gaining security improvements along with bug fixes, while stable releases are bug fixes only. For example, although modern Android phones ship with kernels that are based on major releases from almost two to four years earlier, Android vendors do now, thankfully, track stable kernel releases. So even though the features being added to newer major kernels will be missing, all the latest stable kernel fixes are present. Performing continuous kernel updates (major or stable) understandably faces enormous resistance within an organization due to fear of regressions — will the update break the product? 
    The answer is usually that a vendor doesn’t know, or that the update frequency is shorter than their time needed for testing. But the problem with updating is not that the kernel might cause regressions; it’s that vendors don’t have sufficient test coverage and automation to know the answer. Testing must take priority over individual fixes. How can software vendors possibly do that? Cook considers it painful but, in the end, a “simple resource allocation problem” that “is more easily accomplished than might be imagined: downstream redundancy can be moved into greater upstream collaboration.” What does that mean? Cook explained: With vendors using old kernels and backporting existing fixes, their engineering resources are doing redundant work. For example, instead of 10 companies each assigning one engineer to backport the same fix independently, those developer hours could be shifted to upstream work where 10 separate bugs could be fixed for everyone in the Linux ecosystem. This would help address the growing backlog of bugs. Looking at just one source of potential kernel security flaws, the syzkaller dashboard shows the number of open bugs is currently approaching 900 and growing by about 100 a year, even with about 400 a year being fixed. He makes an excellent point. I know dozens of developers who spend their days porting changes from the stable kernel into their distribution-specific kernels. It’s useful work, but Cook’s right; much of it consists of duplicated effort. In addition, Cook suggests that “Beyond just squashing bugs after the fact, more focus on upstream code review will help stem the tide of their introduction in the first place, with benefits extending beyond just the immediate bugs caught. Capable code review bandwidth is a limited resource. Without enough people dedicated to upstream code review and subsystem maintenance tasks, the entire kernel development process bottlenecks.” This is a known problem. 
    One major reason why the University of Minnesota playing security games with the Linux kernel developers annoyed the programmers so much was that it wasted their time. And, as Greg Kroah-Hartman, the Linux kernel maintainer for the stable branch, tartly observed, “Linux kernel developers do not like being experimented on; we have enough real work to do.” Amen. The Linux kernel maintainers must oversee hundreds, even thousands, of code updates a week. As Cook remarked, “long-term Linux robustness depends on developers, but especially on effective kernel maintainers. … Maintainers are built not only from their depth of knowledge of a subsystem’s technology but also from their experience with the mentorship of other developers and code reviews. Training new reviewers must become the norm, motivated by making the upstream review part of the job. Today’s reviewers become tomorrow’s maintainers. If each major kernel subsystem gained four more dedicated maintainers, we could double productivity.” Besides simply adding more reviewers and maintainers, Cook also thinks “improving Linux’s development workflow is critical to expanding everyone’s ability to contribute. Linux’s ’email only’ workflow is showing its age. Still, the upstream development of more automated patch tracking, continuous integration, fuzzing, coverage, and testing will make the development process significantly more efficient.” And, as DevOps continuous integration and delivery (CI/CD) users know, shifting testing into the early stages of development is much more efficient. Cook observed, “it’s more effective to test during development. When tests are performed against unreleased kernel versions (e.g. linux-next) and reported upstream, developers get immediate feedback about bugs. Fixes can be developed before a flaw is ever actually released; it’s always easier to fix a bug earlier than later.”

    But there’s still more to be done. Cook believes we “need to proactively eliminate entire classes of flaws, so developers cannot introduce these types of bugs ever again. Why fix the same kind of security vulnerability 10 times a year when we can stop it from ever appearing again?” This is already being done in the Linux kernel. For example, “Over the last few years, various fragile language features and kernel APIs have been eliminated or replaced (e.g. VLAs, switch fallthrough, addr_limit). However, there is still plenty more work to be done. One of the most time-consuming aspects has been the refactoring involved in making these usually invasive and context-sensitive changes across Linux’s 25 million lines of code.” It’s not just the code that needs cleaning of inherent security problems. Cook wants “the compiler and toolchain … to grow more defensive features (e.g. variable zeroing, CFI, sanitizers). With the toolchain technically ‘outside’ the kernel, its development effort is often inappropriately overlooked and underinvested. Code safety burdens need to be shifted as much as possible to the toolchain, freeing humans to work in other areas. On the most progressive front, we must make sure Linux can be written in memory-safe languages like Rust.” So, what can you do to help this process? Cook proclaimed that you shouldn’t wait another minute: If you’re not using the latest kernel, you don’t have the most recently added security defenses (including bug fixes). In the face of newly discovered flaws, this leaves systems less secure than they could have been. Even when mediated by careful system design, proper threat modeling, and other standard security practices, the magnitude of risk grows quickly over time, leaving vendors to do the calculus of determining how old a kernel they can tolerate exposing users to. 
    Unless the answer is ‘just abandon our users,’ engineering resources must be focused upstream on closing the gap by continuously deploying the latest kernel release. Specifically, Cook concluded, “Based on our most conservative estimates, the Linux kernel and its toolchains are currently underinvested by at least 100 engineers, so it’s up to everyone to bring their developer talent together upstream. This is the only solution that will ensure a balance of security at reasonable long-term cost.” So are you ready for the challenge? I hope so. Linux is far too important across all of technology for us not to do our best to protect it and harden its security.

  • Bugs in Chrome's JavaScript engine can lead to powerful exploits. This project aims to stop them

    A new project hopes to beef up the security of V8, a part of the Chrome browser that most users aren’t aware of but that hackers increasingly see as a juicy target. JavaScript makes the web go around, and Google has had to patch multiple zero-day, or previously unknown, flaws in Chrome’s V8 JavaScript engine this year. In April, Google admitted a high-severity bug in V8 tracked as CVE-2021-21224 was being exploited in the wild. Chrome has over two billion users, so when zero-day exploits strike Chrome, it’s a big deal. V8, an open-source Google project, is a powerful JavaScript engine for Chrome that’s helped advance the web and web applications. V8 also powers the server-side runtime Node.js. Now Samuel Groß, a member of the Google Project Zero (GPZ) security research team, has detailed a V8 sandbox proposal to help protect the engine’s memory from nastier bugs using virtual machine and sandboxing technologies. “V8 bugs typically allow for the construction of unusually powerful exploits. Furthermore, these bugs are unlikely to be mitigated by memory safe languages or upcoming hardware-assisted security features such as MTE or CFI,” explains Groß, referring to security technologies such as Arm’s Memory Tagging Extension (MTE) and control-flow integrity (CFI) schemes like Intel’s Control-flow Enforcement Technology (CET). “As a result, V8 is especially attractive for real-world attackers.” Groß’s comments suggest that even adopting a memory-safe language like Rust — which Google has adopted for new Android code — wouldn’t immediately solve the security problems faced by V8, which is written in C++.

    He also outlines the broad design objectives but, signaling the size of the project, stresses that this sandbox project is in its infancy and that there are some big hurdles to overcome. But V8 is a Google-led open-source project, and given that V8 has been the source of security vulnerabilities in Chrome, there is a chance that the GPZ researcher’s proposal could make it across the line. The proposal addresses how browser software interacts with hardware beyond the operating system, and it aims to prevent future flaws in V8 from corrupting a computer’s memory outside of the V8 heap, corruption that would allow an attacker to execute malicious code. One consideration for the additional security protections for V8 is the impact on hardware performance. Groß estimates his proposal would cause an overhead of about “1% overall on real-world workloads”. Groß explains that the problem with V8 stems from its JIT compilers, which can be tricked into emitting machine code that corrupts memory at runtime. “Many V8 vulnerabilities exploited by real-world attackers are effectively 2nd order vulnerabilities: the root-cause is often a logic issue in one of the JIT compilers, which can then be exploited to generate vulnerable machine code (e.g. code that is missing a runtime safety check). The generated code can then in turn be exploited to cause memory corruption at runtime.” He also highlights the shortcomings of the latest security technologies, including hardware-based mitigations, that will make V8 an attractive target for years to come, which is why V8 may need a sandbox approach.
    These include:

    • The attacker has a great amount of control over the memory corruption primitive and can often turn these bugs into highly reliable and fast exploits.
    • Memory-safe languages will not protect from these issues, as they are fundamentally logic bugs.
    • Due to CPU side-channels and the potency of V8 vulnerabilities, upcoming hardware security features such as memory tagging will likely be bypassable most of the time.

    Despite downplaying the likelihood of the new V8 sandbox actually being adopted, the researcher seems upbeat about its prospects for doing its intended job by requiring an attacker to chain together two separate vulnerabilities in order to execute code of their choice. “With this sandbox, attackers are assumed to be able to corrupt memory inside the virtual memory cage arbitrarily and from multiple threads, and will now require an additional vulnerability to corrupt memory outside of it, and thus to execute arbitrary code,” he wrote.
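    Groß’s description of “2nd order” vulnerabilities can be sketched in miniature. The toy Python below (an illustration with invented names, not how V8’s compilers actually work) shows a “compiler” whose buggy range analysis emits generated code that is missing a bounds check, so the flaw only surfaces in the code the compiler produces:

```python
# Toy sketch (not V8 code) of a "2nd order" JIT bug: the root cause is a
# logic error in the compiler's range analysis, and the *generated* code
# is what ends up missing a safety check.

def compile_read(array_len, index_range):
    """Specialize an array read for indices known to lie in index_range.
    index_range is inclusive on both ends."""
    lo, hi = index_range
    # Logic bug: treats hi as an exclusive bound, so hi == array_len is
    # wrongly judged safe and the bounds check is compiled away.
    if lo >= 0 and hi <= array_len:  # correct test would be hi < array_len
        return eval("lambda a, i: a[i]")  # "fast" code: no bounds check
    return eval("lambda a, i: a[i] if 0 <= i < len(a) else None")

fast = compile_read(array_len=4, index_range=(0, 4))
# fast([10, 20, 30, 40], 4) now performs an unchecked out-of-bounds read;
# Python raises IndexError, but in C++ this is a memory corruption primitive.
```

    Note that the root cause lives in `compile_read`, not in the emitted lambda, which mirrors why such logic bugs evade memory-safe languages: the compiler itself never corrupts memory.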

  • Google Cloud Security joins Exabeam-led cybersecurity alliance

    Exabeam and seven other cybersecurity companies announced the creation of the XDR Alliance on Tuesday, touting the effort as a way to help downstream SecOps teams. Google Cloud Security, Mimecast, Netskope, SentinelOne, Armis, Expel, and ExtraHop joined Exabeam in founding the alliance centered on XDR (short for extended detection and response) framework and architecture. The companies said the end goal of the partnership is to “enable organizations everywhere to protect themselves against the growing number of cyber attacks, breaches, and intrusions” by helping security teams evolve and ensuring interoperability across the XDR security vendor solution set. The alliance will also work together on campaigns to popularize XDR and assist SecOps teams in integrating “new and evolving applications and technologies.” Gorka Sadowski, chief strategy officer at Exabeam and founder of the XDR Alliance, said the XDR Alliance “brings together the most forward thinking names in cybersecurity to collaborate on building an XDR framework that is open and will make it easier for security operations teams to protect and secure their organizations.” “History will look back and declare how well the cybersecurity industry succeeded in putting collaboration above competition to help protect our organizations and institutions,” Sadowski said. “We are at an inflection point with an extremely fragmented industry that requires all of us in the vendor community to come together to strengthen organizations’ SOCs.” The alliance created a three-tier model that focuses on the core components of the XDR technology stack. The three tiers include data sources/control points, XDR Engine, and content.

    “Data sources/control points refers to the security tooling that generates telemetry, logs and alerts, and that act as control points for response. The XDR Engine tier is the engine that ingests all the collected data and performs broad threat detection, investigation and response for SOC operations,” the alliance said in a statement. “The Content tier includes the pre-packaged content and workflows that allow security organizations to deliver on required use cases with maximum efficiency and automation.” Part of what drew the cybersecurity companies to the alliance is that each represents one of the subcategories under SecOps, which include network detection and response, security information and event management, security analytics, identity management, and more. Sunil Potti, Google Cloud VP and GM of Cloud Security, explained that security operations teams are demanding more from their tools as the threat landscape continues to grow. Organizations now need a platform to cost-effectively store and analyze all of their security data in one place and investigate and detect threats with speed and scale, Potti said, adding that enterprises now need the ability to store vast amounts of data and to analyze and correlate the data from siloed solutions in order to adequately detect and respond to emerging threats within their environments. “We are looking forward to joining the XDR Alliance to help build an inclusive and open XDR framework that gives our joint customers a pathway to the best-in-class Security Operations Centers (SOCs) in the Cloud,” Potti said. There is an XDR Alliance member application page for organizations interested in joining. Exabeam CEO Michael DeCesare added that many of the companies share customers and are looking to improve the SOC experience. The emergence of “covert AI and automated attacks” as well as other threats prompted the companies to unite, DeCesare explained.

  • Raccoon stealer-as-a-service will now try to grab your cryptocurrency

    Raccoon Stealer has been upgraded by its developer in order to steal cryptocurrency alongside financial information. 

    On Tuesday, Sophos released new research into the stealer-as-a-service, which threat actors can bolt on as an additional tool for data theft and revenue. In a new campaign tracked by the team, the malware was spread not through spam emails — the usual initial attack vector linked to Raccoon Stealer — but, instead, through droppers disguised as installers for cracked and pirated software. Samples obtained by Sophos revealed that the stealer is being bundled with malware including malicious browser extensions, cryptocurrency miners, the Djvu/Stop consumer ransomware strain, and click-fraud bots targeting YouTube sessions. Raccoon Stealer is able to monitor for and collect account credentials, cookies, website “autofill” text, and financial information that may be stored on an infected machine. However, the upgraded stealer also has a “clipper” for cryptocurrency-based theft. Wallets, and their credentials in particular, are targeted by the QuilClipper tool, as well as Steam-based transaction data. “QuilClipper steals cryptocurrency and Steam transactions by continuously monitoring the system clipboard of Windows devices it infects, watching for cryptocurrency wallet addresses and Steam trade offers by running clipboard contents through a matrix of regular expressions to identify them,” the researchers noted.
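    The clipboard-scanning technique Sophos describes boils down to pattern matching. A minimal Python sketch of the detection half (using illustrative, assumed regexes for a few common address formats, not QuilClipper’s actual matrix) might look like this:

```python
import re

# Illustrative patterns only; a real clipper's "matrix of regular
# expressions" covers many more coins plus Steam trade-offer URLs.
WALLET_PATTERNS = {
    # Legacy Bitcoin addresses: Base58, starting with 1 or 3
    "btc_legacy": re.compile(r"\b[13][a-km-zA-HJ-NP-Z1-9]{25,34}\b"),
    # Bech32 (SegWit) Bitcoin addresses
    "btc_bech32": re.compile(r"\bbc1[ac-hj-np-z02-9]{11,71}\b"),
    # Ethereum addresses: 0x followed by 40 hex digits
    "eth": re.compile(r"\b0x[a-fA-F0-9]{40}\b"),
}

def classify_clipboard(text):
    """Return the first wallet type whose pattern matches, else None.
    A clipper uses a match to decide when to swap in the attacker's
    address; a defender can run the same check to flag tampering."""
    for name, pattern in WALLET_PATTERNS.items():
        if pattern.search(text):
            return name
    return None
```

    Continuously polling the Windows clipboard and rewriting matched addresses is the other half of the attack; the same detection logic is what clipboard-guard tools use to warn users when a copied address suddenly changes.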

    The stealer operates through a Tor-based command-and-control (C2) server to handle data exfiltration and victim management. Each Raccoon executable is tied to a signature specific to each client. “If a sample of their malware shows up on VirusTotal or other malware sites, they can trace it back to the customer who may have leaked it,” Sophos says. Raccoon is offered as a stealer-for-hire, with the developers behind the malware offering their creation to other cybercriminals for a fee. In return, the malware is frequently updated. Usually found in Russian underground forums, Raccoon has also been spotted for the last few years in English-language forums, too — for as little as $75 for a weekly subscription. According to the researchers, over a six-month period, the malware was used to steal at least $13,000 in cryptocurrency from its victims, and when bundled with miners, a further $2,900 was stolen. The developer earned roughly $1,200 in subscription fees, together with a cut of their users’ proceeds. “It’s these kinds of economics that make this type of cybercrime so attractive — and pernicious,” Sophos says. “Multiplied over tens or hundreds of individual Raccoon actors, it generates a livelihood for Raccoon’s developers and a host of other supporting malicious service providers that allows them to continue to improve and expand their criminal offerings.”

  • Get a lifetime VPN subscription and 10TB of cloud backup for under $65

    These are dangerous times for our data. We not only need to protect our files from our own carelessness but also our sensitive information from being stolen online. The Lifetime Backup & Security Subscription Bundle covers all of that, so we never need to worry about it again.

    As always, we need to be careful about backing up our files, to avoid the chaos that would result from losing them. And the easier that chore is, the more likely we are to perform it. Degoo Premium: Lifetime 10TB Backup Plan not only provides high-speed data transfers with the security of 256-bit AES encryption, but it also duplicates your backup even as you are performing it, giving you twice the protection against data loss. Best of all, the generous 10TB of storage will save you from the frustration of constantly having to purge files because you’re running out of space. Degoo has a 4.4 out of 5-star rating among more than 595,000 reviews on Google Play and a rating of 4.5 out of 5 stars from 6,500 reviewers on the App Store. The second part of this bundle is KeepSolid VPN Unlimited: Lifetime Subscription (5 Devices). KeepSolid is the bestselling VPN of all time for good reason. It has no limits on speed or bandwidth and offers access to over 500 servers in more than 80 locations around the world, plus the utmost security and privacy. You get military-grade encryption, a kill switch, and a strict zero-logging policy. KeepSolid VPN is well-loved by both users and reviewers. The service has over 10 million customers worldwide; PCMag named it a Top VPN, and Laptop Review Pro awarded it “Best VPN for Laptop”. Tech.Co explains why: “From its simple interface to its genuinely practical features, VPN Unlimited has plenty to recommend it.” The services in this bundle would normally cost $3,799. For a limited time only, get The Lifetime Backup & Security Subscription Bundle for $62.99 with code ANNUAL30.


  • Supply chain attacks are getting worse, and you are not ready for them

    The European Union Agency for Cybersecurity (ENISA) has analyzed 24 recent software supply chain attacks and concluded that strong security protection is no longer enough. Recent supply chain attacks in its analysis include those through SolarWinds Orion software, CDN provider Mimecast, developer tool Codecov, and enterprise IT management firm Kaseya. ENISA focuses on Advanced Persistent Threat (APT) supply chain attacks and notes that while the code, exploits, and malware were not considered “advanced”, the planning, staging, and execution were complex tasks. It notes that 11 of the supply chain attacks were conducted by known APT groups. 

    “These distinctions are crucial to understand that an organization could be vulnerable to a supply chain attack even when its own defences are quite good and therefore the attackers are trying to explore new potential highways to infiltrate them by moving to their suppliers and making a target out of them,” ENISA notes in the report. The agency expects supply chain attacks to get a lot worse: “This is why novel protective measures to prevent and respond to potential supply chain attacks in the future while mitigating their impact need to be introduced urgently,” it said. ENISA’s analysis found that attackers focused on the suppliers’ code in about 66% of reported incidents. The same proportion of vendors were not aware of the attack before it was disclosed. 

    “This shows that organisations should focus their efforts on validating third-party code and software before using them to ensure these were not tampered with or manipulated,” ENISA said, although this is easier said than done. As the Linux Foundation highlighted in the wake of the SolarWinds disclosure, even reviewing source code – for both open source and unaudited proprietary software – probably wouldn’t have prevented that attack. ENISA is calling for coordinated action at an EU level and has outlined nine recommendations that customers and vendors should take. Recommendations for customers include:

    • identifying and documenting suppliers and service providers;
    • defining risk criteria for different types of suppliers and services, such as supplier and customer dependencies, critical software dependencies, and single points of failure;
    • monitoring supply chain risks and threats;
    • managing suppliers over the whole lifecycle of a product or service, including procedures to handle end-of-life products or components;
    • classifying assets and information shared with or accessible to suppliers, and defining relevant procedures for accessing and handling them.

    ENISA recommends that suppliers:

    • ensure that the infrastructure used to design, develop, manufacture, and deliver products, components, and services follows cybersecurity practices;
    • implement a product development, maintenance, and support process that is consistent with commonly accepted product development processes;
    • monitor security vulnerabilities reported by internal and external sources, including third-party components;
    • maintain an inventory of assets that includes patch-relevant information.

    The SolarWinds attack, for example, rattled Microsoft, whose president Brad Smith said it was the “largest and most sophisticated attack the world has ever seen” and that it probably took 1,000 engineers to pull off. 
    Alleged Russian intelligence hackers compromised SolarWinds’ software build system for Orion to plant a backdoor that was distributed as a software update to several US cybersecurity firms and multiple federal agencies. The US Department of Justice (DoJ) revealed last week that 27 districts’ Microsoft Office 365 email systems were compromised for at least six months beginning in May 2020. The rise of state-sponsored supply chain attacks and criminal ransomware attacks that combine supply chain attacks, such as the Kaseya incident, has shifted the focus of discussions between the US and Russia. US president Joe Biden last week said a major cyberattack would be the likely cause of the US entering a “real shooting war” with another superpower.

  • DeadRinger: Chinese APTs strike major telecommunications companies

    Researchers have disclosed three cyberespionage campaigns focused on compromising networks belonging to major telecommunications companies. 

    On Tuesday, Cybereason Nocturnus published a new report on the cyberattackers, believed to be working for “Chinese state interests” and clustered under the name “DeadRinger.” According to the cybersecurity firm, the “previously unidentified” campaigns are centered in Southeast Asia — and in a similar way to how attackers secured access to their victims through a centralized vendor in the cases of SolarWinds and Kaseya, this group is targeting telcos. Cybereason believes the attacks are the work of advanced persistent threat (APT) groups linked to Chinese state sponsorship due to overlaps in tactics and techniques with other known Chinese APTs. Three clusters of activity have been detected, with the oldest examples appearing to date back to 2017. The first group, believed to be operated by or under the Soft Cell APT, began its attacks in 2018. The second cluster, said to be the handiwork of Naikon, surfaced and started striking telcos in the last quarter of 2020, continuing up until now. The researchers say that Naikon may be associated with the Chinese People’s Liberation Army’s (PLA) military bureau. Cluster three has been conducting cyberattacks since 2017 and has been attributed to APT27/Emissary Panda, identified through a unique backdoor used to compromise Microsoft Exchange servers up until Q1 2021. 

    Techniques noted in the report included the exploitation of Microsoft Exchange Server vulnerabilities — long before they were made public — the deployment of the China Chopper web shell, the use of Mimikatz to harvest credentials, the creation of Cobalt Strike beacons, and backdoors to connect to a command-and-control (C2) server for data exfiltration. Cybereason says that in each attack wave, the purpose of compromising telecommunications firms was to “facilitate cyber espionage by collecting sensitive information, compromising high-profile business assets such as the billing servers that contain Call Detail Record (CDR) data, as well as key network components such as the domain controllers, web servers and Microsoft Exchange servers.” In some cases, the groups overlapped and were found in the same target environments, and on the same endpoints, at the same time. However, it is not possible to say definitively whether they were working independently or are all under the instruction of another, central group. “Whether these clusters are in fact interconnected or operated independently from each other is not entirely clear at the time of writing this report,” the researchers say. “We offered several hypotheses that can account for these overlaps, hoping that as time goes by more information will be made available to us and to other researchers that will help to shed light on this conundrum.”


    Auditor finds WA Police accessed SafeWA data 3 times and the app was flawed at launch

The Auditor-General of Western Australia has handed down her report into the state's COVID-19 check-in app, SafeWA, revealing that not only did police access its data, but the app had a number of flaws when it was released.

WA Health delivered the SafeWA app in November 2020 to carry out COVID contact tracing. In its report [PDF], the Office of the Auditor-General (OAG) said it was concerned about the use of personal information collected through SafeWA for purposes other than COVID contact tracing.

In mid-June, the WA government introduced legislation to keep SafeWA information away from law enforcement authorities after it was revealed the police force used it to investigate "two serious crimes". The public messaging around the app was that it would be used only for COVID contact tracing purposes.

See also: Australia's cops need reminding that chasing criminals isn't society's only need

"In March 2021, in response to our audit questioning around data access and usage, WA Health revealed it had received requests and policing orders under the Criminal Investigation Act 2006 to produce SafeWA data to the WA Police Force," the report said.

The WA Police Force ordered access to the data on six occasions and requested access on one occasion. The orders were issued by Justices of the Peace after application by the WA Police Force.

The WA Police Force was granted orders to access SafeWA data for matters under investigation, including an assault that resulted in a laceration to the lip, a stabbing, a murder investigation, and a potential quarantine breach.

The OAG said WA Health ultimately provided access in response to three of the orders before the passage of the legislation. Applications made to WA Health on December 14, December 24, and March 10 were provided to the cops; applications on February 24, April 1, May 7, and May 27 were not.

The SafeWA Privacy Policy, which users are required to agree to prior to use, details that WA Health collects, processes, holds, discloses, and uses personal information of people who access and use the SafeWA mobile application. The OAG said it also states that information on individuals may be disclosed to other entities such as law enforcement, courts, tribunals, or other relevant entities.

The information that SafeWA captures includes sensitive personal information such as name, email address, phone number, venue or event visited, time and date, and information about the device used to check in. As of 31 May 2021, over 1.9 million individuals and 98,569 venues were registered in the SafeWA application. The total number of check-in scans between December 2020 and May 2021 exceeded 217 million.

In addition to police accessing contact tracing data, shortly after the initial release of SafeWA, the app suffered a system outage due to poor management of changes, with the OAG saying this put the availability of SafeWA at risk.

"WA Health has addressed this risk and continues to manage the vendor contract, which has required changes as the state's strategy on the use of SafeWA has evolved," the report said.

The app was delivered by GenVis and is hosted in the Amazon Web Services (AWS) cloud. The total contract value was initially AU$3 million, but it has since risen to AU$6.1 million over three years.
GenVis said it has processes in place to delete check-in data 28 days after collection. Should a member of the public test positive for COVID-19 or qualify as a close contact, WA Health may store a subset of the data relevant to that case indefinitely. The OAG said this is contrary to WA Health's logging and monitoring standard, which requires retention for at least seven years and, where possible, for the lifecycle of the system.

Of further concern to the OAG was that WA Health does not monitor SafeWA access logs to identify unauthorised or inappropriate access to SafeWA information. The OAG also raised issues with WA Health and GenVis' ability to only request, not enforce, that AWS not transfer, store, or process data outside Australia.

WA Health uses provider-managed encryption keys for SafeWA, which are stored in the AWS database, instead of self-managed keys where the cloud provider has no visibility or access to them.

"WA Health advised us that the current solution is required so that AWS can access keys through software to perform platform maintenance and support the vendor with technical issues," the report said. "Although the likelihood is low, the cloud provider could be required to disclose SafeWA information to overseas authorities as it is subject to those laws."

See also: Attorney-General urged to produce facts on US law enforcement access to COVIDSafe

Prior to going live, WA Health identified that SafeWA registration could be completed with an incorrect number or someone else's phone number, the OAG added.

"This was because SafeWA did not fully verify a user's phone number during the registration process," it said. "Due to the timing of SafeWA development and WA Health's need to balance risk with implementation, this issue was only partially resolved prior to going live.
The remaining weaknesses could be exploited to register fake accounts and check-ins." The issue was resolved in February.

It was not just the cops that may have accessed contact tracing data, however, with the OAG noting it was also concerned about the limited communication around WA Health's use of personal information collected by other government entities, including Transperth SmartRider, Police G2G border crossing pass data, and CCTV footage, in its contact tracing efforts.

During the audit, the OAG also identified that WA Health's Mothership and Salesforce-based Public Health COVID Unified System (PHOCUS) access SafeWA data.

"When WA Health receives confirmation of a positive COVID-19 case from a pathology clinic, it uses PHOCUS to collate data relevant to the case from several sources," the report says. "WA Health has not provided enough information to the community about other personal information it accesses to assist its contact tracing efforts."

The Mothership contact tracing application, the OAG said, has security weaknesses, including a weak password policy and inconsistent use of multi-factor authentication. The OAG is preparing a separate report focused on the Mothership and PHOCUS.
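The 28-day deletion window GenVis describes for check-in records is a common age-based retention pattern. A minimal sketch of such a purge in Python follows; the record schema, field names, and function are hypothetical illustrations, not GenVis's actual implementation.

```python
from datetime import datetime, timedelta, timezone

# Assumed retention window, per the 28-day deletion process described above.
RETENTION = timedelta(days=28)

def purge_expired(check_ins, now=None):
    """Return only the check-in records still inside the retention window.

    check_ins: list of dicts with a timezone-aware 'collected_at'
    datetime (hypothetical schema). Records older than RETENTION
    relative to `now` are dropped.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - RETENTION
    return [r for r in check_ins if r["collected_at"] >= cutoff]
```

A production system would run this as a scheduled job against the datastore (or lean on a built-in TTL feature), and would carve out the exception the report describes: records tied to a positive case or close contact are retained separately rather than purged.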