More stories

  • VM escape and root access bugs fixed in Cisco NFV infrastructure software

    Written by Chris Duckett, APAC Editor

    Chris started his journalistic adventure in 2006 as the Editor of Builder AU after originally joining CBS as a programmer. After a Canadian sojourn, he returned in 2011 as the Editor of TechRepublic Australia, and is now the Australian Editor of ZDNet.

    Image: Thomas Jensen/Unsplash
    Cisco has released patches for a trio of bugs in its Enterprise NFV Infrastructure Software that could allow escaping from virtual machines, running commands as root, and leaking system data.

    Leading the way with a CVSS score of 9.9 is CVE-2022-20777, a bug in the next-generation input/output feature that allows an authenticated remote attacker to jump out of the guest VM and run commands as root on the host machine via an API call. Cisco points out that such access could completely compromise the host.

    For unauthenticated remote attackers, CVE-2022-20779, with a CVSS score of 8.8, allows root commands to be run if an administrator can be convinced to install a VM image with crafted metadata that executes the commands when the VM is registered.

    Rounding out the trio is CVE-2022-20780, with a CVSS score of 7.4, which exists in an XML parser and could leak system data. “An attacker could exploit this vulnerability by persuading an administrator to import a crafted file that will read data from the host and write it to any configured VM,” Cisco said. “A successful exploit could allow the attacker to access system information from the host, such as files containing user data, on any configured VM.”

    Cisco has been under the pump on the security front in the past month, with 64 vulnerabilities either appearing or being updated since April 13. Of that number, a vulnerability in the Cisco Wireless LAN Controller scored a perfect CVSS score of 10 because an attacker could bypass password validation. “An attacker could exploit this vulnerability by logging in to an affected device with crafted credentials,” the company said. “A successful exploit could allow the attacker to bypass authentication and log in to the device as an administrator. The attacker could obtain privileges that are the same level as an administrative user but it depends on the crafted credentials.” To be vulnerable, devices needed to have the MAC filter RADIUS compatibility option set to Other.

    At the same time, Cisco said it had conducted tests with customers on predictive models related to network issues. “Cisco predictive networks work by gathering data from a myriad of telemetry sources. Once integrated, it learns the patterns using a variety of models and begins to predict user experience issues, providing problem solving options,” the company said. “Customers can decide how far and wide they want to connect the engine throughout the network, giving them flexible options to expand as they need.”

  • How the EPL tackles piracy and stops people going around the wall

    Written by Aimee Chanthadavong, Senior Journalist

    Since completing a degree in journalism, Aimee has had her fair share of covering various topics, including business, retail, manufacturing, and travel. She continues to expand her repertoire as a tech journalist with ZDNet.

    Image: Ben Stansall/Getty Images
    When a majority of the English Premier League’s income comes from exclusive broadcast deals, it makes sense why the football organisation is committed to cracking down on piracy globally.

    Speaking to ZDNet, Premier League chief legal counsel Kevin Plumb said that while anti-piracy work has been on the company’s agenda for a long time, making it a priority started with former executive chairman Richard Scudamore, “who really prioritised it alongside broadcast sales because he saw it as two sides of the same coin”.

    “We know it’s a problem in every territory — not just for sports or the Premier League, it’s for movies, it’s TV shows … and that’s one of the reasons why we opened an office in [Singapore three years ago]. We are pretty loud and proud about our anti-piracy work,” Plumb said. “Back in the day, it used to be a ‘non-secret’ and something we did in the background … but now we’re right at the cutting edge of anti-piracy work and we want to show our broadcasters and our fans that as well.”

    In fact, Plumb reckons all the anti-piracy work is having a significant impact, pointing out that the company’s revenue from international broadcasting deals will be up by 30% for 2022-25. Based on reports earlier this year by The Times, international deals will be worth £5.3 billion, while domestic rights will bring in £5.1 billion, with commercial contracts taking the total to £10.5 billion.

    While there are plenty of reasons for the revenue bump, Plumb believes the company’s anti-piracy work is a contributing factor. “We can comfortably say our anti-piracy work will be one of those factors because if we weren’t so committed, if we weren’t having the impact that I think we are having — and particularly in this part of the world — I think we’ve managed to be quite influential, working with other rights owners as well,” he said. “It’s kind of turning the ship around and sort of getting the momentum back in favour of the rights owner. I think if we just sort of left the situation alone, I’m not sure we would be in a position where we’re as happy with the rights sales we have.”

    According to Plumb, the company’s anti-piracy program is shaped by four pillars: legal action, blocking, lobbying, and education and awareness. He detailed that blocking, for instance, is a method designed to minimise the supply of pirated content. It involves working with vendors to help remove pirate content from search results to make it harder for casual users to locate, as well as tracking down ads on pirate sites to “starve the revenue stream”.

    “What we look at is the whole journey from logging onto the computer or turning on the smart TV to access a pirate stream, and we try to disrupt every part of that journey to make it as difficult as possible for someone to access the stream,” Plumb said. “We try to put as many hurdles up as possible because we find that if you put up one hurdle, that dissuades 100 people from carrying on that journey. If you put two, that’s 500 people.”

    The Premier League has also been working with local law enforcement globally to ensure that legal action can be taken against those who are supplying pirate services. For instance, in Singapore and Malaysia, the company secured legal precedent that the sale of Kodi media boxes and the use of them to access pirate content is a criminal offence.

    “In Singapore three years ago, when we first came out here, it was really easy to buy these [Kodi] devices in the shops. That process was a pleasant purchasing experience — you bought it from a nice shop, there’ll be a nice salesperson to show you a nice box with nice branding, and it’s all boxed beautifully,” Plumb said. “So, a lot of our emphasis has been trying to stop those shops from selling them and getting them off the streets … that’s why we’ve established that it’s a criminal act now to sell those boxes.

    “We now routinely sweep those shops, and we’ll do undercover purchases and then we follow it up with legal letters. We’ve reduced the number of those shops by 80% in the last few years.”

    Meanwhile, in Thailand, Plumb said the Premier League works closely with the Department of Special Investigation to ensure criminal raids are carried out or that local law enforcement turns up at the doorsteps of pirates for a “knock and talk”. But not every country’s legislation is up to scratch when it comes to piracy, conceded Plumb.

    “We do lots of lobbying work because … we always want legislation to be clear and we’d always want legislation to move with the technology because that is one of the challenges. You have pirates who are really quick, and you’ve got law and legal process which can be deathly slow. How you fit those two bits together is one of our biggest challenges,” he said.

    Plumb also acknowledged that even though the sale of Kodi media devices may be slowly disappearing from physical store fronts, pirates are likely to sell them through other channels. “What we now expect is that those shops move online, therefore we have to be ready for that — we are sweeping auction sites and Lazada. We’ve removed a few thousand listings from Lazada in the last year,” he said. “And then where do they move then? They move to their own websites, maybe they set up a Facebook profile, so we sweep Facebook and we take them down from Facebook. We always have to be aware of their next step and that does mean we’ll be doing this for a long time.”

    At the end of the day, though, all the anti-piracy work is designed to protect the fans, Plumb said. “In this part of the world, people are getting up at silly o’clock in the morning to watch their teams play — teams they may have never seen in person — but who they are absolutely fervent fans of … so it’s really important that we protect those people.”

  • Kubernetes 1.24 Stargazer: An exceptional release with two major changes

    Kubernetes, everyone’s favorite container orchestrator, has made two major changes in its latest release, Kubernetes 1.24 Stargazer: the developers dropped support for the Docker Engine container runtime and added supply chain security via Sigstore.

    First, don’t start hyperventilating because dockershim has been deprecated. Dockershim enabled you to use the Docker Engine runtime within Kubernetes, but Docker Engine was never designed to be embedded inside Kubernetes, and it’s incompatible with Kubernetes’ Container Runtime Interface (CRI). Dockershim existed to bridge that gap. Maintaining dockershim, however, was a pain, so Kubernetes started deprecating it. As Kat Cosgrove, a Pulumi Developer Advocate and Cloud Native Computing Foundation (CNCF) Ambassador, explained, in Kubernetes’ early days, “We only supported one container runtime. That runtime was Docker Engine. Back then, there weren’t really a lot of other options out there and Docker was the dominant tool for working with containers, so this was not a controversial choice.”

    But Kubernetes users wanted more runtime choices. They got that with CRI, but the Docker Engine was not CRI-compatible. The fix, dockershim, filled in the gaps between Docker Engine and CRI. “However,” Cosgrove continued, “this little software shim was never intended to be a permanent solution. Over the course of years, its existence has introduced a lot of unnecessary complexity to the kubelet itself. Some integrations are inconsistently implemented for Docker because of this shim, resulting in an increased burden on maintainers, and maintaining vendor-specific code is not in line with our open-source philosophy.”

    Unfortunately, Cosgrove admits, the Kubernetes developer community did a poor job of communicating what it was doing by removing dockershim. It also doesn’t help that when we say “Docker,” we might mean the container image, Docker the company, or the Docker runtime. By removing dockershim, we’re referring only to the runtime. Docker containers still run just fine on Kubernetes. As Cosgrove concluded, “Docker is not going away, either as a tool or as a company.” Still, “removing dockershim from kubelet is ultimately good for the community, the ecosystem, the project, and open source at large.”

    But if you really want to stick with the Docker Engine, you can, even if Kubernetes no longer natively supports it. Mirantis, which now owns the Docker program, will continue to support dockershim in Docker Engine and Mirantis Container Runtime with Kubernetes. This new dockershim program, cri-dockerd, provides a shim for Docker Engine that enables you to control Docker via the Kubernetes CRI. You can also, of course, switch to one of the supported Kubernetes runtimes, such as containerd v1.6.4 and later, v1.5.11 and later, or CRI-O 1.24 and later. For more on making sure your Kubernetes clusters are ready for the change, see Is Your Cluster Ready for v1.24?
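    The architectural role dockershim played can be sketched as a classic adapter. This is a hypothetical toy model: the real CRI is a gRPC API, and the class and method names below are invented for illustration only.

```python
class CRIRuntime:
    """Toy stand-in for the Container Runtime Interface the kubelet speaks.
    (Assumption: the real CRI is a gRPC API, not a Python class.)"""
    def run_pod_sandbox(self, name: str) -> str:
        raise NotImplementedError

class DockerEngine:
    """Docker Engine exposes its own native API, which is not CRI."""
    def create_container(self, name: str) -> str:
        return f"docker-container:{name}"

class DockerShim(CRIRuntime):
    """The shim translates CRI calls into Docker's native API -- the extra
    maintenance-heavy layer that Kubernetes 1.24 finally removed."""
    def __init__(self, engine: DockerEngine):
        self.engine = engine
    def run_pod_sandbox(self, name: str) -> str:
        return self.engine.create_container(name)

class Containerd(CRIRuntime):
    """A CRI-native runtime needs no adapter at all."""
    def run_pod_sandbox(self, name: str) -> str:
        return f"containerd-sandbox:{name}"

# The kubelet only ever sees the CRI interface; swapping runtimes is a
# matter of which implementation is plugged in.
kubelet_runtime: CRIRuntime = DockerShim(DockerEngine())
print(kubelet_runtime.run_pod_sandbox("web"))  # → docker-container:web
```

    Dropping the shim means the kubelet keeps only CRI-native implementations, which is the maintenance win Cosgrove describes.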
    In another major development, Kubernetes now supports encrypted software artifact signing to improve its software supply chain security. According to founding Sigstore developer Dan Lorenc, Sigstore certificates enable Kubernetes users to verify the authenticity and integrity of the distribution they’re using by “giving users the ability to verify signatures and have greater confidence in the origin of each and every deployed Kubernetes binary, source code bundle, and container image.”

    The Kubernetes programmers began working on Supply chain Levels for Software Artifacts (SLSA, pronounced ‘salsa’) compliance to improve Kubernetes software supply chain security in 2021. SLSA is a security framework that includes a checklist of standards and controls to prevent tampering, improve integrity, and secure the packages and infrastructure of software projects. The Sigstore program, which is SLSA Level 2 compliant, is a major step forward for Kubernetes security. It improves software supply chain security by making it easy to cryptographically sign release files, container images, and binaries. Once signed, the signing record is kept in a tamper-proof public log. This gives software artifacts a safer chain of custody that can be securely traced back to their source.

    Kubernetes 1.24 brings other improvements as well. For example, new beta application programming interfaces (APIs) will no longer be enabled in clusters by default. However, existing beta APIs and new versions of them will continue to be enabled by default. In another API change, Kubernetes 1.24 offers beta support for publishing its APIs in the OpenAPI v3 format. There have also been storage and volume changes. Storage capacity tracking now supports exposing currently available storage capacity via CSIStorageCapacity objects and enhances scheduling of pods that use Container Storage Interface (CSI) volumes with late binding.
    In the meantime, you can resize existing persistent volumes with volume expansion. Work is also underway to migrate the internals of in-tree storage plugins to call out to CSI plugins while maintaining the original API. So far, the Azure Disk and OpenStack Cinder plugins have both been migrated.

    Finally, while there are many other changes and improvements, I particularly bring your attention to Kubernetes 1.24’s new optional networking feature, which lets you soft-reserve a range for static IP address assignments to Services. With this feature manually enabled, the cluster will prefer automatic assignment from the pool of Service IP addresses outside the reserved range, thus reducing the risk of collision. I like this feature a lot. A Service ClusterIP can be assigned either:

    • dynamically, which means the cluster will automatically pick a free IP within the configured Service IP range; or
    • statically, which means the user will set one IP within the configured Service IP range.

    Service ClusterIPs are unique; hence, trying to create a Service with a ClusterIP that has already been allocated will return an error. This makes avoiding otherwise simple-to-make networking errors much simpler.

    Usually, companies take their time about moving to a new Kubernetes release. For Stargazer, however, I suggest you consider making an exception. It’s an exceptional release.
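    The reserve-a-static-band idea can be sketched as a toy allocator. This is only an illustration of the concept, not the actual Kubernetes allocator; the CIDR and band size below are made-up values.

```python
import ipaddress

class ServiceIPAllocator:
    """Toy model of Service ClusterIP allocation: the low end of the range
    is soft-reserved for static assignment, and dynamic requests are served
    from the rest of the pool, reducing the risk of collisions.
    (Assumption: simplified stand-in for the real Kubernetes allocator.)"""

    def __init__(self, cidr: str, static_band: int):
        hosts = list(ipaddress.ip_network(cidr).hosts())
        self.static_band = set(hosts[:static_band])    # preferred for static IPs
        self.dynamic_band = hosts[static_band:]        # preferred for dynamic IPs
        self.allocated = set()

    def assign_static(self, ip: str):
        addr = ipaddress.ip_address(ip)
        if addr in self.allocated:
            raise ValueError(f"ClusterIP {addr} already allocated")
        self.allocated.add(addr)
        return addr

    def assign_dynamic(self):
        # Skip the soft-reserved band, so dynamic picks never collide
        # with IPs an operator intends to assign by hand.
        for addr in self.dynamic_band:
            if addr not in self.allocated:
                self.allocated.add(addr)
                return addr
        raise RuntimeError("Service IP range exhausted")

alloc = ServiceIPAllocator("10.96.0.0/28", static_band=4)
static_ip = alloc.assign_static("10.96.0.1")  # user-chosen IP in the band
dynamic_ip = alloc.assign_dynamic()           # cluster-chosen, outside the band
print(static_ip, dynamic_ip)                  # → 10.96.0.1 10.96.0.5
```

    Re-requesting an already-allocated static IP raises an error, mirroring the uniqueness guarantee the release notes describe.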

  • GitHub launches new 2FA mandates for code developers, contributors

    GitHub is introducing new rules surrounding developers and two-factor authentication (2FA) security.

    On Wednesday, the Microsoft-owned code repository said that changes will be made to existing authentication rules as “part of a platform-wide effort to secure the software ecosystem through improving account security.” According to Mike Hanley, GitHub’s Chief Security Officer (CSO), GitHub will require any developer contributing code to the platform to enable at least one form of 2FA by the end of 2023.

    Open source projects are popular and widely used, valuable resources for individuals and the enterprise alike. However, if a threat actor compromises a developer’s account, this could lead to hijacked repos, data theft, and project disruption. Cloud platform provider Heroku, owned by Salesforce, disclosed a security incident in April. A subset of its private git repositories was compromised following the theft of OAuth tokens, potentially leading to unauthorized access to customer repos.

    GitHub says the software supply chain “starts with the developer,” and has been tightening up its controls with this in mind — noting that developer accounts are “frequent targets for social engineering and account takeover.” Recently, the issue of malicious packages being uploaded to GitHub’s npm registry has also brought software supply chain security to the forefront. In many cases, it isn’t a zero-day vulnerability that causes the collapse of open source projects or gives developers sleepless nights. Instead, it’s fundamental weaknesses — such as weak password credentials or stolen information — that cyberattackers exploit.

    However, the code repository has also acknowledged that there can be a trade-off between security and user experience. So, the 2023 deadline will also give the organization time to “optimize” the GitHub domain before the rules are set in stone. “Developers everywhere can expect more options for secure authentication and account recovery, along with improvements that help prevent and recover from account compromise,” Hanley commented.
    For GitHub, 2FA implementation may be becoming a pressing issue, with only 16.5% of active GitHub users and 6.44% of npm users adopting at least one form of 2FA. GitHub has already deprecated basic authentication, using usernames and passwords only, in favor of integrating OAuth or access tokens. The organization has also introduced email-based device verification for when 2FA has not been enabled.

    The current plan is to continue a mandatory 2FA rollout on npm, moving from the top 100 packages to the top 500, and then to those with over 500 dependents or one million weekly downloads. The lessons learned from this testbed will then be applied to GitHub. “While we are investing deeply across our platform and the broader industry to improve the overall security of the software supply chain, the value of that investment is fundamentally limited if we do not address the ongoing risk of account compromise,” Hanley said. “Our response to this challenge continues today with our commitment to drive improved supply chain security through safe practices for individual developers.”

    In April, GitHub introduced a new scanning feature to protect developers and stop them from accidentally leaking secrets. The enterprise user feature is an optional check for developers to enable for use during workflows and before a git push is launched.
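    The most common form of 2FA GitHub accepts is a time-based one-time password (TOTP) from an authenticator app. The standard algorithm (RFC 6238, built on RFC 4226) fits in a few lines of stdlib Python; the sketch below reproduces an official RFC test vector.

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, for_time: int, digits: int = 6, step: int = 30) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over a 30-second time counter,
    dynamically truncated (RFC 4226) to a short numeric code."""
    counter = struct.pack(">Q", for_time // step)          # 8-byte big-endian
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                             # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vectors (ASCII secret "12345678901234567890")
print(totp(b"12345678901234567890", 59, digits=8))          # → 94287082
print(totp(b"12345678901234567890", 1111111109, digits=8))  # → 07081804
```

    Because the code changes every 30 seconds and is derived from a shared secret, a stolen password alone is no longer enough to take over an account, which is the risk the rollout targets.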

  • Kubernetes taps Sigstore to thwart open-source software supply chain attacks

    Container orchestrator Kubernetes will now include cryptographically signed certificates, using the Sigstore project created last year by the Linux Foundation, Google, Red Hat and Purdue University, in a bid to protect against supply chain attacks. The Sigstore certificates are being used in the just-released Kubernetes version 1.24 and all future releases.

    According to founding Sigstore developer Dan Lorenc, a former member of Google’s open source security team, the use of Sigstore certificates allows Kubernetes users to verify the authenticity and integrity of the distribution they’re using by “giving users the ability to verify signatures and have greater confidence in the origin of each and every deployed Kubernetes binary, source code bundle and container image.” It’s one step forward for open source software development in the battle against software supply chain attacks.

    The Linux Foundation announced the Sigstore project in March 2021. The new Alpha-Omega open-source supply chain security project, which is backed by Google and Microsoft, also uses Sigstore certificates. Google’s open source security team announced the Sigstore-related project Cosign in May 2021 to simplify signing and verifying container images, as well as the Rekor ‘tamper resistant’ ledger, which lets software maintainers and build systems record signed metadata to an “immutable record”.

    According to Lorenc, the Kubernetes release team’s adoption of Sigstore is part of its work on Supply chain Levels for Software Artifacts, or SLSA — a framework developed by Google for internally protecting its software supply chain that’s now a 3-level specification being shaped by Google, Intel, the Linux Foundation and others. Kubernetes achieved SLSA Level 1 compliance in version 1.23. “Sigstore was a key project in achieving SLSA level 2 status and getting a headstart towards achieving SLSA level 3 compliance, which the Kubernetes community expects to reach this August,” says Lorenc.
    Lorenc tells ZDNet that Kubernetes’ adoption of Sigstore is a major step forward for the project because it has about 5.6 million users. The Sigstore project is also approaching Python developers with a new tool for signing Python packages, as well as major package repositories such as Maven Central and RubyGems. Projects like Kubernetes serve as critical focal points, he says: they help draw attention, take a large amount of work, and have an outsized impact on the entire supply chain.

    These efforts coincide with new projects like the Package Analysis Project, an initiative by Google and the Linux Foundation’s Open Source Security Foundation (OpenSSF) to identify malicious packages for popular languages like Python and JavaScript. Malicious packages are regularly uploaded to popular repositories despite their best efforts, with sometimes devastating consequences for users, according to Google.
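    The integrity half of what Sigstore automates can be shown with a small sketch. This is a toy model under stated assumptions: real Kubernetes release verification uses the `cosign` CLI against Sigstore’s transparency log, and the artifact bytes below are made up.

```python
import hashlib

def verify_artifact(data: bytes, expected_sha256: str) -> bool:
    # Recompute the artifact's SHA-256 and compare it with the published
    # digest. Sigstore adds the authenticity half on top: digests like
    # this are signed, and the signatures are recorded in a tamper-evident
    # public transparency log (Rekor).
    return hashlib.sha256(data).hexdigest() == expected_sha256

artifact = b"pretend-kubernetes-release-binary"  # stand-in, not a real release
published = hashlib.sha256(artifact).hexdigest()

print(verify_artifact(artifact, published))                 # → True
print(verify_artifact(artifact + b"!tampered", published))  # → False
```

    A checksum alone only proves the bytes match the digest you fetched; signing the digest and logging the signature is what lets users trace an artifact back to the release team, which is the guarantee Lorenc describes.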

  • This sneaky hacking group hid inside networks for 18 months without being detected

    A previously undisclosed cyber-espionage group is using clever techniques to breach corporate networks and steal information related to mergers, acquisitions and other large financial transactions – and they’ve been able to remain undetected by victims for periods of more than 18 months. Detailed by cybersecurity researchers at Mandiant, who’ve named it UNC3524, the hacking operation has been active since at least December 2019 and uses a range of advanced methods to infiltrate and maintain persistence on compromised networks that set it apart from most other hacking groups. These methods include the ability to immediately re-infect environments after access is removed. It’s currently unknown how initial access is achieved.  


    One of the reasons UNC3524 is so successful at maintaining persistence on networks for such a long time is that it installs backdoors on applications and services that don’t support security tools, such as anti-virus or endpoint protection.

    The attacks also exploit vulnerabilities in Internet of Things (IoT) products, including conference-room cameras, to deploy a backdoor on devices that ropes them into a botnet that can be used for lateral movement across networks, providing access to servers. From here, the attackers can gain a foothold in Windows networks, deploying malware that leaves almost no traces behind at all, while also exploiting built-in Windows protocols, all of which helps the group gain access to privileged credentials for the victim’s Microsoft Office 365 mail environment and Microsoft Exchange Servers.

    This combination of unmonitored IoT devices, stealthy malware and exploitation of legitimate Windows protocols that can pass for regular traffic means UNC3524 is difficult to detect — and it’s also why those behind the attacks have been able to remain on victim networks for significant periods of time without being spotted. “By targeting trusted systems within victim environments that do not support any type of security tooling, UNC3524 was able to remain undetected in victim environments for at least 18 months,” wrote researchers at Mandiant.

    And if their access to Windows was somehow removed, the attackers almost immediately got back in to continue the espionage and data-theft campaign. UNC3524 focuses heavily on the emails of employees who work on corporate development, mergers and acquisitions, as well as large corporate transactions. While this might suggest a financial motivation for the attacks, the dwell time of months or even years inside networks leads researchers to believe the real motivation is espionage.
    Mandiant researchers say that some of the techniques used by UNC3524 once inside networks overlap with those of Russian-based cyber-espionage groups, including APT28 (Fancy Bear) and APT29 (Cozy Bear). However, they also note that they currently “cannot conclusively link UNC3524 to an existing group”, but emphasise that UNC3524 is an advanced espionage campaign that demonstrates a rarely seen level of sophistication. “Throughout their operations, the threat actor demonstrated sophisticated operational security that we see only a small number of threat actors demonstrate,” they said.

    One of the reasons UNC3524 is so powerful is its ability to remain undetected by exploiting lesser-monitored tools and software. Researchers suggest the best opportunity for detection remains network-based logging. In addition, because the attacks exploit unsecured and unmonitored IoT devices and systems, it is suggested that “organisations should take steps to inventory their devices that are on the network and do not support monitoring tools”.

  • This unpatched DNS bug could put 'well-known' IoT devices at risk

    Researchers at IoT security firm Nozomi Networks are warning that a popular library for the C programming language for IoT products is vulnerable to DNS cache-poisoning attacks. The bug is 10 years old and, at present, has not been fixed by its maintainers.

    Nozomi security researcher Andrea Palanca discovered that the Domain Name System (DNS) implementation of the uClibc and uClibc-ng C libraries used in several popular IoT products generates predictable, incremental transaction identifiers (IDs) in DNS response and request network communications.


    uClibc stopped being maintained in 2012 after the release of version uClibc-0.9.33.2, while the uClibc-ng fork is designed for use within OpenWRT, a common OS for routers “possibly deployed throughout various critical infrastructure sectors”, according to Palanca.

    uClibc is also known to be used by Linksys, Netgear, and Axis, and in Linux distributions such as Embedded Gentoo, notes Palanca. Nozomi has opted not to disclose the specific IoT devices it tested because the bug is unpatched. However, Palanca notes the devices tested were “a range of well-known IoT devices running the latest firmware versions with a high chance of them being deployed throughout all critical infrastructure.” The uClibc-ng fork is a small C library for developing embedded Linux systems, with the advantage of being much smaller than the GNU C Library (glibc).

    Palanca says he reported the issue to ICS-CERT in September to undertake a VINCE (Vulnerability Information and Coordination Environment) case with CERT/CC. In April, CERT/CC approved his request to proceed with vulnerability disclosure on May 2. The issue is being tracked as ICS-VU-638779, VU#473698. CERT/CC invited uClibc-ng’s maintainer to the VINCE case in mid-March, but the developer said he was unable to implement the fix himself and suggested sharing the vulnerability report on the mailing list with a “rather small community” that might be able to help implement a fix.

    Six months on from the original bug report to ICS-CERT, the bug remains unpatched and serves as a reminder of the challenges in open-source software security and, more broadly, the software supply chain, due to a lack of developer resources and funding. The main risk of DNS-poisoning attacks is that they can force an authentication response. DNS, often described as the ‘phonebook of the internet’, is responsible for translating domain names into IP addresses.
    A DNS-poisoning attack involves an attacker poisoning DNS records to dupe a DNS client into accepting a forged response, making a program reroute network communication to an endpoint the attacker controls rather than the correct one.

    While testing an unnamed IoT device, Palanca noticed the transaction IDs — one of the two secret parameters in the query-response communication — were incremental. These IDs were generated by uClibc 0.9.33.2, which its original maintainer released in May 2012. “To have a DNS response accepted for a certain DNS request, the aforementioned 5-tuple, the query, and the transaction ID must be correctly set,” explains Palanca in a blogpost.

    He says that — because the protocol is DNS, the destination port is publicly known, the query is the target an attacker wants to compromise, the source IP address is the target machine, and the destination IP address is the address of the DNS server in use on a certain network — the only unknowns are the source port and the transaction ID. “It is vital that these two parameters are as unpredictable as possible, because if they are not, a poisoning attack could be possible,” notes Palanca. “Given that the transaction ID is now predictable, to exploit the vulnerability an attacker would need to craft a DNS response that contains the correct source port, as well as win the race against the legitimate DNS response incoming from the DNS server.

    “Exploitability of the issue depends exactly on these factors. As the function does not apply any explicit source port randomization, it is likely that the issue can easily be exploited in a reliable way if the operating system is configured to use a fixed or predictable source port.”

    Palanca notes that modern Linux kernels enable OS-level source port randomization, making the bug more difficult to exploit for DNS-poisoning attacks.
    However, if an attacker has enough bandwidth, they might be able to “brute-force the 16 bit source port value by sending multiple DNS responses, while simultaneously winning the race against the legitimate DNS response.”
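    To see why an incremental transaction ID is fatal, consider a toy model. This is an assumption-laden illustration: it reduces the reported uClibc behaviour to a bare counter and ignores the source-port race entirely.

```python
def victim_next_txid(state: dict) -> int:
    # Simplified model of the reported behaviour: the DNS transaction ID
    # is just an incrementing 16-bit counter (not the actual library code).
    state["txid"] = (state["txid"] + 1) & 0xFFFF
    return state["txid"]

def attacker_guesses(observed_txid: int, window: int = 1) -> list:
    # An off-path attacker who has observed (or inferred) one ID can
    # predict the next ones exactly. With a properly random 16-bit ID,
    # each forged packet would match with probability only 1/65536.
    return [(observed_txid + i) & 0xFFFF for i in range(1, window + 1)]

state = {"txid": 0x1A2B}
observed = victim_next_txid(state)   # ID from a query the attacker saw
forged = attacker_guesses(observed)  # IDs to stamp on forged responses
actual = victim_next_txid(state)     # the victim's next real query ID
print(actual in forged)              # → True: one forged packet matches
```

    This is why Palanca stresses unpredictability: once the transaction ID is a known sequence, the attacker’s remaining obstacle is only the source port, and a fixed or predictable port collapses the attack to a simple race.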

  • Transport for NSW struck by cyber attack

    Written by Aimee Chanthadavong, Senior Journalist

    Transport for NSW has confirmed its Authorised Inspection Scheme (AIS) online application was impacted by a cyber incident in early April. The AIS authorises examiners to inspect vehicles to ensure a minimum safety standard. To become an authorised examiner, an online application needs to be submitted, which requires applicants to share personal details including their full name, address, phone number, email address, date of birth, and driver’s licence number.

    According to Transport for NSW, the incident saw an unauthorised third party successfully access a “small number” of the application’s user accounts. “We recognise that data privacy is paramount and deeply regret that customers may be affected by this attack,” Transport for NSW said. “Scammers may try to capitalise on these events. Customers should not respond to unsolicited phone calls, emails or text messages from anyone claiming to be from Transport for NSW related to any security matter.”

    Transport for NSW said it is notifying affected examiners individually and will provide options to help them avoid further impacts from the incident. Additionally, security measures have been put in place, Transport for NSW said, and monitoring of the application continues.

    This latest breach comes just over a year after Transport for NSW said it was impacted by a cyber attack on a file transfer system owned by Accellion. The Accellion system was widely used to share and store files by organisations around the world, including Transport for NSW.

    At the end of last year, the state’s auditor-general Margaret Crawford found none of NSW’s lead cluster agencies — including Transport — had implemented all Essential Eight controls, which was a cause for “significant concern”. “Key elements to strengthen cybersecurity governance, controls, and culture are not sufficiently robust and not consistently applied. There has been insufficient progress to improve cyber security safeguards across NSW government agencies,” the auditor-general wrote in a compliance report [PDF] about the state’s cybersecurity capabilities.