More stories

  •

    Coffee machines, cuddly toys and cars: The Internet of Things devices which could put you at risk from hackers

    Teddy bears, coffee machines and cars are among the unusual Internet of Things (IoT) devices being insecurely connected to corporate networks, which could leave whole organisations open to cyber attacks.
    A research paper by Palo Alto Networks details the surge in IoT devices being connected to corporate networks and their wide variety.
    Some of the most common irregular devices connected to organisations’ networks include vehicles, toys and medical devices, with sports equipment such as fitness trackers and gaming devices also being deployed.
    These devices are being connected because they can often help people through the working day or help manage aspects of their personal life, but they’re also creating additional problems for the corporate network.
    In many cases, these ‘shadow IoT’ devices are being added to the network without the knowledge of the security team.
    SEE: Cybersecurity: Let’s get tactical (ZDNet/TechRepublic special feature) | Download the free PDF version (TechRepublic)
    This could leave the corporate network vulnerable: not only do some IoT devices have such poor security that they can easily be discovered and exploited, but many workplaces still run flat networks, meaning that if one device is compromised, an attacker can move from the IoT product to other systems.
    “If a device has an IP address it can be found. Sadly all too often they fail to have the most basic or complete lack of cyber security controls, using standard passwords, having no patching process and no basic firewall controls,” Greg Day, VP and CSO for EMEA at Palo Alto Networks, told ZDNet.
    “Considering some are so cheap, the cost of adding security simply isn’t considered viable”.
    Even IoT devices which have been connected to the network by the organisation itself can contain security vulnerabilities which can allow hackers to gain full access to the network. One famous example of this saw cyber criminals exploit a connected fish tank to hack into the network of a casino and steal information about customers.
    Many organisations need to get a better handle on the IoT devices connected to the corporate network; only then can they look to secure them from being exploited if they’re discovered by cyber attackers.
    The key to this is being able to see the devices on the network and ensuring that IoT products are segmented so they can’t serve as a gateway to a bigger, more extensive attack.
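A minimal, illustrative sketch of the segmentation check Day describes (the device names, categories and VLAN IDs below are invented, not from the report): flag any IoT device that shares a network segment with a critical system.

```python
# Hypothetical sketch: flag 'shadow IoT' devices that share a network
# segment (VLAN) with critical systems -- the flat-network risk described
# above. All names, categories and VLAN IDs are illustrative assumptions.

def find_flat_network_risks(inventory):
    """Return names of IoT devices placed on the same VLAN as a critical system."""
    critical_vlans = {d["vlan"] for d in inventory if d["category"] == "critical"}
    return [d["name"] for d in inventory
            if d["category"] == "iot" and d["vlan"] in critical_vlans]

inventory = [
    {"name": "db-server-01",    "category": "critical", "vlan": 10},
    {"name": "coffee-machine",  "category": "iot",      "vlan": 10},  # flat network!
    {"name": "fitness-tracker", "category": "iot",      "vlan": 30},  # segmented
]

print(find_flat_network_risks(inventory))  # -> ['coffee-machine']
```

In a real deployment the inventory would come from the discovery and monitoring tooling Day mentions, not a hard-coded list.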
    “We live in a business world where IoT rightly opens up new business opportunities which should be embraced.  However, businesses need to know what and why something connected into their digital processes,” said Day.
    “Businesses need to be able to identify new IoT devices, outline what normal looks like to define what it should connect with – the segmentation part – and of course also monitor to check it does as it is predicted, to recognise any threats or risk,” he added.

  •

    Half of US citizens would share medical data beyond COVID-19 despite surveillance state worries

    Over half of US citizens are estimated to be willing to share their medical data and records during COVID-19 and beyond, but fears of a surveillance state remain. 

    As the number of confirmed novel coronavirus cases edges close to 30 million worldwide, governments are seeking ways, if not to eradicate infections, then at least to mitigate their impact on existing medical systems and reduce the pressure on hospitals dealing with the most severe cases. 
    One of the methods proposed is contact tracing, a concept based on individuals providing their details to places they visit — such as pubs or restaurants — as well as downloading mobile apps that automatically alert users if they have been in contact with a confirmed COVID-19 case. 
    Mobile-app based track-and-trace systems are at varying levels of development; Protect Scotland has recently rolled out and EU states have begun testing a region-wide interoperability gateway, whereas the UK’s promised “world-beating” system is a shambles.
    See also: Google wants to make it easier to analyse health data in the cloud
    These types of apps may be able to track the spread of COVID-19 throughout a population, but privacy remains a concern, especially if user mobile and location data end up in centralized servers able to be accessed by government agencies for purposes other than curbing the pandemic. 
    However, in the United States, at least, many are willing to try them out for the common good. 
    On Wednesday, Virtru published the results of a study exploring US attitudes on contact tracing and the release of their medical records in the fight against COVID-19.  
    The research is based on a survey conducted by The Harris Poll for Virtru in July and contains the responses of over 2,000 US citizens aged 18 and over. 
    In total, just over half of US citizens — 52% — said they were willing to share their medical records, even beyond COVID-19, with government agencies if this would help the pandemic response and healthcare in general. If they are given control over access to their own information and are able to block access or delete data at any time, 61% would be willing to do so. 
    However, when it comes to the information harvested from contact tracing apps, such as location and user data, 42% of survey respondents were confident in their privacy being respected. 
    CNET: Razer leak exposes thousands of customers’ private data
    In total, the most confidence is felt in tracing apps provided by healthcare providers and technology companies, with 34% and 28% of respondents saying they would trust them, respectively. 
    However, 58% are not confident when it comes to state and technology vendor-based app security and privacy. The idea of a “surveillance state” is in the mind of many, too, due to the US’ well-known mass surveillance programs, FISA, bulk data collection, and attempts to force technology providers to deliberately install backdoors into encrypted services. 
    In total, 62% of participants cited these issues as a potential barrier to their willingness in sharing health records beyond COVID-19 test results with government agencies. Overall, 31% of respondents said the government’s attitude on surveillance has a “major impact” on their willingness to share sensitive medical information. 
    TechRepublic: Top 10 antivirus software options for security-conscious users
    “As we continue to battle the pandemic, and at a time when trust in each other and institutions is most critical, we’re living in a massive trust deficit,” said Virtru CEO John Ackerly. “While we all love the convenience and access technology has afforded us, our personal information has become an economic engine and even a weapon, and as a result, we have very little control over it. So when we’re asked to give our most sensitive health information over to someone else, it’s understandable to fear that the data may be used and shared beyond what is asked.”
    Previous and related coverage
    Have a tip? Get in touch securely via WhatsApp | Signal at +447713 025 499, or over at Keybase: charlie0

  •

    Grab must review data policies following security breaches

    Grab must reassess its cybersecurity framework, especially after the mobile app platform reported a series of breaches that compromised its customers’ data. The latest security incident has prompted Singapore’s Personal Data Protection Commission (PDPC) to impose a fine of SG$10,000 ($7,325) and order a review of the company’s data protection policies within 120 days. 
    The August 30, 2019, breach came to light when Grab informed the PDPC that changes it made to its mobile app had put its drivers’ data at risk of unauthorised access. Further investigations revealed that the personal information of 21,541 GrabHitch drivers and passengers was exposed, including vehicle numbers, passenger names, and e-wallet balances comprising a history of ride payments. 
    Grab had deployed an update to plug a potential vulnerability in its API (application programming interface), but this resulted in the data breach. 

    In its report, the PDPC noted that Grab had made changes to its systems without ensuring “reasonable security arrangements” were put in place to prevent any compromise of personal datasets. The lack of sufficiently robust processes to manage changes to its IT systems was a “particularly grave error” since it was the second time the vendor had made a similar mistake, with the first affecting a different system. 
    The commission noted that Grab had made changes to its app without understanding how such modifications would operate with existing features of its app and its broader IT system. 
    It also did not conduct proper scoping tests before deploying updates to its app, the PDPC said, noting that organisations were obliged to do so before introducing new IT features or changes to their systems. “These tests need to mimic real-world usage, including foreseeable scenarios in a normal operating environment when the changes are introduced. Such tests prior to deployment are critical to enable organisations to detect and rectify errors in the new IT features and/or be alerted to any unintended effects from changes that may put personal data at risk,” the commission said. 
    It added that Grab had admitted it did not conduct tests to simulate multiple users accessing its app or specific tests to verify how the caching mechanism — which was the component that resulted in the breach — would work in tandem with the update.
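The caching bug the PDPC describes belongs to a well-known class, and the kind of multi-user test Grab admitted it skipped is straightforward to sketch. The code below is an invented toy, not Grab’s actual implementation: a cache key that omits the user ID serves one user’s profile to another, which a two-user test catches immediately.

```python
# Illustrative sketch (not Grab's actual code): a cache whose key omits the
# user ID leaks one user's profile to another -- exactly the class of bug a
# multi-user test run before deployment would catch.

PROFILES = {"alice": {"wallet": 42.0}, "bob": {"wallet": 7.5}}
cache = {}

def get_profile_buggy(user):
    key = "profile"            # BUG: cache key ignores which user is asking
    if key not in cache:
        cache[key] = PROFILES[user]
    return cache[key]

def get_profile_fixed(user):
    key = f"profile:{user}"    # cache key scoped per user
    if key not in cache:
        cache[key] = PROFILES[user]
    return cache[key]

# Simulated multi-user test: two users hit the same endpoint in sequence.
assert get_profile_buggy("alice") == PROFILES["alice"]
assert get_profile_buggy("bob") == PROFILES["alice"]   # leak: bob sees alice's data
assert get_profile_fixed("bob") == PROFILES["bob"]     # scoped key: no leak
```

A single-user test would pass against the buggy version, which is why the PDPC insists tests mimic real-world, multi-user usage.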
    Underscoring the fact that the company had now breached Section 24 of Singapore’s PDPA four times, the PDPC said this was “significant cause for concern”, especially given that Grab’s business involved processing large volumes of personal data on a daily basis. Section 24 outlines the need for organisations to protect personal data in their possession or under their control by making “reasonable security arrangements” to prevent unauthorised access, collection, use, disclosure, copying, modification, or similar risks.
    Singapore-based Grab, which started out as a ride-sharing operator, now offers a service portfolio that includes food delivery, digital payments, and insurance. It also announced its bid for a digital bank licence, alongside partner Singtel, in Singapore, where both companies would target “digital-first” consumers and small and midsize businesses. The partnership would lead to a joint entity, in which Grab would own a 60% stake. Grab has operations across eight Asia-Pacific markets including Indonesia, Malaysia, Thailand, and Vietnam.
    In addition to the fine, the PDPC also instructed Grab to put in place a “data protection by design policy” for its mobile applications within 120 days, in order to reduce the risk of another data breach.
    ZDNet asked Grab several questions including specific areas the company planned to review, security policies it put in place following the initial breach, and steps it had taken to ensure security was built into its various processes as the company introduced new services in recent years.
    It did not respond to any of these questions and, instead, replied with a statement it had previously released: “The security of data and the privacy of our users is of utmost importance to us and we are sorry for disappointing them. When the incident was discovered on August 30, 2019, we took immediate actions to safeguard our users’ data and self-reported it to the PDPC. To prevent a recurrence, we have since introduced more robust processes, especially pertaining to our IT environment testing, along with updated governance procedures and an architecture review of our legacy application and source codes.”
    Data policy in need of “serious review”
    That it had violated the PDPA four times since 2018 seemed to indicate Grab was in need of a “serious review”, noted Ian Hall, Synopsys Software Integrity Group’s Asia-Pacific manager of client services. In particular, the company should assess its release processes, where required testing and checkpoints must be passed before the release of its app.
    Citing a study by Enterprise Strategy Group, he noted that it was common for vulnerable code to be moved to production, typically due to a company’s need to meet deadlines. 

    Aaron Bugal, Sophos’ global solutions engineer, concurred, noting that Grab’s brushes with security were “a classic example” of an organisation that was rapidly expanding but not scaling its security policies and technical controls proportionately. “Given this is another issue with its application on mobile devices, it would be wise to look at a third-party service that evaluates the security of the app before its release,” Bugal told ZDNet in an email interview.
    Asked if it was challenging for companies such as Grab, which had rapidly expanded their service portfolio, to ensure security remained robust, Hall said it certainly would be more difficult to maintain increasingly complex apps that covered a wide range of functionalities. 
    He explained that certain legacy code sections might not be updated as frequently as newer code and, at the same time, newer code might also introduce new vulnerabilities. 
    “Developers may tend to focus their efforts on newer codes and going back to fix a vulnerability in the legacy code portions may be more difficult,” he said. “This is why it is always better to find and fix issues earlier in the development lifecycle and for security tools to be well integrated to development processes.”
    Bugal noted that more customer data would be captured as organisations grew their business, and security measures should scale alongside the app and data collected. 
    He added that any changes to a company’s operational model should incorporate a security architecture from the conceptual stages. “This is not something that’s retrospectively bolted on, or thought of, once the changes are released,” he said.
    According to Hall, developers often inadvertently introduced vulnerabilities because they were not security experts. He noted that some of the most common vulnerabilities emerged from improper use of Google’s Android or Apple’s iOS mobile platforms, insecure data storage, and insecure communication. 
    Bugal added that several organisations also used outdated development tools and would not implement services that evaluated the libraries and shared code that many applications used as a base. “These can sometimes introduce vulnerabilities into an application through no fault of the application developer,” he explained. “Using modernised development environments and including security designs and evaluations of applications during the formative and release phases are integral to better security.”
    He noted that changes to mobile apps typically were automatically accepted by app store fronts and applied to mobile devices upon their release, leaving mobile consumers “at the mercy of the developer to do the right thing” with regards to application design and overall security. 
    “As consumers, we should understand what data an organisation is collecting, how they store it, and understand the risk if that data was to ever leak,” he said. 
    Hall added: “I would recommend users of mobile and other devices keep both their apps and operating systems updated. Also, use apps and providing personal details only to companies and apps that you trust. On the Android platform, we can disable particular permissions on apps that should not have access to them.”

  •

    Adobe out-of-band patch released to tackle Media Encoder vulnerabilities

    Adobe has released an out-of-band patch to resolve a trio of vulnerabilities discovered in Media Encoder.

    Adobe Media Encoder, software used to encode audio and video in different formats, is the sole subject of the security update issued outside of the company’s usual monthly release.
    On Tuesday, Adobe said that three vulnerabilities — CVE-2020-9739, CVE-2020-9744, and CVE-2020-9745 — are out-of-bound read security flaws “that could lead to information disclosure in the context of the current user.”
    See also: Adobe Experience Manager, InDesign, Framemaker receive fixes for critical bugs in new update
    Reported to Adobe by cybersecurity researcher Radu Motspan, the bugs are deemed “important” and impact Adobe Media Encoder version 14.4 on Windows and Mac machines. 
    However, each vulnerability has only been awarded a priority rating of 3, which Adobe says means the software at hand has “historically not been a target for attackers.”
    As always, it is recommended that users accept automatic software updates to patch their builds to stay protected. 
    Last week, the software giant released its September security patch update, tackling vulnerabilities in Adobe Experience Manager, InDesign, and Framemaker.
    Critical and important vulnerabilities in the products were resolved, including cross-site scripting (XSS) issues, memory corruption bugs, and security issues leading to arbitrary code execution, including those within a browser session.
    In related news, on Tuesday, Adobe reported third-quarter financial results that beat analyst expectations. Adobe reported profits of $955 million, or $1.97 a share, and non-GAAP EPS of $2.41 on revenue of $3.16 billion.  

  •

    New MrbMiner malware has infected thousands of MSSQL databases

    Image: Caroline Grondin, Microsoft, ZDNet

    A new malware gang has made a name for itself over the past few months by hacking into Microsoft SQL Servers (MSSQL) and installing a crypto-miner.
    Thousands of MSSQL databases have been infected so far, according to the cybersecurity arm of Chinese tech giant Tencent.
    In a report published earlier this month, Tencent Security has named this new malware gang MrbMiner, after one of the domains used by the group to host their malware.
    The Chinese company says the botnet has exclusively spread by scanning the internet for MSSQL servers and then performing brute-force attacks by repeatedly trying the admin account with various weak passwords.
    Once the attackers gained a foothold on a system, they downloaded an initial assm.exe file, which they used to establish a (re)boot persistence mechanism and to add a backdoor account for future access. Tencent says this account uses the username “Default” and a password of “@fg125kjnhn987.”
    The last step of the infection process was to connect to the command and control server and download an app that mines the Monero (XMR) cryptocurrency by abusing local server resources and generating XMR coins into accounts controlled by the attackers.
    Linux and ARM variants also discovered
    Tencent Security says that while they saw only infections on MSSQL servers, the MrbMiner C&C server also contained versions of the group’s malware written to target Linux servers and ARM-based systems.
    After analyzing the Linux version of the MrbMiner malware, Tencent experts said they identified a Monero wallet where the malware generated funds.
    The address contained 3.38 XMR (~$300), suggesting that the Linux versions were also being actively distributed, although details about these attacks remain unknown for now.
    The Monero wallet used for the MrbMiner version deployed on MSSQL servers stored 7 XMR (~$630). While the two sums are small, crypto-mining gangs are known to use multiple wallets for their operations, and the group has most likely generated much larger profits.
    For now, system administrators should scan their MSSQL servers for the presence of the Default/@fg125kjnhn987 backdoor account. If they find systems with this account configured, full network audits are recommended.
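A minimal sketch of that check, assuming login names have already been fetched from the server (for example via `SELECT name FROM sys.server_principals` using a driver such as pyodbc); the helper function and sample data here are illustrative, with the result set simulated:

```python
# Sketch of the backdoor-account check described above. In practice you
# would fetch login names from the server, e.g.
#   SELECT name FROM sys.server_principals
# via a driver such as pyodbc; here the result set is simulated.

BACKDOOR_LOGIN = "Default"   # account name MrbMiner adds, per Tencent's report

def find_backdoor_logins(login_names):
    """Return any login matching the MrbMiner backdoor account name."""
    return [name for name in login_names if name == BACKDOOR_LOGIN]

# Simulated result of the sys.server_principals query:
logins = ["sa", "NT AUTHORITY\\SYSTEM", "Default", "app_user"]
print(find_backdoor_logins(logins))  # -> ['Default']  => run a full network audit
```

Note that a login named "Default" could conceivably be legitimate in some environments, so a hit should trigger investigation rather than automatic deletion.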

  •

    Billions of devices vulnerable to new 'BLESA' Bluetooth security flaw

    Image: ZDNet

    Billions of smartphones, tablets, laptops, and IoT devices are using Bluetooth software stacks that are vulnerable to a new security flaw disclosed over the summer.
    Named BLESA (Bluetooth Low Energy Spoofing Attack), the vulnerability impacts devices running the Bluetooth Low Energy (BLE) protocol.
    BLE is a slimmer version of the original Bluetooth (Classic) standard, designed to conserve battery power while keeping Bluetooth connections alive as long as possible.
    Due to its battery-saving features, BLE has been massively adopted over the past decade, becoming a near-ubiquitous technology across almost all battery-powered devices.
    As a result of this broad adoption, security researchers and academics have also repeatedly probed BLE for security flaws across the years, often finding major issues.
    Academics studied the Bluetooth “reconnection” process
    However, the vast majority of previous research into BLE security issues has focused almost exclusively on the pairing process, ignoring large chunks of the BLE protocol.
    In a research project at Purdue University, a team of seven academics set out to investigate a section of the BLE protocol that plays a crucial role in day-to-day BLE operations but has rarely been analyzed for security issues.
    Their work focused on the “reconnection” process. This operation takes place after two BLE devices (the client and server) have authenticated each other during the pairing operation.
    Reconnections take place when Bluetooth devices move out of range and then move back into range again later. Normally, when reconnecting, the two BLE devices should check each other’s cryptographic keys negotiated during the pairing process, and reconnect and continue exchanging data via BLE.
    But the Purdue research team said it found that the official BLE specification didn’t contain strong-enough language to describe the reconnection process. As a result, two systemic issues have made their way into BLE software implementations, down the software supply chain:
    • The authentication during device reconnection is optional instead of mandatory.
    • The authentication can potentially be circumvented if the user’s device fails to force the IoT device to authenticate the communicated data.
    These two issues leave the door open to a BLESA attack, during which a nearby attacker bypasses reconnection verification and sends spoofed data with incorrect information to a BLE device, inducing human operators and automated processes into making erroneous decisions. See a trivial demo of a BLESA attack below.
    [embedded content]
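The flaw above can be illustrated with a toy model (all names and keys below are invented; this is not a real BLE stack): a client that treats reconnection authentication as optional accepts data from a spoofed server, while one that enforces the key negotiated at pairing rejects it.

```python
# Toy model of the BLESA flaw (illustrative only, not a real BLE stack):
# a client that treats reconnection authentication as optional accepts
# spoofed data; enforcing the key negotiated at pairing rejects it.

PAIRED_KEY = "key-negotiated-at-pairing"

def reconnect(server_key, payload, enforce_auth):
    """Return the payload if the reconnection is accepted, else None."""
    if enforce_auth and server_key != PAIRED_KEY:
        return None                      # spoofed server rejected
    return payload                       # accepted blindly when auth is optional

spoofed = reconnect("attacker-key", "fake sensor reading", enforce_auth=False)
assert spoofed == "fake sensor reading"  # BLESA: spoofed data accepted

assert reconnect("attacker-key", "fake sensor reading", enforce_auth=True) is None
assert reconnect(PAIRED_KEY, "real sensor reading", enforce_auth=True) == "real sensor reading"
```

The fix BlueZ describes amounts to always taking the `enforce_auth=True` path in this model: reconnection data is only accepted under the cryptographic keys established at pairing.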
    Several BLE software stacks impacted
    However, despite the vague language in the specification, the issue has not made it into all real-world BLE implementations.
    Purdue researchers said they analyzed multiple software stacks that have been used to support BLE communications on various operating systems.
    Researchers found that BlueZ (Linux-based IoT devices), Fluoride (Android), and the iOS BLE stack were all vulnerable to BLESA attacks, while the BLE stack in Windows devices was immune.
    “As of June 2020, while Apple has assigned the CVE-2020-9770 to the vulnerability and fixed it, the Android BLE implementation in our tested device (i.e., Google Pixel XL running Android 10) is still vulnerable,” researchers said in a paper published last month.
    As for Linux-based IoT devices, the BlueZ development team said it would deprecate the part of its code that opens devices to BLESA attacks, and, instead, use code that implements proper BLE reconnection procedures, immune to BLESA.
    Another patching hell
    Sadly, just like with all the previous Bluetooth bugs, patching all vulnerable devices will be a nightmare for system admins, and patching some devices might not be an option.
    Some resource-constrained IoT equipment that has been sold over the past decade and already deployed in the field today doesn’t come with a built-in update mechanism, meaning these devices will remain permanently unpatched.
    Defending against most Bluetooth attacks usually means pairing devices in controlled environments, but defending against BLESA is a much harder task, since the attack targets the more often-occurring reconnect operation.
    Attackers can use denial-of-service bugs to make Bluetooth connections go offline and trigger a reconnection operation on demand, and then execute a BLESA attack. Safeguarding BLE devices against disconnects and signal drops is impossible.
    Making matters worse, based on previous BLE usage statistics, the research team believes that the number of devices using the vulnerable BLE software stacks is in the billions.
    All of these devices are now at the mercy of their software suppliers, currently awaiting a patch.
    Additional details about the BLESA attack are available in a paper titled “BLESA: Spoofing Attacks against Reconnections in Bluetooth Low Energy” [PDF]. The paper was presented at the USENIX WOOT 2020 conference in August. A recording of the Purdue team’s presentation is embedded below.
    [embedded content]

  •

    US charges two hackers for defacing US websites following Soleimani killing

    Image: Catalin Cimpanu


    The US Department of Justice has today charged two hackers with orchestrating a mass-defacement campaign against US websites following the killing of Iranian military general Qasem Soleimani by US forces earlier this year.
    According to an indictment unsealed today, the two hackers were identified as Behzad Mohammadzadeh (aka Mrb3hz4d), 19, from Iran, and Marwan Abusrour (aka Mrwn007), 25, from Palestine.
    Mohammadzadeh, considered the primary perpetrator of the attacks, was accused of breaking into at least 51 US websites and posting images of the late Soleimani and messages such as “Down with America.”
    The defacements primarily hit US-hosted domains and started on January 3, a day after US officials announced the killing of General Qasem Soleimani in a drone strike on his car near Baghdad International Airport.
    According to the indictment, following this announcement, Mohammadzadeh began a wide-ranging hacking campaign.
    While the indictment accused Mohammadzadeh of defacing 51 websites, US officials say that a profile on Zone-H, a website where hackers often index and brag about their defacements, lists more than 1,100 websites defaced by the Iranian hacker, with 400 of these sites showing pro-Soleimani messages.

    Image: ZDNet
    In all of this, Abusrour was charged with a minor role. Prosecutors said that the young Palestinian provided Mohammadzadeh with access to seven websites that his Iranian counterpart later defaced as part of his larger campaign.
    Nonetheless, US officials said that Abusrour also had a history of defacing websites, with his hacker moniker found on more than 337 websites defaced with pro-Palestinian messages dating back to June 2016.
    The defacements executed by the two hackers received considerable media coverage earlier this year. However, the coverage was slightly over-hyped, with some news outlets describing these low-level hacks as the Iranian government’s response, part of an upcoming “nuclear cyber war.”
    Nothing of the sort happened, and the most high-profile website hacked by Mohammadzadeh was the portal for the US Federal Depository Library Program, which was almost immediately taken down and restored following the defacement.
    The defacements, although at the lower end of the spectrum of cyber-attacks, are still illegal. The two hackers have now been charged and risk sentences of up to 10 years in prison and fines of up to $250,000, if found guilty, according to the DOJ.
    Both hackers remain at large.

  •

    Microsoft: Windows 10 is hardened with these fuzzing security tools – now they're open source

    Microsoft has released a new open-source security tool called Project OneFuzz, a testing framework for Azure that brings together multiple software security testing tools to automate the process of detecting crashes and bugs that could be security issues.
    Google’s open-source fuzzing bots have helped it detect thousands of bugs in its own software and other open-source software projects. Now Microsoft is releasing its answer to the same challenge for software developers. 

    Project OneFuzz is available on GitHub under an open-source MIT license like Microsoft’s other open-source projects, such as Visual Studio Code, .NET Core and the TypeScript programming language for JavaScript.
    Microsoft describes Project OneFuzz as an “extensible fuzz testing framework for Azure”. 
    Fuzzing essentially involves throwing random code at software until it crashes, potentially revealing security issues but also performance problems. 
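That core idea can be sketched in a few lines; the toy parser and fuzzing loop below are purely illustrative and have nothing to do with OneFuzz’s internals:

```python
import random

# Minimal illustration of fuzzing (not OneFuzz itself): throw random
# inputs at a deliberately buggy toy parser and record any input that
# makes it crash, so the crashing inputs can be triaged later.

def toy_parser(data: bytes):
    """Deliberately buggy parser: crashes whenever the input contains 0xFF."""
    if 0xFF in data:
        raise ValueError("unhandled byte")
    return len(data)

def fuzz(target, iterations=1000, seed=0):
    rng = random.Random(seed)            # fixed seed: reproducible runs
    crashes = []
    for _ in range(iterations):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 16)))
        try:
            target(data)
        except Exception:
            crashes.append(data)         # save the crashing input for triage
    return crashes

crashes = fuzz(toy_parser)
print(f"found {len(crashes)} crashing inputs")
```

Real fuzzers such as libFuzzer add coverage feedback, corpus mutation, and sanitizer integration on top of this basic generate-run-record loop.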
    Google has been a major proponent of the technique, pushing coders and security researchers towards fuzzing utilities and techniques. Its open-source fuzzers include OSS-Fuzz and ClusterFuzz. 
    OSS-Fuzz is available for developers to download from GitHub and use on their own code. It’s also available as a cloud service for select open-source projects. 
    Microsoft previously announced that it would replace its existing software testing toolset known as Microsoft Security and Risk Detection with the automated, open-source fuzzing tool. 
    The Redmond company also says it’s solving a different and expensive challenge for all businesses that employ software developers, and gives credit to Google for pioneering the technology. 
    OneFuzz is the same testing framework Microsoft uses to probe Edge, Windows and other products at the company. It’s already helped Microsoft harden Windows 10, according to Microsoft.
    “Fuzz testing is a highly effective method for increasing the security and reliability of native code – it is the gold standard for finding and removing costly, exploitable security flaws,” said Microsoft Security’s Justin Campbell, a principal security software engineering lead, and Mike Walker, a senior director, special projects management. 
    “Traditionally, fuzz testing has been a double-edged sword for developers: mandated by the software-development lifecycle, highly effective in finding actionable flaws, yet very complicated to harness, execute, and extract information from. 
    “That complexity required dedicated security engineering teams to build and operate fuzz-testing capabilities making it very useful but expensive. Enabling developers to perform fuzz testing shifts the discovery of vulnerabilities to earlier in the development lifecycle and simultaneously frees security engineering teams to pursue proactive work.” 
    As Microsoft notes, “recent advancements in the compiler world, open-sourced in LLVM and pioneered by Google, have transformed the security engineering tasks involved in fuzz testing native code”. 
    These advances make it cheaper for developers to take capabilities that were once attached via external tools and instead bake them into continuous build systems, according to Microsoft. This includes crash detection, previously attached via tools such as Electric Fence, which can now be baked in with ASan (AddressSanitizer). 
    Similarly, coverage tracking, previously attached via tools such as iDNA, DynamoRIO, and Pin, is now built in with SanitizerCoverage (sancov).
    “Input harnessing, once accomplished via custom I/O harnesses, can be baked in with libfuzzer’s LLVMFuzzerTestOneInput function prototype,” Campbell and Walker note. 
    Microsoft has also been adding experimental support for these features to Visual Studio so that test binaries can be built by the compiler, allowing developers to avoid the need to build them into a continuous integration (CI) or continuous delivery (CD) pipeline. It also helps developers scale fuzzing workloads in the cloud.