More stories

  •

    Ransomware warning: Now attacks are stealing data as well as encrypting it

    There’s now an increasing chance of getting your data stolen, in addition to your network being encrypted, when you are hit with a ransomware attack – which means falling victim to this kind of malware is now even more dangerous.
    The prospect of being locked out of the network by cyber criminals is damaging enough, but by leaking stolen data, hackers are creating additional problems. Crooks use the stolen data as leverage, effectively trying to bully organisations who’ve become infected with ransomware into paying up – rather than trying to restore the network themselves – on the basis that if no ransom is paid, private information will be leaked.
    Ransomware groups like those behind Maze and Sodinokibi have already shown they’ll go ahead and publish private information if they’re not paid, and the tactic is becoming increasingly common, with over one in ten attacks now combining blackmail with extortion.
    Analysing submissions to ID Ransomware – a site that allows people to identify ransomware – researchers at Emsisoft found that of 100,000 submissions related to ransomware attacks between January and June this year, 11,642 – just over 11 percent – involved ransomware families that overtly attempt to steal data.
    Organisations in the legal, healthcare and financial sectors are among the most targeted by these campaigns, based on the assumption that they hold the most sensitive data.

    And researchers warn that the percentage of ransomware attacks which steal data could be even higher, because some will do it discreetly, potentially using the stolen information as the basis for additional attacks.
    “All ransomware groups have the ability to exfiltrate data. While some groups overtly steal data and use the threat of its release as additional leverage to extort payment, other groups likely covertly steal it,” the researchers wrote in a blog post.
    “While groups that steal covertly may not exfiltrate as much data as groups seeking to use it as leverage, they may well extract any data that has an obvious and significant market value or which can be used to attack other organizations”.
    The prospect of suffering a data breach in addition to a ransomware attack is worrying for organisations because even if the network is restored, the leak can cause other problems with customers or regulators.
    Exfiltration and encryption attacks will become increasingly standard practice and both the risks and the costs associated with ransomware incidents will continue to increase, warned researchers.
    However, it’s possible for organisations to avoid falling victim to ransomware in the first place – or at least to limit the damage it can do – by following some cybersecurity hygiene basics.
    These include applying security patches to protect against known vulnerabilities, disabling remote ports where they’re not needed, and segmenting the network to stop ransomware from getting in – or from spreading quickly if it does. Organisations should also use multi-factor authentication so that even if passwords are known, they can’t be used to gain access to other areas of the network.
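As a quick illustration of the advice to disable unneeded remote ports, here is a minimal, hypothetical Python sketch that checks a host for a few commonly abused remote-access ports; the port list and target host are illustrative, not a substitute for a proper external scan.

```python
import socket

# Illustrative list of commonly abused remote-access ports; not exhaustive.
REMOTE_PORTS = {22: "SSH", 23: "Telnet", 3389: "RDP", 5900: "VNC"}

def audit_open_ports(host, ports=REMOTE_PORTS, timeout=1.0):
    """Return the subset of `ports` accepting TCP connections on `host`."""
    open_ports = {}
    for port, name in ports.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 when the TCP handshake succeeds
            if sock.connect_ex((host, port)) == 0:
                open_ports[port] = name
    return open_ports

if __name__ == "__main__":
    for port, name in sorted(audit_open_ports("127.0.0.1").items()):
        print(f"Port {port} ({name}) is open - disable or firewall it if unused")
```

Running something like this from outside the network perimeter gives a rough picture of which remote-access services an attacker could see.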
    Back-ups should be made regularly and stored offline, and organisations should also have a plan for what they’ll do in the event of ransomware compromising the network.

  •

    Google Cloud steps up privacy, security with Confidential VMs and Assured Workloads

    Google Cloud on Tuesday announced two new security offerings designed for customers with highly-regulated or sensitive data that requires extra protection in the cloud. The first, Confidential VMs, is the initial product in Google’s Confidential Computing portfolio, which promises to let customers keep data encrypted while in use. The second, Assured Workloads for Government, allows customers to configure workloads in a way that meets strict compliance requirements, without having to rely on a siloed “government cloud.”  

    The new tools are chiefly designed for industries with stringent security needs, such as the public sector, health care, and financial services. However, executives stressed that Confidential VMs and Assured Workloads for Government are tools that represent structural changes to the entire Google Cloud Platform, rather than simply bolted-on capabilities. 
    “That’s one of the reasons we believe this is a foundational differentiator for Google Cloud in these regulated markets,” Sunil Potti, Google Cloud’s GM and VP of security, said to reporters.
    Confidential Computing is a “game-changing technology,” Potti said. “It’s almost like the last bastion of sensitive data that can now be unlocked to leverage the full power of the cloud.” 
    For example, Potti said, many financial services firms keep their most sensitive IP around algorithmic trading on-premise because of sensitivities around data processing. Those concerns could be relieved with confidential computing. 

    Google Cloud already encrypts data at rest and in transit. Confidential VMs, currently in beta, add memory encryption to keep workloads isolated. They’re based on a new foundational technology that combines Google’s software IP with AMD hardware. After working closely with AMD to ensure memory encryption wouldn’t significantly interfere with workload performance, Google says the performance metrics of Confidential VMs are close to those of non-confidential VMs. 
    Confidential VMs take advantage of the Secure Encrypted Virtualization (SEV) supported by 2nd Gen AMD Epyc CPUs. Data stays encrypted while it is used, indexed, queried, or trained on. Encryption keys are generated in hardware, per VM, and they are not exportable.
    The primary benefit of using AMD CPUs, Potti said, is that customers don’t have to recompile their applications to take advantage of Confidential VMs. All GCP workloads already running in VMs can run as a Confidential VM — customers just need to check a box. 
    “When we canvassed our customers, that was the biggest feedback we got,” he said. “You don’t want to forklift and redesign your apps. You literally lift and shift your workloads over.”
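For readers using the gcloud CLI rather than the console checkbox, that step maps to a single flag. A minimal sketch, with placeholder instance name, zone, and image:

```shell
# Hypothetical example: instance name, zone, and image below are placeholders.
# Confidential VMs require an AMD EPYC-based N2D machine type and, since live
# migration was unsupported at launch, a TERMINATE maintenance policy.
gcloud beta compute instances create demo-confidential-vm \
    --zone=us-central1-a \
    --machine-type=n2d-standard-2 \
    --confidential-compute \
    --maintenance-policy=TERMINATE \
    --image-family=ubuntu-2004-lts \
    --image-project=ubuntu-os-cloud
```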
    The precursor to Confidential VMs was Shielded VMs, virtual machines hardened by a set of security controls that help defend against rootkits and bootkits. Earlier this year, Google made Shielded VMs the default setting for GCP users — and Google expects to eventually do the same for Confidential VMs, Potti said. 
    Google, Potti said, is “the first major cloud provider to offer this level of security and isolation while giving customers a simple and easy option for new and existing workloads.” 

    Google, however, is one of several technology companies working to make the cloud more secure via confidential computing. The company was among the first to join the Confidential Computing Consortium (CCC), a project launched last year by the Linux Foundation. Other members include Microsoft, IBM, Alibaba, and Intel. Microsoft earlier this year expanded access to VMs that leverage trusted execution environments (TEEs), which secure portions of compute and memory to protect data in use. Meanwhile, IBM earlier this year released an open-source toolkit to let developers experiment with fully homomorphic encryption (FHE), a nascent technology that allows data to remain encrypted while in use. 
    Assured Workloads for Government  
    On the compliance front, Google on Tuesday introduced Assured Workloads for Government. This new tool enables compliance professionals to more easily create controlled environments where US data location and personnel access controls are automatically enforced. The personnel access controls limit which Google support employees can access your data, based on factors such as citizenship, geographical access location, or background checks. 
    The service meets the security and compliance standards required by the Defense Department (DoD IL4), the FBI’s Criminal Justice Information Services Division (CJIS), and the Federal Risk and Authorization Management Program (FedRAMP). The tool is currently available in private beta in US cloud regions only. 
    Typically, to meet government compliance requirements, organizations have to use separate environments known as government clouds, which may not offer the same features as standard commercial clouds. Jeanette Manfra, Google Cloud’s global director of security and compliance, called that a “legacy mindset” that Google is trying to move past. 
    “Our approach is to make the entire commercial cloud a secure and protected one that works just as well for the public sector as it does for the private sector,” she said. 


  •

    Need a cheap camera cover for your Mac that won't break your display or the bank?

    So, Apple recommends that you don’t close the lid of your MacBook if it has a camera cover fitted, because doing so can break your display; instead of using a camera cover, Apple says you should rely on the green glowing light as a privacy indicator.
    There are valid reasons for this, from preventing screen damage to making sure that it doesn’t interfere with features like automatic brightness and True Tone.
    But some people want a camera cover, for a number of very valid reasons.
    What to do?
    The problem with trying to recommend a camera cover is compatibility. Newer MacBooks are built to much tighter tolerances than older models and fitting an incompatible camera cover could cause your display to make an expensive, and sadly permanent, “pop.”

    Buried in Apple’s support post is the following recommendation:
    “If you install a camera cover that is thicker than 0.1mm, remove the camera cover before closing your computer.”

    What’s no thicker than 0.1mm (0.004 inches), comes with a low-tack, no-residue adhesive pre-applied, and costs virtually nothing?
    The humble canary yellow Post-It Note.
    They’re thin (about 0.074mm), the adhesive doesn’t leave a residue, they’re cheap, and they’re found in abundance in offices and home offices.
    I’ve tested them on the new 16-inch MacBook Pro with its super-tight tolerances, and they don’t damage the screen. Just make sure that the Post-It Note overlaps only the camera, not the bezel, when you apply it.
    I recommend using genuine 3M Post-It Notes, not the myriad knock-offs available, because that way you’re guaranteed the tolerances and the quality of the adhesive. I also recommend the yellow ones, because some other colors are a little thicker (although I’ve not found a color that exceeds the 0.1mm limit).
    Cheap, safe, reusable, and pretty much limitless.

  •

    EFF’s new database reveals what tech local police are using to spy on you

    The Electronic Frontier Foundation (EFF) has debuted a new database that reveals how, and where, law enforcement is using surveillance technology in policing strategies. 

    Launched on Monday in partnership with the University of Nevada’s Reynolds School of Journalism, the “Atlas of Surveillance” is described as the “largest-ever collection of searchable data on police use of surveillance technologies.”
    The civil rights and privacy organization says the database was developed to help the general public learn about the accelerating adoption and use of surveillance technologies by law enforcement agencies. 
    The map pulls together thousands of data points from over 3,000 police departments across the United States. Users can zoom in to different locations and find summaries of what technologies are in use, by what department, and track how adoption is spreading geographically. 

    Atlas of Surveillance also highlights specific technologies including body-worn cameras, drones, automated license plate readers, facial recognition, Ring partnerships, and predictive policing, in which data is used to ‘predict’ where and how crimes are likely to take place. 

    It is also possible to directly search the data to investigate local police departments, including what has been adopted in your area and any surveillance-related grants or awards they have received in the past.
    “Atlas of Surveillance documents the alarming increase in the use of unchecked high-tech tools that collect biometric records, photos, and videos of people in their communities, locate and track them via their cell phones, and purport to predict where crimes will be committed,” the EFF says.
    For example, according to EFF’s datasets, ShotSpotter gunshot detection technology has proven popular in many states. The solution uses a combination of sensors, algorithms, and machine learning (ML) to detect and alert law enforcement to gunfire in urban areas.
    The map has been built based on crowdsourced data and journalism over the past 18 months, including news articles, government meeting notes, press releases, and social media content.
    Users are also able to submit new data points for inclusion. 
    “The prevalence of surveillance technologies in our society provides many challenges related to privacy and freedom of expression, but it’s one thing to know that in theory, and another to see hard data laid out on a map,” Reynolds School Professor and Director of the Center for Advanced Media Studies Gi Yun commented.
    Update 17.24 BST: EFF told ZDNet that there are no plans yet to expand the map, “since we’ve only done a fraction of the US and other countries may require other methodologies.” However, the rights group is open to the idea in the future.


  •

    Remote workers in Singapore aware of security rules, but still break them anyway

    More than half of workers in Singapore keep their company’s cybersecurity policies in mind while working remotely amidst the COVID-19 pandemic, but many still break the rules anyway. Some 38% admit to connecting to public Wi-Fi networks without using their corporate VPN (virtual private network) application, while 37% have uploaded corporate data to non-work applications.
    Some 59% of employees working remotely in the country said they were “more conscious” of their organisation’s security policies when Singapore adopted strict safe distancing rules during its “circuit breaker” period. However, many still broke the rules anyway due to limited understanding or resource constraints, according to a Trend Micro survey, which polled 502 respondents in Singapore. The study was part of a global report to assess remote workers’ treatment of corporate cybersecurity and IT policies. 

    The Singapore findings revealed that 89% of remote workers said they took instructions from their IT team seriously and 87% recognised their role in keeping their organisation secure. Another 71% were aware using non-work applications on a corporate device posed a security risk. 
    Such awareness, however, failed to materialise into actual behaviour. For instance, 39% said they often or always accessed corporate data using a non-work device. Another 16% were likely to click on a link offering free services, such as extra cloud storage and speedier internet connectivity, even when these were from unknown email addresses.
    Some 52% admitted to downloading or using non-work applications on a corporate device, including 35% who did so without prior permission from their IT team. And 37% of Singapore respondents had uploaded corporate data to non-work applications. 

    Trend Micro warned that cybercriminals were looking to exploit such practices to breach enterprises. Phishing attacks, for instance, remained a favourite tool amongst malicious hackers, with the number of such attacks in Singapore climbing from 16,100 in 2018 to 47,500 last year. These stats come from a recent report by the Cyber Security Agency of Singapore, which revealed that cybercrime accounted for 26.8% of all crimes in the nation last year.
    Trend Micro’s Southeast Asia and India vice president, Nilesh Jain, said: “It is encouraging to see a majority of Singaporean employees recognising their role as the human firewall of their company. To close the cyber risk gap, especially caused by people who are either unaware of security policies or even those who think they are above the rules, organisations should not only provide training but take an opportunity to add guardrails and controls while understanding the users’ needs. 
    “Using a combination of both in a positive and easy-to-use fashion will hopefully encourage behavioural change and understanding,” Jain said. 
    Citing Edge Hill University’s cyberpsychology academic Linda K. Kaye, the report noted that there were significant individual differences across the workforce. “This can include individual employee’s values, accountability within their organisation, as well as aspects of their personality, all of which are important factors that drive people’s behaviours,” Kaye said. “To develop more effective cybersecurity training and practices, more attention should be paid to these factors. This, in turn, can help organisations adopt more tailored or bespoke cybersecurity training with their employees, which may be more effective.”

  •

    'Booyaaa': Australian Federal Police use of Clearview AI detailed

    Earlier this year, the Australian Federal Police (AFP) admitted to using a facial recognition tool, despite not having an appropriate legislative framework in place, to help counter child exploitation.
    The tool was Clearview AI, a controversial New York-based startup that has scraped social media networks for people’s photos and created one of the biggest facial recognition databases in the world. It provides facial recognition software, marketed primarily at law enforcement.
    The AFP previously said while it did not adopt the facial recognition platform Clearview AI as an enterprise product and had not entered into any formal procurement arrangements with the company, it did use a trial version.
    Documents published by the AFP under the Freedom of Information Act 1982 confirmed that the AFP-led Australian Centre to Counter Child Exploitation (ACCCE) registered for a free trial of the Clearview AI facial recognition tool and conducted a pilot of the system from 2 November 2019 to 22 January 2020.
    The ACCCE’s Covert Online Engagement (COE) team and the Child Protection Triage Unit (CPTU) used Clearview to attempt to find an offender, without result. The AFP said, however, that staff used the facial recognition tool to check the accuracy and effectiveness of its algorithm.

    “Clearview is like Google Search for faces. Just upload a photo to the app and instantly get results from mug shots, social media, and other publicly available sources,” an email to an AFP staff member from “Team Clearview” says.
    Another email encourages the user to “search a lot” as the account has unlimited searches. Team Clearview tells the user not to stop at one search, rather to “see if you can reach 100 searches” as “it’s a numbers game”.
    The email continues to tell the user to refer their colleagues, as “the more people that search, the more success”.
    This approach from Clearview mirrors the one used on Victoria Police.
    Clearview AI founder, Australian entrepreneur Hoan Ton-That, also reached out directly to one AFP staff member, who in response said, “We’ve only just started using it and so far it has been valuable”.
    The AFP said previously its trial saw nine invitations sent from Clearview AI to AFP officers to register for a free trial, with seven officers activating the trial and conducting searches.
    Documents show an AFP officer telling colleagues that they ran someone’s mugshot through the Clearview system and “got a hit from his Instagram account”.
    Responses included “Nice work” and “Booyaaa! Luv it!”.
    Further email exchanges between the AFP’s staff reveal that one staff member was aware the tool was potentially not approved for use. In response, one staff member said she was running the app off her personal phone, while another asked if any concerns had been raised by the team responsible for infosec.
    With Clearview AI in February suffering a data breach that exposed its customer list, the number of accounts each customer has, and the number of searches those customers have made, the AFP sent an email asking those in receipt of the memo to change their AFPNET password immediately.
    After media reports emerged that the AFP was using the software, one staff member suggested that they cease using it “since everyone is raising the issue of approval”.
    Last week, the UK Information Commissioner’s Office and Office of the Australian Information Commissioner (OAIC) announced they would be teaming up to conduct a joint investigation into Clearview AI.
    Prior to this, the OAIC in April asked questions of the company and issued a notice to produce under section 44 of the Australian Privacy Act. The OAIC also reached out to the AFP in May, asking the agency what it used the platform for and directed the AFP to cease use of the product.
    Following the Clearview ban, staff emails between those working in the agency’s Victim Identification Team indicate that not having access to the tool makes “things difficult”.

  •

    RECON bug lets hackers create admin accounts on SAP servers

    Business giant SAP released a patch today for a major vulnerability that impacts the vast majority of its customers. The bug, codenamed RECON, exposes companies to easy hacks, according to cloud security firm Onapsis, which discovered the vulnerability in May this year and reported it to SAP to be patched.
    Onapsis says RECON allows malicious threat actors to create an SAP user account with maximum privileges on SAP applications exposed on the internet, granting attackers full control over the hacked companies’ SAP resources.
    Bug impacts many major SAP apps
    The vulnerability is easy to exploit and resides in a default component included in every SAP application running the SAP NetWeaver Java technology stack — namely in the LM Configuration Wizard component part of the SAP NetWeaver Application Server (AS).
    The component is used in some of SAP’s most popular products, including SAP S/4HANA, SAP SCM, SAP CRM, SAP Enterprise Portal, and SAP Solution Manager (SolMan).
    Other SAP applications running the SAP NetWeaver Java technology stack are also impacted. Onapsis estimates the number of affected companies at around 40,000 SAP customers; however, not all of them expose the vulnerable application directly on the internet.

    Onapsis says a scan they carried out discovered around 2,500 SAP systems directly exposed to the internet that are currently vulnerable to the RECON bug.
    A “severity 10” bug
    The urgency around applying this patch is warranted. Onapsis said the RECON bug is one of those rare vulnerabilities that received a maximum 10 out of 10 rating on the CVSSv3 vulnerability severity scale.
    The 10 score means the bug is easy to exploit, as it requires no technical knowledge; it can be automated for remote attacks over the internet; and it doesn’t require the attacker to already have an account on an SAP app or valid credentials.
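To see why the score lands at exactly 10, the CVSS v3.1 base-score formula can be applied to the vector reported for RECON (AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:H/A:H – network attack, low complexity, no privileges or user interaction, changed scope, high impact across the board). The sketch below hard-codes that vector’s metric weights:

```python
import math

# Metric weights for the vector reported for RECON (CVE-2020-6287):
# AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:H/A:H
AV, AC, PR, UI = 0.85, 0.77, 0.85, 0.85  # Network / Low / None / None
C = I = A = 0.56                          # Confidentiality/Integrity/Availability: High
SCOPE_CHANGED = True                      # S:C

def roundup(x):
    """CVSS v3.1 'Roundup': smallest value to one decimal place >= x."""
    return math.ceil(x * 10) / 10

iss = 1 - (1 - C) * (1 - I) * (1 - A)  # Impact Sub-Score
if SCOPE_CHANGED:
    impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
else:
    impact = 6.42 * iss
exploitability = 8.22 * AV * AC * PR * UI

if impact <= 0:
    base = 0.0
elif SCOPE_CHANGED:
    base = roundup(min(1.08 * (impact + exploitability), 10))
else:
    base = roundup(min(impact + exploitability, 10))

print(base)  # 10.0
```

The sum of the impact and exploitability sub-scores, scaled by 1.08 for the changed scope, overshoots 10 and is capped at the maximum.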
    Coincidentally, this is the third major CVSS 10/10 bug disclosed in the last few weeks. Similar critical bugs were also disclosed in PAN-OS (the operating system for Palo Alto Networks firewalls and VPN devices) and in F5’s BIG-IP traffic shaping server (one of the most popular networking devices today).
    Furthermore, it’s also been a rough patch for the enterprise sector, with similarly bad vulnerabilities disclosed in Oracle, Citrix, and Juniper devices – all bugs with high severity ratings that are easy to exploit.
    Many of these vulnerabilities, such as the PAN-OS, F5, and Citrix bugs, are already being exploited by hackers.
    Administrators of SAP systems are advised to apply SAP’s patches as soon as possible, as Onapsis warned that the bug could let hackers take full control of a company’s SAP applications and then steal proprietary technology and user data from internal systems.
    SAP patches will be listed and available on the company’s security portal in the next few hours.
    The Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (DHS CISA) has also issued a security alert today urging companies to deploy the patches as soon as possible.
    RECON is also tracked as CVE-2020-6287.

  •

    A hacker is selling details of 142 million MGM hotel guests on the dark web

    The MGM Resorts 2019 data breach is much larger than initially reported, and is now believed to have impacted more than 142 million hotel guests, and not just the 10.6 million that ZDNet initially reported back in February 2020.
    The new finding came to light over the weekend after a hacker put up for sale the hotel’s data in an ad published on a dark web cybercrime marketplace.
    According to the ad, the hacker is selling the details of 142,479,937 MGM hotel guests for a price just over $2,900.

    The hacker claims to have obtained the hotel’s data after they breached DataViper, a data leak monitoring service operated by Night Lion Security.
    Vinny Troia, founder of Night Lion Security, told ZDNet in a phone call that his company never owned a copy of the full MGM database and that the hackers are merely trying to ruin his company’s reputation.
    MGM says it notified all impacted users

    Reached for comment on Sunday, MGM Resorts issued a statement claiming it was aware of the scope of the breach.
    The MGM breach occurred in the summer of 2019 when a hacker gained access to one of the hotel’s cloud servers and stole information on the hotel’s past guests.
    MGM learned of the incident last year and never made the security breach public, but it did notify impacted customers in accordance with local data breach notification laws.
    The security breach came to light in February 2020 after a batch of 10.6 million MGM hotel guests’ data was offered as a free download on a hacking forum. At the time, MGM admitted to suffering a security breach, but the company didn’t disclose the full breadth of the intrusion.
    “MGM Resorts was aware of the scope of this previously reported incident from last summer and has already addressed the situation,” an MGM spokesperson told ZDNet in an email today, referring to the company’s efforts to notify impacted users.
    An MGM spokesperson also pointed out that “the vast majority of data consisted of contact information like names, postal addresses, and email addresses.”
    Financial information, ID or Social Security numbers, and reservation (hotel stay) details were not included, MGM said in February, which ZDNet is able to confirm after reviewing two different batches of MGM data — the 10.6 million user records leaked in February and a newer 20 million batch shared by the hackers on Sunday.
    Dates of birth and phone numbers were also included, which is how we were able to confirm the breach in the first place, by contacting past hotel guests.
    Bigger than 142 million?
    However, the MGM breach could be even bigger than the 142 million count we have today.
    Irina Nesterovsky, Head of Research at threat intel firm KELA, told ZDNet back in February that the MGM data had been circulating and was being sold in private hacking circles since at least July 2019.
    Posts on Russian-speaking hacking forums promoted the MGM data breach as containing details on more than 200 million hotel guests.

    Article updated shortly after publication to clarify language.