More stories

  • Privacy predictions for Europe in 2022

    Here are some of Forrester’s most important predictions that will impact European privacy leaders’ planning for 2022.

    Employee backlash will grow as more employers monitor productivity

    In October 2020, almost one in three European employees said that their employers used software to monitor their productivity while working from home. Today, as companies launch new flexible work policies, software that allows employers to monitor employees’ productivity is gaining popularity worldwide. Companies that choose to deploy this technology today must prepare to manage the consequences over the next 12 months.

    Privacy regulators are already acting, and more action will happen in 2022

    According to the General Data Protection Regulation enforcement tracker, fines and penalties for violations of an employee’s privacy are in the top five by total value. Across the top 10 single highest fines issued so far, violations of an employee’s privacy account for two. Regulators are investigating a variety of employee surveillance methods. In the case of retailer H&M, the regulator found that the employer systematically built and kept excessive and overly exposed records concerning employees’ personal and professional lives. In the case of notebooksbilliger.de, the regulator concluded that the company recorded videos of its employees for an extended period of time without the appropriate legal basis. In the case of IKEA Retail France, the company’s former CEO was served with a suspended two-year prison sentence as part of the investigation against the brand for excessive and unlawful staff surveillance and data collection. Tattleware has become the newest method of employee monitoring. Regulators, take note.

    Employees will increasingly feel mistrusted and concerned

    Employee backlash will grow as employers attempt to monitor how often employees click, what they click on, and when they are facing their computers. Underestimating employees’ attitudes toward privacy is a mistake. When it comes to sharing their personal data, our research shows that over 40% of employees across the UK, France, and Germany are comfortable sharing with their employer only the minimum required by law. The same share of French employees worry that their employer is collecting too much of their personal information. And a staggering 57% of French, 46% of UK, and 44% of German employees wish that they had a higher degree of privacy protection in the workplace. Finally, employees whose employer breached their trust described their feelings as “betrayed” and “upset.”

    Tattleware adoption will degrade the employee experience, productivity, and security

    Feelings of betrayal and mistrust will have a negative impact on employees’ loyalty, engagement, and experience. While this is an enormous risk, it is not the only one organizations face. Without adequate communication and transparent approaches, negative employee sentiment might also extend to other forms of workforce monitoring that have nothing to do with tattleware, such as insider threat programs. These programs, typically run by security teams to prevent exfiltration of sensitive data that often happens because of well-intentioned employees’ mistakes, will become more difficult to justify and adopt. Forrester predicts that, in response to increased regulatory scrutiny and more intense employee backlash against workforce monitoring, CISOs will reduce the scope of their insider threat programs, with adverse results: this will increase the company’s risk of insiders stealing data.

    Privacy, security, and employee experience professionals must act now to prevent business damage

    Privacy execs, CISOs, HR, and CIOs must join forces to ensure their workforce monitoring programs don’t damage their organization or their workforce’s productivity and engagement. They must strengthen the governance of their workforce monitoring activities, making sure they put in place clear and transparent communication with their employees, choose approaches that are never excessive or disproportionate, and ensure that they have the adequate legal basis in place before deploying any workforce monitoring technology. They must also work to educate their organization about the benefits of the program and ensure that employees understand the boundaries in place that prevent the disproportionate collection, processing, and sharing of employees’ personal data.

    To understand all the major dynamics that will impact European businesses next year, visit our Predictions 2022 hub. This post was written by Principal Analyst Enza Iannopollo and it originally appeared here.

  • Almost half of rootkits are used for cyberattacks against government organizations

    Research into how rootkits are used by cybercriminals has revealed that close to half of campaigns are focused on compromising government systems. 

    On Wednesday, Positive Technologies released a report on the evolution and application of rootkits in cyberattacks, noting that 77% of rootkits are utilized for cyberespionage. Rootkits are used to obtain privileges in an infected system, operating either at the kernel level or in user mode, the level used by ordinary software applications; some rootkits combine both capabilities. Once a rootkit has hooked into a machine, it may be used to hijack a PC, intercept system calls, and replace software and processes. It may also ship as part of a wider exploit kit containing other modules such as keyloggers, data-theft malware, and cryptocurrency miners, with the rootkit serving to disguise the malicious activity. However, rootkits are difficult to develop, demanding both time and money, and as a result the majority of rootkit-based attacks are linked to advanced persistent threat (APT) groups that have the resources and skill to develop this form of malware.

    The researchers’ analysis sample was made up of 16 malware types: 38% kernel-mode rootkits, 31% user-mode, and 31% combining both. The majority of rootkits in use today are designed to attack Windows systems. According to Positive Technologies, there is a general trend toward user-mode rootkits in the exploit industry due to the difficulty of creating kernel-mode variants, yet despite improvements in rootkit defenses on modern machines, rootkits are often still successful in cyberattacks.
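    The report focuses on how rootkits hide; as a purely illustrative counterpoint (not something taken from Positive Technologies’ research), here is a minimal Python sketch of one classic Linux detection heuristic. A loadable kernel module that unlinks itself from the kernel’s module list disappears from /proc/modules but often still leaves a directory under /sys/module, so comparing the two views can surface a discrepancy worth investigating.

    ```python
    """Minimal sketch of a kernel-module discrepancy check on Linux.
    Heuristic only: loaded modules expose /sys/module/<name>/initstate, while
    built-in modules do not, so a 'loaded' sysfs entry missing from
    /proc/modules deserves a closer look. Not a replacement for real
    anti-rootkit or EDR tooling."""

    from pathlib import Path


    def proc_modules() -> set[str]:
        # The first column of each line in /proc/modules is the module name.
        text = Path("/proc/modules").read_text()
        return {line.split()[0] for line in text.splitlines() if line.strip()}


    def sysfs_loaded_modules() -> set[str]:
        # Only count sysfs entries that look like dynamically loaded modules.
        return {
            p.name
            for p in Path("/sys/module").iterdir()
            if (p / "initstate").exists()
        }


    if __name__ == "__main__":
        suspicious = sysfs_loaded_modules() - proc_modules()
        if suspicious:
            print("In /sys/module but hidden from /proc/modules:", sorted(suspicious))
        else:
            print("No discrepancy between /sys/module and /proc/modules.")
    ```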

    “It takes a lot of time to develop or modify such a rootkit, and this can make working to time constraints difficult; you must be quick to exploit a vulnerability in a company’s perimeter before it is noticed and security updates are installed, or another group takes advantage of it,” Positive Technologies says. “Because of this attackers are used to acting quickly: it can take less than a day from the moment the exploit is identified to the first attempts to make use of it, and if a group does not have a reliable, ready-to-use tool, this time is clearly not enough to work on it.”

    In addition, the team says that any errors in the coding of a kernel-mode rootkit can lead to a machine’s destruction and permanent corruption, so if a financial demand is being made — for example, by ransomware operators — the resulting damage would stop extortion attempts from succeeding. In 44% of cases documented since 2011, rootkits have been used to strike government agencies worldwide, followed by research and academic institutions in 38% of known campaigns. Positive Technologies suggests that when rootkits are in play, their cost and development time imply a high-value target; in the majority of cases, the aim is data theft, although the goal is sometimes purely financial. Beyond government and research, rootkits are most often tracked to attacks against telecommunications companies, the manufacturing sector, and banks or financial services. Rootkits may also be employed in targeted attacks against individuals, said to be “high-ranking officials, diplomats, and employees of victim organizations,” according to the researchers. Commercially available rootkits often fetch between $45,000 and $100,000, depending on the target operating system, terms of subscription, and features.

    “Despite the difficulties of developing such programs, every year we see the emergence of new versions of rootkits with a different operating mechanism to that of known malware,” commented Alexey Vishnyakov, Head of Malware Detection at the Positive Technologies Expert Security Center (PT ESC). “This indicates that cybercriminals are still developing tools to disguise malicious activity and coming up with new techniques for bypassing security — a new version of Windows appears, and malware developers immediately create rootkits for it. We expect rootkits to carry on being used by well-organized APT groups.”

  • SpotCam Video Doorbell 2: A Ring-killer this is not

    The SpotCam is very similar to the Ring, but there are some differences. Overall, it’s not quite as good. There are some nice options, but does that make for a recommended purchase? Well…

    Like all of the other competitors in the space, the SpotCam sends video to your phone when your doorbell is rung. My colleague Dale Smith over at our sister site, CNET, found that SpotCam notifications sometimes didn’t come through. I had better results than Dale with my notifications. That’s probably because I immediately turned off the motion notification settings, since we live on a relatively high-traffic street. If I were to get a notification every time a car went by, I’d get nothing done.

    This is a good news, bad news kind of product. The good news is that the box includes a ringer that you can plug into any outlet in your house. The ringer is not an add-on purchase. The bad news is that the SpotCam has spotty performance as a video doorbell. Here’s an example. I work upstairs. When someone rings the doorbell, I can hear the plug-in chime. But my phone doesn’t know there’s anyone at the door for about one-one-thousand, two-one-thousand, three-one-thousand, four-one-thousand, five-one-thousand, six-one-thou… now. That delay is annoying, but not a deal killer, especially since I can hear the downstairs chime.

    The big disappointment is the video part of the doorbell, as well as the response back to the person ringing the bell. More often than not, the person ringing the bell would leave before communication was established. It was that slow. It’s not my Wi-Fi. I have a couple of Wi-Fi-based cameras that track almost directly with real-time events (like, I hear a car door being shut and, a blink of the eye later, I see it on the camera). Not so with the SpotCam. The delay makes the video essentially worthless. I’m not alone in this observation. User comments and other reviewers have said the same thing.

    The video image itself is reasonable at 1080p. Not great, but not too bad. I do like that it has an SD card that stores recordings, but the SD card is built into the unit itself, so if someone steals the unit, they’re also stealing your recordings. I also like that you can see who’s at the door through an Echo Show, if you have one. But… you can’t talk to anyone at your door through your Echo Show. It seems like such a missed opportunity. There is a 7-day cloud service, which is free, so that’s nice. And I like that it not only runs off of bell power and battery, but also has a traditional AC adapter. Not all walls have old-school bell wiring built into them.

    But is this worth buying? The gotcha is that it’s about the same price as its competitors, and it’s not quite as good as they are. If it were a lot cheaper, I’d say try it out. But since it’s in the same pricing class, I can’t give you a really compelling reason to choose the SpotCam 2 over all the other players in this increasingly crowded field.

    Are you using a video doorbell? What do you like about it? What model are you using? Talk to us in the comments below. You can follow my day-to-day project updates on social media. Be sure to follow me on Twitter at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, and on YouTube at YouTube.com/DavidGewirtzTV.

  • Medical school exposes personal data of thousands of students

    A US medical training school exposed the personally identifiable information (PII) of thousands of students. 

    On Wednesday, vpnMentor published a report on the security incident, in which an unsecured bucket was left exposed online. The server, which had no authentication controls in place and was therefore viewable by anyone, contained 157GB of data, or just under an estimated 200,000 files. After discovering the open system, the researchers identified the owner as Phlebotomy Training Specialists. The LA-based organization offers phlebotomy certification and courses in states including Arizona, Michigan, Texas, Utah, and California. According to vpnMentor, the records contained within were backed up from September 2020, but some were created before this time. The unsecured Amazon S3 bucket contained a variety of PII, including ID card and driver’s license copies, as well as CVs revealing names, dates of birth, genders, photos of students, home addresses, phone numbers, email addresses, and both professional and educational summaries. In addition, over 27,000 tracking forms were found that in some cases contained the last four digits of Social Security numbers, as well as student transcripts and training certificate scans.
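
    The report describes a storage bucket with no authentication controls. As a hedged, general-purpose sketch (assuming boto3 with AWS credentials that hold the relevant S3 permissions, and using a hypothetical bucket name rather than anything tied to this incident), this is how a bucket owner can check for and enforce S3 Block Public Access:

    ```python
    """Minimal sketch: verify and enforce S3 Block Public Access with boto3.
    The bucket name is hypothetical; this is not tied to the incident above."""

    import boto3
    from botocore.exceptions import ClientError

    BUCKET = "example-training-records"  # hypothetical bucket name

    s3 = boto3.client("s3")

    try:
        conf = s3.get_public_access_block(Bucket=BUCKET)["PublicAccessBlockConfiguration"]
        print("Current public access block:", conf)
    except ClientError as err:
        # No configuration at all means the bucket relies solely on ACLs/policies.
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            print("No public access block configured for", BUCKET)
        else:
            raise

    # Enforce all four block-public-access settings on the bucket.
    s3.put_public_access_block(
        Bucket=BUCKET,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )
    print("Block Public Access enforced on", BUCKET)
    ```

    Account-wide enforcement via the s3control API is usually the safer default, since it also covers buckets created later.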

    vpnMentor’s team, led by Noam Rotem and Ran Locar, estimates that between 27,000 and 50,000 people, including course applicants and attendees, were impacted. The researchers informed Phlebotomy Training Specialists of their findings on September 7, three days after the S3 bucket’s discovery. Further attempts at contact were made, but there was no response. The team then attempted to contact Amazon before reaching out to US-CERT on September 20. The researchers told ZDNet that two buckets were eventually found, one of which has been closed, but the other remains open. ZDNet has reached out to Phlebotomy Training Specialists for comment and we will update when we hear back.

  • Google signs deal with US Air Force, announces FedRAMP High and IL4 authorizations

    Google has signed a new deal with the US Air Force Research Laboratory (AFRL) that will see scientists and engineers there use Google Workspace. The US Air Force Research Laboratory supports both the US Air Force and the US Space Force while providing new technologies for the US military. According to Google, the lab focuses on everything from laser-guided optics that enable telescopes to see deeper into the universe to fundamental science that helped create innovations in quantum computing and artificial intelligence. The lab will now use Smart Canvas, Google Meet, and Google Cloud technology in its work.

    “COVID-19 significantly limited the physical presence of researchers in the lab,” said Dr. Joshua Kennedy, a research physicist at AFRL. “Google Workspace eliminated what would have otherwise been almost a total work stoppage. In fact, new insights into 2D nanomaterials, critical to future Department of the Air Force capabilities, were discovered using Workspace that would have otherwise been impossible.”

    Maj. Gen. Heather Pringle added that the move was part of her efforts to modernize the technology used by AFRL. She said the lab started experimenting with Google Workspace to supplement existing capabilities, noting that it has “revolutionized” its ability to collaborate with external partners. “Our mantra is ‘collaborate to innovate.’ We want our alpha nerds to be very connected, and we really want to up their proficiency as a digital workforce where data becomes a third language,” Pringle said. “We’re incorporating digital engineering into everything we do in science and technology and have a data-informed human capital strategy.”

    Alongside the news of the US Air Force deal, Google Cloud vice president Mike Daniels announced that Google Workspace has achieved FedRAMP High authorization as well as IL4 authorization from the Defense Information Systems Agency (DISA), meaning the company will be able to collaborate more closely with the US military.

    “Expanding our list of compliance certifications and adding security and compliance resources is a critical part of Google Cloud’s mission to deliver agile, open architectures, unified data and analytics, and leading security solutions — along with productivity tools that support an increasingly hybrid workforce,” Daniels said in a blog post, explaining that in the US, FedRAMP and NIST frameworks “set the bar for the security of society’s most vital systems.”

    “The weight of this responsibility is reflected in the high bar that must be met to receive FedRAMP High authorization. This is a major milestone in our longstanding commitment to serving the needs of the public sector and to making the world a safer place for everyone.”

    Daniels added that with the certifications, the US federal government can now deploy Google Workspace within a variety of projects. “With FedRAMP High authorization across Workspace’s public cloud offering, any customer can rest assured that they are collaborating at this high level of security, without having to purchase and deploy a separate ‘gov cloud’ instance. It also means they can operate seamlessly with relevant government agencies without additional overhead,” Daniels explained. “Another key security standard at the federal level is the Impact Level 4 (IL4) designation, which applies to controlled unclassified information (CUI). Today, we’re proud to announce that Google has earned IL4 authorization from the Defense Information Systems Agency (DISA), allowing CUI to be stored and processed across key Google Cloud services, including our compute, storage and networking offerings, data analytics, virtual private cloud, and identity and access management technologies, when used with Assured Workloads.”

    In April, the technology giant announced that four other products had also received FedRAMP High authorization: Google’s Admin Console, Cloud Identity, Identity and Access Management, and the Virtual Private Cloud tools. Daniels noted that the configuration is supported in all seven US regions and “ensures IL4 workloads are supported by US personnel while being stored and processed in the United States.”

    “Our new IL4 and FedRAMP authorizations join other Google Cloud data privacy and security features that allow customers to comply with the FBI’s Criminal Justice Information Services (CJIS) standard and the IRS’ Publication 1075 (IRS 1075),” Daniels said. “While these are exciting developments for us, we are most excited about what it means for our public sector customers, who are working hard to achieve their missions and can now use cloud-first solutions to deliver on their mandates.”

  • Revealed: The 10 worst hardware security flaws in 2021

    MITRE, which publishes a list of top software vulnerabilities in conjunction with the US Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA), has now published a list of the most important hardware weaknesses, too. MITRE maintains the Common Weakness Enumeration (CWE) for software flaws, but this year it ran a survey to create its first-ever equivalent list for hardware flaws.

    The 2021 Hardware List aims to boost awareness of common hardware flaws and to prevent hardware security issues by educating designers and programmers on how to eliminate important mistakes early in the product development lifecycle.

    SEE: Gartner releases its 2021 emerging tech hype cycle: Here’s what’s in and headed out

    “Security analysts and test engineers can use the list in preparing plans for security testing and evaluation. Hardware consumers could use the list to help them to ask for more secure hardware products from their suppliers. Finally, managers and CIOs can use the list as a measuring stick of progress in their efforts to secure their hardware and ascertain where to direct resources to develop security tools or automation processes that mitigate a wide class of vulnerabilities by eliminating the underlying root cause,” MITRE said.

    The list was determined by a survey of the CWE Team and members of the hardware special interest group. The list, which isn’t in any particular order, includes bugs that affect a range of devices, including smartphones, Wi-Fi routers, PC chips, and cryptographic protocols for protecting secrets in hardware, along with flaws in protected memory areas, Rowhammer-style bit-flipping bugs, and firmware update failures.

    The hardware weaknesses list is meant to serve as “authoritative guidance for mitigating and avoiding them” and is a companion to MITRE’s annual list of the 25 most dangerous software weaknesses.

    One entry submitted by Intel engineers, CWE-1231, concerns “improper prevention of lock bit modification,” a flaw that can be introduced during the design of integrated circuits.

    SEE: Cloud security in 2021: A business guide to essential tools and best practices

    “In integrated circuits and hardware intellectual property (IP) cores, device configuration controls are commonly programmed after a device power reset by a trusted firmware or software module (e.g., BIOS/bootloader) and then locked from any further modification,” MITRE notes. “This behavior is commonly implemented using a trusted lock bit. When set, the lock bit disables writes to a protected set of registers or address regions. Design or coding errors in the implementation of the lock bit protection feature may allow the lock bit to be modified or cleared by software after it has been set. Attackers might be able to unlock the system and features that the bit is intended to protect.”

    The entries also include past examples of each type of flaw, such as CVE-2017-6283, which affected the NVIDIA Security Engine. It contained a “vulnerability in the RSA function where the keyslot read/write lock permissions are cleared on a chip reset, which may lead to information disclosure.” A toy model of this lock-bit flaw class is sketched after the list below.

    The full 2021 hardware weaknesses list:

    CWE-1189: Improper Isolation of Shared Resources on System-on-a-Chip (SoC)
    CWE-1191: On-Chip Debug and Test Interface With Improper Access Control
    CWE-1231: Improper Prevention of Lock Bit Modification
    CWE-1233: Security-Sensitive Hardware Controls with Missing Lock Bit Protection
    CWE-1240: Use of a Cryptographic Primitive with a Risky Implementation
    CWE-1244: Internal Asset Exposed to Unsafe Debug Access Level or State
    CWE-1256: Improper Restriction of Software Interfaces to Hardware Features
    CWE-1260: Improper Handling of Overlap Between Protected Memory Ranges
    CWE-1272: Sensitive Information Uncleared Before Debug/Power State Transition
    CWE-1274: Improper Access Control for Volatile Memory Containing Boot Code
    CWE-1277: Firmware Not Updateable
    CWE-1300: Improper Protection of Physical Side Channels
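
    MITRE’s entry describes the flaw class abstractly. As a toy illustration only (a Python model of the idea, not real hardware and not anything from MITRE’s entry), the sketch below contrasts a register block that honors its lock bit with a buggy design that exposes the lock register through the same writable window, letting software clear the lock after trusted firmware has set it.

    ```python
    """Toy model of CWE-1231 (Improper Prevention of Lock Bit Modification).
    Purely illustrative: real lock bits live in silicon, not Python."""

    class ConfigRegisterBlock:
        def __init__(self):
            self.lock_bit = False      # set once by trusted firmware after boot
            self.protected_reg = 0x00  # e.g. a security-sensitive range register

        def set_lock(self):
            self.lock_bit = True

        def write_protected_correct(self, value: int):
            # Correct behavior: once locked, writes are dropped (or faulted).
            if self.lock_bit:
                return
            self.protected_reg = value

        def write_protected_buggy(self, addr_offset: int, value: int):
            # Flawed design: the lock register sits in the same writable window,
            # so software can clear the lock and then rewrite the protected
            # register - the mistake CWE-1231 describes.
            if addr_offset == 0x0:      # lock register aliased into the window
                self.lock_bit = bool(value & 0x1)
            elif not self.lock_bit:
                self.protected_reg = value

    block = ConfigRegisterBlock()
    block.write_protected_correct(0x42)
    block.set_lock()
    block.write_protected_correct(0xFF)    # ignored: lock honored
    print(hex(block.protected_reg))        # 0x42

    block.write_protected_buggy(0x0, 0x0)  # attacker clears the lock bit...
    block.write_protected_buggy(0x4, 0xFF) # ...then rewrites the protected register
    print(hex(block.protected_reg))        # 0xff
    ```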

  • Arrests were made, but the Mekotio Trojan lives on

    Despite the arrest of individuals connected with the spread of the Mekotio banking Trojan, the malware continues to be used in new attacks. 

    On Wednesday, Check Point Research (CPR) published an analysis of Mekotio, a modular banking Remote Access Trojan (RAT) that targets victims in Brazil, Chile, Mexico, Spain, and Peru — and is now back with new tactics for avoiding detection. In October, law enforcement made 16 arrests across Spain in relation to the Mekotio and Grandoreiro Trojans. The suspects allegedly sent thousands of phishing emails to distribute the Trojan, which was then used to steal banking and financial service credentials. Local media reports suggest that 276,470 euros were stolen, but transfer attempts — thankfully blocked — worth 3,500,000 euros were made.

    CPR researchers Arie Olshtein and Abedalla Hadra say that the arrests only managed to disrupt distribution across Spain, and as the group likely collaborated with other criminal outfits, the malware continues to spread. Once the Spanish Civil Guard announced the arrests, Mekotio’s developers, suspected of being located in Brazil, rapidly reworked their malware with new features designed to avoid detection. Mekotio’s infection vector has stayed the same: phishing emails either link to or attach a malicious .ZIP archive containing the payload. However, an analysis of over 100 attacks taking place in recent months has revealed the use of a simple obfuscation method and a substitution cipher to circumvent detection by antivirus products.
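
    CPR does not publish the cipher’s actual mapping, so purely as a hedged illustration of the general technique (the alphabet, key, and sample string below are invented), here is a short Python sketch showing why a simple substitution cipher defeats naive string-matching signatures, and how an analyst can reverse it once the mapping is recovered.

    ```python
    """Illustration only: a toy substitution cipher of the kind CPR describes,
    shown from an analyst's point of view. The key and sample indicator below
    are invented; Mekotio's actual mapping is not reproduced here."""

    import string

    PLAIN = string.ascii_lowercase + string.digits + "./:-"
    # A fixed shuffled alphabet stands in for the malware's key.
    KEY   = "mnbvcxzasdfghjklpoiuytrewq5678901234.:/-"

    ENCODE = str.maketrans(PLAIN, KEY)
    DECODE = str.maketrans(KEY, PLAIN)

    def obfuscate(s: str) -> str:
        return s.lower().translate(ENCODE)

    def deobfuscate(s: str) -> str:
        return s.translate(DECODE)

    sample = "example-c2.test/payload.zip"   # hypothetical indicator string
    hidden = obfuscate(sample)
    print(hidden)                # no longer matches a plain-string signature
    print(deobfuscate(hidden))   # recovered once the mapping is known
    ```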

    In addition, the developers have included a batch file redesigned with multiple layers of obfuscation, a new PowerShell script that runs in memory to perform malicious actions, and the use of Themida — a legitimate application to prevent cracking or reverse engineering — to protect the final Trojan payload. Once installed on a vulnerable machine, Mekotio will attempt to exfiltrate access credentials for banks and financial services and will transfer them to a command-and-control (C2) server controlled by its operators.

    “One of the characteristics of those bankers, such as Mekotio, is the modular attack which gives the attackers the ability to change only a small part of the whole in order to avoid detection,” the researchers say. “CPR sees a lot of old malicious code used for a long time, and yet the attacks manage to stay under the radar of AVs and EDR solutions by changing packers or obfuscation techniques such as a substitution cipher.”

  • Clearview AI slammed for breaching Australians’ privacy on numerous fronts

    Australia’s Information Commissioner has found that Clearview AI breached Australia’s privacy laws on numerous fronts, after a bilateral investigation uncovered that the company’s facial recognition tool collected Australians’ sensitive information without consent and by unfair means. The investigation, conducted by the Office of the Australian Information Commissioner (OAIC) and the UK Information Commissioner’s Office (ICO), found that Clearview AI’s facial recognition tool scraped biometric information from the web indiscriminately and has collected data on at least 3 billion people. The OAIC also found that some Australian police agency users, who were Australian residents and trialled the tool, searched for and identified images of themselves as well as images of unknown Australian persons of interest in Clearview AI’s database.

    Considering these factors together, Australia’s Information Commissioner Angelene Falk concluded that Clearview AI breached Australia’s privacy laws by collecting Australians’ sensitive information without consent and by unfair means. In her determination [PDF], Falk explained that consent had not been provided, even though facial images of affected Australians are already available online, because Clearview AI’s intent in collecting this biometric data was ambiguous. “I consider that the act of uploading an image to a social media site does not unambiguously indicate agreement to collection of that image by an unknown third party for commercial purposes,” the Information Commissioner wrote. “Consent also cannot be implied if individuals are not adequately informed about the implications of providing or withholding consent. This includes ensuring that an individual is properly and clearly informed about how their personal information will be handled, so they can decide whether to give consent.”

    Read more: ‘Booyaaa’: Australian Federal Police use of Clearview AI detailed

    Other breaches of Australia’s privacy laws found by Falk were that Clearview AI failed to take reasonable steps to either notify individuals of the collection of personal information or ensure that the personal information it disclosed was accurate. She also slammed the company for not taking reasonable steps to implement practices, procedures, and systems to ensure compliance with the Australian Privacy Principles. These breaches were due to Clearview AI removing access to an online form that let Australians opt out of being searchable on the company’s facial recognition platform. The form itself also raised privacy issues, as it required Australians to submit a valid email address and an image of themselves, which would then be converted into an image vector; Falk said this allowed Clearview AI to collect additional information about Australians. The form was created at the start of 2020, but now Australians can only make opt-out requests via email, Falk said.

    After making these findings, Falk ordered Clearview AI to destroy the existing biometric information it has collected from Australia. She also ordered the company to cease collecting facial images and biometric templates from individuals in Australia. “The covert collection of this kind of sensitive information is unreasonably intrusive and unfair,” Falk said. “It carries significant risk of harm to individuals, including vulnerable groups such as children and victims of crime, whose images can be searched on Clearview AI’s database.”

    Despite the investigation being finalised, the exact number of affected Australians is unknown. Falk expressed concern that the number was likely to be very large, given that it may include any Australian whose facial images are publicly accessible on the internet.

    Providing an update on another Clearview AI-related investigation, Falk said she was in the process of finalising a separate investigation into the Australian Federal Police (AFP) trialling Clearview AI’s facial recognition tool. In April last year, the AFP admitted to trialling the Clearview AI platform from October 2019 to March 2020. State police from Victoria and Queensland also trialled the tool, with all three law enforcement agencies admitting to successfully conducting searches using facial images of individuals located in Australia. Falk said she would soon provide a determination on whether the AFP breached the Australian Government Agencies Privacy Code, which requires agencies to assess and mitigate privacy risks.