More stories

  • US offers $10 million reward for hackers meddling in US elections

    Image: Department of State

    The US Department of State announced today rewards of up to $10 million for any information leading to the identification of any person who works with or for a foreign government for the purpose of interfering with US elections through “illegal cyber activities.”
    This includes attacks against US election officials, US election infrastructure, and voting machines, as well as against candidates and their staff.
    The announcement comes less than 100 days before the 2020 US Presidential Election, which will see incumbent Donald Trump face off against Democratic candidate Joe Biden.
    Nevertheless, the Department of State said the reward is valid for any form of election hacking at any level, whether federal, state, or local.
    “Foreign adversaries could employ malicious cyber operations targeting election infrastructure, including voter registration databases and voting machines, to impair an election in the United States,” the State Department said today, describing the attacks it fears and wants to stop.
    “Such adversaries could also conduct malicious cyber operations against U.S. political organizations or campaigns to steal confidential information and then leak that information as part of influence operations to undermine political organizations or candidates.”
    The intent is to catch and prosecute foreign state-sponsored hackers, the Department of State said, describing their ability to meddle in US elections as “an unusual and extraordinary threat to the national security and foreign policy of the United States.”
    Third reward offered by the US for foreign hackers this year
    The reward will be paid through the Department of State’s “Rewards for Justice” program, and it applies only to information about the activities of hackers associated with foreign governments that may try to meddle in the US election process — and not just any hackers.
    This is the third major reward offered for information on hackers through the Rewards for Justice program. In April, State officials offered a $5 million reward for information leading to the identification and capture of North Korean government hackers. US officials have found North Korean hackers responsible for a large number of cyber-attacks focused on financial gain in recent years, most of them outside the normal spectrum of intelligence gathering that is quietly accepted by most countries.
    In addition, in July, the State Department offered its second major reward for foreign hackers: two separate $1 million rewards for information on two Ukrainian hackers linked to a breach at the US Securities and Exchange Commission in 2016.
    Today’s reward offer also comes after the 2016 US Presidential Election was marred by foreign interference from Russia, which was accused of breaching the servers of the Democratic National Committee (DNC) and then slowly leaking information for months to sway public opinion toward future president Donald Trump through slanted and partisan media coverage. President Barack Obama imposed sanctions on Russian hackers before leaving office.

  • Black Hat: When penetration testing earns you a felony arrest record

    “Uh, we’re in jail.”
    When Coalfire inked a deal with the State Court Administration (SCA) to conduct security testing at the Dallas County Courthouse in Iowa, having two of its team members arrested at midnight and thrown behind bars was not quite what the company expected.

    The saga began in September last year, when Coalfire Systems senior manager Gary Demercurio and senior security consultant Justin Wynn set out to test the court’s physical security.
    Known as penetration testing in the cybersecurity field, testing a company or organization’s security posture can involve probing networks, apps, and websites to find vulnerabilities that need to be fixed before attackers find them and exploit them for nefarious purposes. 
    However, penetration testing can also include physical elements. Is it possible to access a company office through social engineering and pretending to be a guest? Are people dressed as maintenance staff challenged at the gates? Are doors to sensitive areas properly secured?
    In the Iowa court’s case, how quickly does law enforcement respond in the case of a break-in? 
    As ZDNet previously reported, the penetration test deal agreed between the SCA and Coalfire resulted in Demercurio and Wynn setting out in the dead of night to test the security of court buildings.
    Speaking at Black Hat USA on Wednesday, Demercurio and Wynn said the client originally wanted only after-hours, nighttime testing, which was later extended to daytime and evening testing.
    Before the test took place, Coalfire “went through the scope, building by building,” to make sure there was no miscommunication between the cybersecurity firm and the client in terms of what buildings could be targeted, and what should be avoided.
    Under the terms of the contract, the team was permitted to use social engineering to impersonate staff and contractors, use false pretenses to try and gain access, tailgate employees, and access restricted areas — on the proviso that alarm systems were kept intact and no damage was caused on entry. 
    At the beginning of the test on Sunday night, a state trooper on patrol came across the team attempting to enter a door and was satisfied once the researchers provided identification, noting that similar tests had been conducted in the past.
    After the first test — and after the discovery of a Coalfire calling card in the IT room the next day — the client congratulated the team via email. At that point, the penetration testers took it as a “green flag” to go ahead.
    On Tuesday, September 11, a courthouse door was found open. The researchers closed it, as their mission was not simply to walk in but to test the physical security of the building, and after allowing it to lock, they applied their tools to jimmy the lock back open.
    An alarm sounded at 12:30am, and the pen test team waited patiently on the third floor for law enforcement to arrive, brandishing their contract as a ‘get out of jail free card’ to prove they were there legally for after-hours testing.
    “Credit where credit is due — it was the fastest response time we’ve ever seen, literally three minutes,” Wynn commented. 
    After shouting for five to seven minutes to make themselves and their purpose known, without receiving a response, Demercurio and Wynn made their way down the stairwell, hands raised.
    The tone, at least at the beginning, was cordial; law enforcement on-site accepted that Coalfire’s employees were there on legitimate business and quickly decided to let them go. However, the team was having a “lot of fun” talking to them, and so decided to hang around and swap stories.
    This, it seems, was a mistake, as Dallas County Sheriff Chad Leonard was en route.
    Once Leonard arrived on scene, the tone “dramatically changed.”  
    “Up until the point the Sheriff arrived, we were treated with the utmost respect and like professionals,” Demercurio said.
    In footage shown at the Black Hat presentation, Leonard calls the situation “bullsh*t.” 
    In a past interview, Leonard said the team was “crouched down like turkeys peeking over the balcony” when law enforcement arrived, and suggested both the date — September 11 — and the fact they were carrying backpacks gave more cause for alarm at the presence of the “unknown persons.”
    Demercurio and Wynn were arrested and jailed for roughly 20 hours. In chains, strung up together, the researchers were then “paraded” to the courthouse they’d just broken into, to be berated by the judge, despite the researchers protesting that they were hired by the state. 
    Originally, bail was set at $7,000 for each Coalfire employee, but it was argued the pair was a flight risk and so the amount was increased to $50,000 each.
    Charges of third-degree burglary and possession of burglary tools were filed. These were later downgraded to trespass, and after discussions between Coalfire’s CEO, the Dallas County Sheriff, and Dallas County Attorney Charles Sinnard, all charges were dropped — a day before Coalfire’s motion to dismiss was set to go through. The process, however, took months.
    The SCA said in a statement at the time that the organization “did not intend, or anticipate, those efforts to include the forced entry into a building.” However, the researchers dispute this, saying at Black Hat that the tests were not out of scope.
    “It was the intention of the Dallas County Sheriff to protect the citizens of Dallas County and the State of Iowa by ensuring the integrity of the Dallas County Courthouse,” Coalfire said in a statement. “It was also the intention of Coalfire to aid in protecting the citizens of the State of Iowa, by testing the security of information maintained by the Judicial Branch, pursuant to a contract with the SCA.”
    Demercurio and Wynn have been left with permanent felony arrest records. The records cannot be scrubbed, and despite the charges being dropped, they are likely to hinder the pair’s future prospects in security work.
    “This is severely detrimental for us to try and undergo these types of engagements in the future,” Wynn noted. 
    The issue at hand may have been the interpretation of the penetration testing agreement itself, or heavy-handedness by law enforcement and a court system concerned with liability, but lessons can still be learned by the cybersecurity industry, police, and any organization considering a penetration test to improve its security.
    Demercurio and Wynn urge penetration testing companies to record every call made between company and client as a basic level of protection against similar situations in the future. In addition, the pair are trying to establish a “good samaritan” law that would protect penetration testing companies — and their employees — from similar lawsuits.
    “All offensive security has effectively been axed in the state of Iowa, and that’s the crux of the matter,” Demercurio commented. “We’re trying to help people [..] we want to make things better, we want to protect them, and the real losers are the citizens of Iowa.”

  • Black Hat: How hackers gain root access to SAP enterprise servers through SolMan

    Researchers have demonstrated how a set of vulnerabilities in SAP Solution Manager could be exploited to obtain root access to enterprise servers. 

    Speaking at Black Hat USA on Wednesday, Onapsis cybersecurity researchers Pablo Artuso and Yvan Genuer explained how the bugs were found in SAP Solution Manager (SolMan), a system comparable to Windows Active Directory. 
    SolMan is a centralized application designed to manage IT solutions on-premise, in the cloud, or in hybrid environments. The integrated solution acts as a management tool for business-critical applications, including SAP and non-SAP software.
    An estimated 87% of the Global 2000 uses SAP in some way, and so vulnerabilities left unpatched could have severe consequences. With this in mind, Onapsis Research Labs conducted a security assessment of SolMan in 2019.
    According to the cybersecurity firm, the vulnerabilities found in SolMan — called the “technical heart of the SAP landscape” by Onapsis — could allow unauthenticated attackers to compromise “every system” connected to the platform, including SAP ERP, CRM, HR, and more. 
    SolMan operates by linking to software agents on SAP servers via a function called SMDAgent, otherwise known as the SAP Solution Manager Diagnostic Agent. SMDAgent facilitates communication and instance monitoring and is generally installed on servers running SAP applications. 
    SolMan itself can be accessed via its own server or the SAPGui. The team tested a SolMan setup and the apps related to SMDAgent; in total, roughly 60 applications were accounted for, over 20 of which were accessible via HTTP GET, POST, or SOAP requests.
    One application, SolMan’s End user Experience Monitoring (EEM), was found to be a potentially vulnerable endpoint as it does not require authentication to access. EEM allows SAP administrators to create scripts to emulate user actions and deploy them to EEM robots present in other systems.
    Therefore, combined with a lack of sanitization of the JavaScript expressions these scripts can contain, it would be possible for unauthenticated attackers to deploy a malicious script to this function and have it executed — compromising all SMDAgents connected to SolMan.
    This remote code execution (RCE) vulnerability has been assigned CVE-2020-6207 and a CVSS score of 10.0. 
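    The flaw boils down to an administrative service that answers requests without credentials. Purely as an illustration of the kind of quick check a defender might script (not the Onapsis exploit, and not an official SAP tool), the following minimal Python sketch asks whether a SolMan endpoint responds to an unauthenticated request. The host, port, and endpoint path below are placeholder assumptions; the real remediation is applying SAP’s patch for CVE-2020-6207.
    ```python
    import requests  # third-party HTTP library: pip install requests

    # Placeholder values -- adjust for your own landscape. The path below is a
    # hypothetical stand-in for an EEM-related service endpoint, not a URL
    # confirmed by the Onapsis research.
    SOLMAN_BASE_URL = "https://solman.example.com:50001"
    EEM_PATH = "/EemAdminService/EemAdmin"  # assumed path, for illustration only

    def answers_without_auth(base_url: str, path: str, timeout: float = 5.0) -> bool:
        """Return True if the endpoint answers an unauthenticated request with
        anything other than 401/403, which would warrant closer inspection."""
        try:
            resp = requests.get(base_url + path, timeout=timeout, verify=False)
        except requests.RequestException:
            return False  # unreachable from this vantage point
        return resp.status_code not in (401, 403)

    if __name__ == "__main__":
        if answers_without_auth(SOLMAN_BASE_URL, EEM_PATH):
            print("Endpoint answered without credentials; verify the fix for CVE-2020-6207 is applied.")
        else:
            print("Endpoint requires authentication or is not exposed.")
    ```
    A non-error response to such a probe only flags the system for patch verification; it says nothing about exploitability on its own.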
    Onapsis also uncovered two other vulnerabilities in SolMan. The first, tracked as CVE-2020-6234 (CVSS: 7.2), was found in the SAP Host Agent and permitted threat actors who had already obtained administrator privileges to abuse the operation framework to gain root-level privileges.
    The other vulnerability of note is CVE-2020-6236 (CVSS: 7.2), which was also found in the SAP Host Agent. This bug existed in the SAP Landscape Management and SAP Adaptive Extensions modules specifically and also permitted privilege escalation as long as an attacker possessed admin_group privileges.
    Chaining these vulnerabilities could give remote attackers the ability to execute files — including malicious payloads — as a root user, granting them overall control of SMDAgents connected to SolMan. 
    Speaking to ZDNet, Onapsis said that while SolMan does not generally hold business data in itself, the system is always connected to other production satellites and so the successful compromise of this component could have severe consequences for an enterprise at large. 
    After hijacking SolMan, unauthenticated attackers could read and modify financial records or bank details, access user data, close down business-critical systems at will, and potentially “expand attacks beyond SAP scope as root/system accessed is achieved,” according to the team. 
    CVE-2020-6207 was reported to SAP on February 2 and by February 12, the bug was confirmed and an internal tracking number was issued. Onapsis then worked with the tech giant to provide additional technical details, leading to a fix on March 10. 
    CVE-2020-6234 and CVE-2020-6236 were disclosed privately to SAP on December 9, 2019. These issues took longer to resolve and it was not until April 4 that a CVSS severity score was agreed. A patch was provided on April 13. 
    “SAP systems are complex and in most cases highly customized making the patch process much more difficult,” Onapsis researchers told ZDNet. “SAP SolMan, in particular, is usually overlooked when it comes to security, due to its lack of business data. We hope […] that people will understand why securing SAP SolMan should not be overlooked and be a priority to keeping the entire SAP landscape and the organization’s most critical applications protected.”
    In July, SAP released a fix for RECON (CVE-2020-6287), a CVSS 10.0 critical vulnerability also found by Onapsis. If exploited, the vulnerability — found within the SAP NetWeaver Java technology stack — allows attackers to create SAP user accounts with full privileges for SAP applications exposed to the Internet. 

  • Twitter patches Android app to prevent exploitation of bug that can grant access to DMs

    Twitter has started notifying users today about a dangerous security issue that can allow malicious Android apps running on users’ devices to access private Twitter data, including users’ direct messages (DMs).
    According to a support document published today, Twitter said the bug existed because of an underlying vulnerability in the Android operating system itself.
    Twitter didn’t specifically identify the Android OS bug, for safety reasons, but said the issue had been fixed in Android since October 2018.
    According to Twitter, the Android OS bug only impacted users of Android 8 (Oreo) and Android 9 (Pie), but not those on Android 10.
    “Our understanding is 96% of people using Twitter for Android already have an Android security patch installed that protects them from this vulnerability,” Twitter said today.
    “For the other 4%, this vulnerability could allow an attacker, through a malicious app installed on your device, to access private Twitter data on your device (like Direct Messages) by working around Android system permissions that protect against this.”
    The social network is now notifying users of the bug and urging them to update the “Twitter for Android” app if they’re using Android 8 or Android 9, where the issue can still be exploited.
    The following message is currently being shown to users who are currently using an unpatched Android OS version or have used a vulnerable Android OS version in the past.

    Image: ZDNet
    Twitter didn’t say how it found out about the issue but said that it hadn’t found any evidence the bug had been exploited in the wild prior to today. However, the company said that it wasn’t “completely sure” about this latter assessment.
    The issue didn’t impact users of the company’s iOS app or web portal.

  • Black Hat: How your pacemaker could become an insider threat to national security

    When we think of pacemakers, insulin pumps, and other implanted medical devices (IMDs), what comes to mind is their benefit to users who rely on them to cope with various medical conditions or impairments.

    Over time, IMDs have evolved to become more refined and smarter with the introduction of wireless connectivity — linking themselves to online platforms, the cloud, and mobile apps with connections made via Bluetooth for maintenance, updates, and monitoring, all in order to improve patient care. 
    However, the moment you introduce such a connection into a device, whether external or internal, this also creates a potential avenue for exploit. 
    The emerging problem of vulnerabilities and avenues for attack in IMDs was first highlighted by the 2017 case of St. Jude (now under the Abbott umbrella), in which the US Food and Drug Administration (FDA) issued a voluntary recall of 465,000 pacemakers due to vulnerabilities that could be remotely exploited to tamper with the life-saving equipment. 
    Naturally, these devices could not just be pulled out, sent in, and swapped for a new model. Instead, patients using the pacemakers could visit their doctor for a firmware update, if they so chose.
    More recently, CyberMDX researchers estimated that 22% of all devices currently in use across hospitals are susceptible to BlueKeep, a Windows vulnerability in the Microsoft Remote Desktop Protocol (RDP) service. When it comes to connected medical devices, this figure rises to 45%. 
    According to Christopher Neal, CISO of Ramsay Health Care, many devices we use today are not built secure-by-design, and this is an issue likely to shadow medical equipment for decades to come. 
    At Black Hat USA on Wednesday, Dr. Alan Michaels, Director of the Electronic Systems Lab at the Hume Center for National Security and Technology at the Virginia Polytechnic Institute and State University, echoed this sentiment.
    Michaels outlined a whitepaper viewed by ZDNet and penned by the professor himself, alongside Zoe Chen, Paul O’Donnell, Eric Ottman, and Steven Trieu, that investigated how IMDs could compromise the security of secure spaces — such as those used by military, security, and government agencies.
    Across the US, many agencies ban external mobile devices and Internet-connected products including smartphones and fitness trackers in compartmentalized, secure spaces on the grounds of national security. 
    If fitness trackers or smartphones are considered a risk, they can simply be handed in, locked away in a secure locker, and collected at the end of the day. However, IMDs — as they are implanted — are often overlooked or exempt entirely from these rules. 
    The professor estimates that over five million IMDs have been installed — approximately 100,000 of which belong to individuals with US government security clearance — and their value to users cannot be overstated. This does not mean, however, that they pose no risk to security: should their devices become compromised, users may unwittingly become insider threats.
    “Given that these smart devices are increasingly connected by two-way communications protocols, have embedded memory, possess a number of mixed-modality transducers, and are trained to adapt to their environment and host with artificial intelligence (AI) algorithms, they represent significant concerns to the security of protected data, while also delivering increasing, and often medically necessary, benefits to their users,” Michaels says. 
    Pacemakers, insulin pumps, hearing implants, and other IMDs that are vulnerable to exploit could be weaponized to leak GPS and location data, as well as other potentially classified datasets or environmental information relating to the secure space, gathered from inbuilt sensors, microphones, and transducers that convert information from the environment into signals and data. 
    For example, there are smart hearing aids on the market that are linked to cloud architecture and use machine learning (ML) to record and analyze sounds for feedback and to improve their performance — but if compromised, this functionality could be hijacked.
    GPS-based and passive data collection devices are considered low-risk, whereas gadgets using open source code, with cloud functionality, AI/ML, or voice activation are considered medium to high-risk. 
    When they are external and portable, medium to high-risk devices are generally banned from secure spaces, but many IMDs now also fall into these categories and have fallen through legislative cracks.   
    The issue is that IMDs are difficult, or impossible, to remove or disable while in a secure facility. It is not possible, either, to simply refuse IMD users access to secure spaces, as this would break discrimination laws.
    Instead, a number of external mitigations have been proposed, including:
    Whitelisting: Pre-approving a set list of IMDs considered secure enough. However, this requires checks and consistency across different agencies. 
    Random inspections: Devices previously approved would need to have their settings inspected — but policing this, in reality, would be difficult, especially as it may require access to proprietary vendor data. 
    Ferromagnetic detection: Using detectors to identify implants or other foreign devices/IMDs before an individual enters a facility, to make sure they are on an approved list. 
    Zeroization: Inspecting and clearing data from the device before it leaves the secure space could improve information security, but this would require safe ways to wipe information from life-saving devices — a daunting and potentially dangerous prospect. 
    Physical signal attenuation: Becoming a walking Faraday cage to stop signals while in a security facility — such as by wearing a foil vest — has also been proposed, but as noted by Michaels, this is likely to be “cumbersome” in practice. 
    Administrative software: Code could be developed to put IMDs in a form of “airplane” mode — but this will require investment, time, and testing by developers. 
    Personal jamming: Wearers could enable a jammer to create enough noise to stop information being transmitted. However, this may impact battery life.
    The team says that the advances made in the IMD field have “far outpaced” current security directives, creating a need for new policy considerations, and has called for amendments to Intelligence Community Policy Memorandum (ICPM) 2005-700-1, Annex D, Part I, to cover smart IMDs so that they remain compliant with Intelligence Community Policy Guidance (ICPG) 110.1.
    Speaking to ZDNet, Michaels said that the simplest way to prevent IMDs from becoming a threat in secure facilities is to physically shield a device — and this is likely to be far safer in comparison to modifying firmware, as “that may create an untested operational state that (although very unlikely) could impact its intended operations or health of the user.”
    The professor added that the security issues surrounding IMDs are likely to increase over time, and as they become more capable, security will become a balancing act between legislation, what vendors consider to be “privacy,” and battery consumption — one of the few elements constraining how far IMDs can go in terms of intelligent technologies. 
    “Moreover, I think that as the number of devices implanted increases, they become a more feasible target for malicious actors — given the expected lifetimes of many devices being 10+ years, the question almost becomes “how hard is it to hack a 10-year old IoT device,” Michaels commented. “Maybe not an immediate threat, but an increasing one over time, and very hard to enact a recall / firmware update.”

  • New EtherOops attack takes advantage of faulty Ethernet cables

    Image: Armis

    Tomorrow at the Black Hat USA security conference, researchers from IoT research outfit Armis are set to present details about a new technique that can be used to attack devices located inside internal corporate networks.
    The technique, named EtherOops, works only if the targeted network contains faulty Ethernet (networking) cables on the attacker’s path to their victim.
    The EtherOops technique is only a theoretical attack scenario discovered in a laboratory setting by the Armis team and is not considered a widespread issue that impacts networks across the world in their default states.
    However, Armis warns that EtherOops could be weaponized in certain scenarios by “sophisticated attackers (such as nation-state actors)” and can’t be discounted for now.
    How EtherOops works
    The EtherOops attack is basically a packet-in-packet attack.
    Packet-in-packet attacks are when network packets are nested inside each other. The outer shell is a benign packet, while the inner one contains malicious code or commands.
    The outer packet allows the attack payload to slip by initial network defenses, such as firewalls or other security products, while the inner packet attacks devices inside the network.
    But networking packets don’t typically change their composition and lose their “outer shells.” This is where the faulty Ethernet cables come into play.
    Armis says that faulty cables — whether due to imperfect cabling or to deliberate interference attacks — will suffer from unwanted electrical interference that flips bits inside the actual packet, slowly destroying the outer shell and leaving the internal payload active.

    Image: Armis
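    To make the packet-in-packet idea concrete, here is a deliberately simplified Python sketch. It uses a toy start-of-frame delimiter instead of real Ethernet preambles, CRCs, and PHY-level signalling, so it illustrates the bit-flip mechanism conceptually rather than reproducing the actual EtherOops attack: a benign outer frame carries a complete inner frame in its payload, and once interference corrupts the outer delimiter, a naive receiver re-synchronizes on the inner, attacker-controlled frame.
    ```python
    import random

    # Toy start-of-frame delimiter standing in for the Ethernet preamble/SFD.
    DELIM = b"\xAA\xAA\xAA\xAB"

    def frame(dst: bytes, payload: bytes) -> bytes:
        """Build a toy frame: delimiter + 6-byte destination + 2-byte length + payload."""
        return DELIM + dst + len(payload).to_bytes(2, "big") + payload

    def parse_first_frame(wire: bytes):
        """Naive receiver: sync on the first intact delimiter and parse what follows."""
        idx = wire.find(DELIM)
        if idx < 0:
            return None
        body = wire[idx + len(DELIM):]
        dst, length = body[:6], int.from_bytes(body[6:8], "big")
        return dst, body[8:8 + length]

    def flip_random_bit(data: bytes, upto: int) -> bytes:
        """Simulate interference on a faulty cable: flip one bit within the first `upto` bytes."""
        pos = random.randrange(upto * 8)
        out = bytearray(data)
        out[pos // 8] ^= 1 << (pos % 8)
        return bytes(out)

    # Inner frame: the attacker's real payload, addressed to an internal host.
    inner = frame(dst=b"TARGET", payload=b"attacker-controlled command")
    # Outer frame: looks benign to perimeter defences and carries the inner frame as data.
    outer = frame(dst=b"BENIGN", payload=b"harmless data " + inner)

    print("intact: ", parse_first_frame(outer))        # receiver sees only the benign outer frame
    damaged = flip_random_bit(outer, upto=len(DELIM))  # interference hits the outer delimiter
    print("damaged:", parse_first_frame(damaged))      # receiver now syncs on the inner, malicious frame
    ```
    In a real network, the corruption would come from electrical noise on a faulty or unshielded cable and would have to hit the right bits by chance, which is why the attack relies on sending large volumes of packets.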
    “Complicated? Yes, but not impossible,” the Armis team said, describing EtherOops attacks. However, when successful, an EtherOops attack can be used to:
    Penetrate networks directly from the Internet
    Penetrate internal networks from a DMZ segment
    Move laterally between various segments of internal networks
    EtherOops attacks have a low chance of success
    But Armis experts are also the first ones to admit that an EtherOops attack is not simple to pull off, and requires special conditions. For starters, faulty cables must exist inside a network at key positions.
    Second, while zero-click (no user interaction) attacks can be performed in some situations, most scenarios will most likely require luring a user to a malicious website in order to give the attacker a direct connection to a user inside a corporate network, so they can deliver their payloads.
    Third, bit-flip error rates aren’t particularly high, meaning the attack effectively requires bombarding networks with large quantities of packets and hoping for a lucky bit-flip that ends up exposing the attacker’s payload, all of which gives any single attempt a very low chance of success.
    Nevertheless, Armis says the attack can be pulled off by determined attackers. The easiest way to protect against these attacks is either by using shielded Ethernet cables, or by using network security products capable of detecting packet-in-packet payloads inside network traffic.
    Armis today launched a dedicated website describing the EtherOops attack, published two demo videos of EtherOops attacks, and released a 44-page white paper describing the attack at a technical level.

  • FBI issues warning over Windows 7 end-of-life

    Image: Microsoft

    The Federal Bureau of Investigation sent a private industry notification (PIN) on Monday to partners in the US private sector about the dangers of continuing to use Windows 7 after the operating system reached its official end-of-life (EOL) earlier this year.
    “The FBI has observed cyber criminals targeting computer network infrastructure after an operating system achieves end of life status,” the agency said.
    “Continuing to use Windows 7 within an enterprise may provide cyber criminals access into computer systems. As time passes, Windows 7 becomes more vulnerable to exploitation due to lack of security updates and new vulnerabilities discovered.
    “With fewer customers able to maintain a patched Windows 7 system after its end of life, cyber criminals will continue to view Windows 7 as a soft target,” the FBI warned.
    FBI urges companies to update devices
    The Bureau is now asking companies to look into upgrading their workstations to newer versions of the Windows operating system.
    To this day, Microsoft still allows Windows 7 systems to be upgraded to Windows 10 at no cost — even though this offer officially ended in July 2016.
    However, in some cases, a PC’s underlying hardware may not support the (free) upgrade to a more demanding operating system like Windows 10, a challenge that the FBI acknowledged in its alert, citing the costs companies might incur to buy new hardware and software.
    “However, these challenges do not outweigh the loss of intellectual property and threats to an organization,” the FBI said — suggesting that companies should keep the bigger picture in mind, as future losses from possible hacks could easily outweigh today’s upgrade costs.
    The agency specifically cited the previous Windows XP migration debacle as the perfect example of why companies should migrate systems as soon as possible, rather than delay.
    “Increased compromises have been observed in the healthcare industry when an operating system has achieved end of life status. After the Windows XP end of life on 28 April 2014, the healthcare industry saw a large increase of exposed records the following year,” the FBI said.
    Weaponized Windows 7 vulnerabilities already exist
    Furthermore, the FBI also cited several powerful Windows 7 vulnerabilities that have been frequently weaponized over the past few years.
    This includes the EternalBlue exploit (used in the original WannaCry and by multiple subsequent crypto-mining operations, financial crime gangs, and ransomware gangs) and the BlueKeep exploit (which allows attackers to break into Windows 7 devices that have their RDP endpoint enabled).
    The agency said that despite the presence of patches for these issues, companies have failed to patch impacted systems. In this case, replacing older and abandoned systems may be the best solution overall.
    While companies are looking into upgrading systems, the FBI recommends that they also look into:
    Ensuring anti-virus, spam filters, and firewalls are up to date, properly configured, and secure.
    Auditing network configurations and isolating computer systems that cannot be updated.
    Auditing your network for systems using RDP, closing unused RDP ports, applying two-factor authentication wherever possible, and logging RDP login attempts (a minimal port-check sketch follows this list).
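    As a rough starting point for the last of those recommendations, finding hosts that still expose RDP, the following minimal Python sketch checks a list of addresses for an open TCP port 3389, the default RDP port. The host list is a placeholder; this is a simple connectivity check, not a vulnerability scan or a substitute for proper asset management.
    ```python
    import socket

    RDP_PORT = 3389  # default Remote Desktop Protocol port

    def rdp_port_open(host: str, timeout: float = 1.0) -> bool:
        """Return True if the host accepts TCP connections on the default RDP port."""
        try:
            with socket.create_connection((host, RDP_PORT), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        # Placeholder addresses (TEST-NET range); in practice, pull these from an asset inventory.
        hosts = ["192.0.2.10", "192.0.2.11", "192.0.2.12"]
        for host in hosts:
            status = "RDP port open" if rdp_port_open(host) else "closed or filtered"
            print(f"{host}: {status}")
    ```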

  • Google: This Android PIN-protected ‘Safe’ folder lets you lock away private files

    Google is adding a new feature to its Files by Google app for Android phones to let users lock and hide private files in an encrypted folder. 
    The new Safe folder feature is aimed at people who, for example, share a phone with other members of the family but need to keep some files private. It could come in handy when users need to hide files from kids or spouses and ensure those important private files don’t get shared or deleted.   

    Safe Folder is a new feature of the popular Files by Google file management app, which lets users clear space, find and share files offline, and back up files to the cloud.
    SEE: 5G smartphones: A cheat sheet (free PDF) (TechRepublic)
    Safe Folder is available in the Collections section of Files by Google. Users set a four-digit PIN on the Safe Folder and can then move files into the protected folder; once files are in the folder, the PIN must be entered to view them. It can be used to protect documents, images, videos, and audio files.
    The feature is available for Android 8.0 and above. There are some limitations. Google notes that users can’t move installed apps to the Safe Folder, for example. 
    Files in the Safe Folder won’t appear in search results, and the folder can’t be opened by third-party apps. There’s also no option to share files in the folder or to back them up to Google Drive.
    “The folder is securely locked as soon as you switch away from the Files app, so none of its contents can be accessed when the app is in the background,” Google notes in a blogpost. 
    “As a security assurance, it will ask for your PIN again on re-entry. Even people who don’t share devices can benefit from keeping the most important files safe.”
    SEE: The growing case for Windows support of Google Play
    Files by Google was launched in 2017 as a feature mostly for budget Android Go phones. Google says it now has 150 million people who use the app each month to manage files on their phones. 
    In that time, it’s been used to delete over one trillion files, collectively saving over 400 petabytes of storage space on Android phones.
    Safe Folder is rolling out in beta for Files by Google and will be made generally available in the coming weeks.
