More stories

  • Ransomware: The tricks used by WastedLocker to make it one of the most dangerous cyber threats

    One of the most dangerous families of ransomware to emerge this year is finding success because it has been built to evade anti-ransomware tools and other cybersecurity software, according to researchers who have analysed its workings.
    WastedLocker ransomware appeared in May and has already gained notoriety as a potent threat to organisations, encrypting networks and demanding ransoms of millions of dollars in bitcoin in exchange for the decryption key.
    One of WastedLocker’s most recent high-profile victims is reported to be wearable tech and smartwatch manufacturer Garmin.
    WastedLocker is thought to be the work of Evil Corp, a Russian hacking crew and one of the world’s most prolific cyber-criminal groups. One of the reasons they’re so successful is that they’re constantly developing and adapting their tools.
    Researchers at Sophos have delved into the inner workings of WastedLocker and found that the malware goes the extra mile to avoid detection.

    The author of the WastedLocker ransomware constructed a sequence of manoeuvres meant to confuse and evade behaviour-based anti-ransomware solutions, according to the report.
    “It’s really interesting what it’s doing with mapping in Windows to bypass anti-ransomware tools,” said Chester Wisniewski, principal research scientist at Sophos. “That’s really sophisticated stuff, you’re digging way down into the things that only the people who wrote the internals of Windows should have a concept of, how the mechanisms might work and how they can confuse security tools and anti-ransomware detection,” he said.
    Many malware families use code obfuscation techniques to hide malicious intent and avoid detection, but WastedLocker adds a further layer by interacting with Windows API functions from within memory itself, where security tools based on behavioural analysis find it harder to spot.
    WastedLocker also makes it harder for behaviour-based anti-ransomware solutions to keep track of what is going on by using memory-mapped I/O to encrypt files. This technique allows the ransomware to transparently encrypt cached documents in memory, without causing additional disk I/O, which can shield it from behaviour-monitoring software.
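    Sophos has not published WastedLocker’s code, but the memory-mapped I/O trick it describes is easy to sketch. In the toy Python below (an illustrative XOR “cipher” and file name, not the real ransomware’s scheme), the file’s bytes are modified through a memory mapping, so the process never issues an explicit write() for the contents – the operating system flushes the dirty cache pages itself, leaving behaviour monitors with less disk activity to correlate.

```python
import mmap

def encrypt_in_place(path: str, key: bytes) -> None:
    """Encrypt a file through a writable memory mapping.

    The bytes change on cached pages in memory; the kernel writes the
    dirty pages back to disk on its own schedule, so this process makes
    no explicit write() calls for the file contents.
    (Toy XOR "cipher" for illustration only.)
    """
    with open(path, "r+b") as f, mmap.mmap(f.fileno(), 0) as mm:
        for i in range(len(mm)):
            mm[i] ^= key[i % len(key)]
        mm.flush()  # ask the OS to write the modified pages back

# encrypt_in_place("cached_document.docx", b"illustrative-key")  # hypothetical
```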
    By the time the infection is detected, it’s too late – often the first sign is when the attackers have pulled the trigger on the ransomware attack and victims find themselves faced with a ransom note demanding millions of dollars.
    The attacks are planned carefully, with the cyber criminals very hands-on throughout the entire process; WastedLocker campaigns often begin by abusing stolen login credentials. If the accounts seized by the crooks provide administrator privileges, the attackers can ultimately do whatever they want.
    “If they get admin credentials, they can VPN in, they can disable the security tools. If there’s no multi-factor they’re just going to login to the RDP, VPN and admin tools,” said Wisniewski.
    He added that the coronavirus pandemic and the resultant rise in remote working have created optimal conditions for cyber criminals to conduct campaigns.
    “Because of COVID-19, I think they’re having some more success with that. Things which might have only been internally facing are now externally facing and that’s another indicator that companies might be compromised,” he explained.
    Organisations can go a long way towards protecting themselves from WastedLocker and other ransomware attacks by employing simple security procedures: not using default passwords for remote login portals, and using multi-factor authentication to provide an extra barrier to hackers attempting to gain control of accounts and systems.
    Ensuring that security patches are applied as soon as possible can also help stop organisations falling victim to malware attacks, many of which exploit long-known vulnerabilities to gain a foothold in networks.
    By applying these security practices, organisations can help stay protected against WastedLocker and other threats – but until these security protocols are applied across the board, ransomware will remain a problem.
    “The reality is, ransomware is not going away,” said Wisniewski.

  • Soon, your brain will be connected to a computer. Can we stop hackers breaking in?

    Brain-computer interfaces (BCIs) offer a direct link between the grey matter of our human brains and the silicon and circuitry of computers. New technologies always bring with them new security threats, but with the human brain a single store of the most sensitive and private information it’s possible to imagine, the security stakes couldn’t be higher.
    If we’re soon to be plugging computers directly into our brains, how can we protect that connection from those who want to attack it?


    The first wave of brain-computer interfaces is beginning to make its way onto the market, offering users a way to keep tabs on their stress levels, control apps, and monitor their emotions. BCI tech is also progressing outside the consumer area, with medical researchers using it to help those with spinal injuries move paralysed limbs and restore a lost sense of touch.
    Ultimately, BCIs could offer a way of communicating thoughts – a form of human-machine telepathy. 

    So why would someone want to hack a BCI? 
    Being able to read the thoughts or memories of a political leader or a business executive could be a huge coup for intelligence agencies trying to understand rival states, or for criminals looking to steal commercial secrets or material for blackmail. There’s a military angle too: the US is already looking at BCIs as a way of controlling fleets of drones or cyber defences far more effectively than is currently possible, and being able to hack into those systems would create a huge advantage on the battlefield.
    The consequences of an attack or data breach on a BCI could be an order of magnitude worse than for other systems: leaked email logs are one thing, leaked thought logs quite another. Similarly, the risks of ransomware become far greater if it’s targeted at BCIs rather than corporate systems; making it impossible to use a PC or a server is one thing, but locking up the connection between someone’s brain and the wider world could be far worse.
    BCIs could ultimately become an authentication mechanism in their own right: our patterns of brain activity are so unique that they could be used as a way of permitting access to sensitive systems, which could make it worthwhile to try to copy them. “Attempts to trick such a biometric will likely be very difficult, because brainwaves are not visible (like other biometrics like a fingerprint, iris, etc.) and cannot be replicated by another person… without direct access to the person and their brain to record the person,” researchers at Israel’s Ben-Gurion University of the Negev wrote in a recent paper.
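    As a loose illustration of the template-matching idea behind any biometric login – not the Ben-Gurion researchers’ method, and with synthetic feature vectors and an assumed threshold – brainwave authentication might enrol a user by averaging EEG features across sessions and accept a login when a fresh recording correlates strongly with that template:

```python
import numpy as np

def enroll(eeg_features: np.ndarray) -> np.ndarray:
    """Store the mean feature vector across several recording sessions."""
    return eeg_features.mean(axis=0)

def authenticate(template: np.ndarray, sample: np.ndarray,
                 threshold: float = 0.9) -> bool:
    """Accept if the fresh sample correlates strongly with the template."""
    r = np.corrcoef(template, sample)[0, 1]
    return r >= threshold

# Hypothetical data: 5 enrolment sessions of 64 EEG-derived features.
rng = np.random.default_rng(0)
base = rng.normal(size=64)                            # the user's "signature"
sessions = base + rng.normal(scale=0.1, size=(5, 64)) # noisy re-recordings
template = enroll(sessions)
probe = base + rng.normal(scale=0.1, size=64)         # a fresh login attempt
print(authenticate(template, probe))                  # True for the genuine user
```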
    It’s early days, but there are already some signs that security will be a key consideration. For example, researchers have already shown that BCIs could be used to get people to disclose information ranging from their PINs to their religious convictions.
    Some of the potential threats to BCIs will be carry-overs from other tech systems. Malware could cause problems with acquiring data from the brain, as well as with sending signals from the device back to the cortex, either altering or exfiltrating the data.
    Man-in-the-middle attacks could also be recast for BCIs: attackers could either intercept the data being gathered from the headset and replace it with their own, or intercept the data being used to stimulate the user’s brain and replace it with an alternative. Hackers could use methods like these to get BCI users to inadvertently give up sensitive information, or gather enough data to mimic the neural activity needed to log into work or personal accounts. 
    Other threats to BCI security will be unique to brain-computer interfaces. Researchers have identified malicious external stimuli as one of the most potentially damaging attacks on BCIs: feeding in specially crafted stimuli to affect either the user or the BCI itself in an attempt to extract certain information – showing users images to gather their reactions, for example. Similar attacks could be used to hijack users’ BCI systems by feeding in fake versions of the neural inputs, causing them to take unintended actions and potentially turning BCIs into bots.
    Other attacks hinge on the introduction or removal of data from BCIs: introducing noise to diminish the signal-to-noise ratio, for example, making the signal received from the brain difficult or impossible to read. Similarly, attackers interfering with the noise cancellation of BCI systems – which separates the useful brain signals from the general background fuzz – could cause a denial of service: annoying if it’s an entertainment system that’s been cracked, life-altering if it’s a BCI that allows someone to walk or control a wheelchair.
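    To make the signal-to-noise attack concrete, here is a small numerical sketch (synthetic numbers, not real EEG data): a clean rhythm swamped by injected interference yields a deeply negative SNR, leaving the useful signal effectively unreadable.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 1000)
signal = np.sin(2 * np.pi * 10 * t)          # a clean 10 Hz "brain rhythm"
noise = rng.normal(scale=5.0, size=t.size)   # attacker-injected interference

def snr_db(sig: np.ndarray, noi: np.ndarray) -> float:
    """Signal-to-noise ratio in decibels."""
    return 10 * np.log10(np.mean(sig**2) / np.mean(noi**2))

print(f"SNR: {snr_db(signal, noise):.1f} dB")  # about -17 dB: signal is buried
```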
    SEE: Scientists are using brain-computer connections to restore a lost sense of touch
    Currently, while we know something about the effect of normal BCI use on the brain, we don’t know how an attack on a BCI could, deliberately or inadvertently, damage the grey matter. A hijacked BCI causing disruption to the way a user’s brain works sounds like a sci-fi plot, but it could certainly be possible. 
    “What type of damage will [an attack] do to the brain, will it erase your skills or disrupt your skills? What are the consequences – would they come in the form of just new information put into the brain, or would it even go down to the level of damaging neurons that then leads to a rewiring process within the brain that then disrupts your thinking?” says Dr Sasitharan Balasubramaniam, director of research at the Waterford Institute of Technology’s Telecommunication Software and Systems Group (TSSG). “It’s not only at the information level, it could also be the physical damage as well,” he says.
    Brains of BCI users will change and adapt as they learn to use the system, in the same way as they would to fresh experiences or acquiring new skills in the course of normal life. However, BCIs’ ability to cause neuroplasticity could bring with it a new level of risk. “BCIs have the potential to change the brain of the user (e.g. to facilitate motor or cognitive improvements to people with disabilities). To preserve the physical and mental integrity of the user, BCI systems need to ensure that no unauthorized person can modify their functioning,” Javier Mínguez, cofounder and CSO of neurotechnology company Bitbrain, tells ZDNet.
    So how can you protect such systems, particularly given the information they hold and the potentially disastrous effects? While BCIs themselves may still be relatively novel, the technologies needed to secure them likely won’t be: anonymisers, security standards and protocols, antivirus, and encryption are all being suggested as means of staving off BCI attacks. 
    And, like any other technologies, brain-computer interfaces will need a multi-layered security approach to keep them safe, locking down each individual element of the BCI. “I don’t think that the countermeasures would be individual solutions. Going forward, we need to integrate so many different things, from how signals are wirelessly sent to the interface that might be just outside the head, all the way to integrating that with the machine learning for determining whether it’s the right or wrong pattern [a BCI is using], and then using that to actually deter the attacks,” TSSG’s Balasubramaniam says.
    The level of risk in using BCIs also varies according to which type of system someone’s using: a headset-based, non-invasive system will get a low-quality signal and will be easy for a user to switch off and block external communication; an invasive system, meanwhile, gathers high-quality signals direct from the brain’s surface and requires surgery to disengage fully.
    “The more accurate and powerful a BCI is, the higher the risk could be,” says Mínguez. A more comprehensive measurement of the brain will potentially contain more sensitive information and therefore requires stricter safety standards, as do devices that modify brain function. “This is especially relevant, because the target users of these systems are generally a vulnerable population, including patients with certain neurological disorders,” he says.
    SEE: Mind-controlled drones and robots: How thought-reading tech will change the face of warfare
    What’s more, many of the standards and principles of good tech security and data hygiene used in other systems can be brought across for use in BCIs: educating users, gathering only the minimum amount of data necessary for the system to work, locking down when, how and who can access the system, and so on. However, while the technology side of the equation may have good security precedents elsewhere, the unknown unknowns of the human brain could prove BCIs’ greatest security challenge.
    “In terms of the security of computational systems in general, this is a branch of science that is advanced enough and we probably have good enough understanding to know how to do the right thing from a technical perspective,” says Tamara Bonaci, affiliate faculty member at the National Science Foundation’s Center for Neurotechnology. 
    “What’s probably a little more interesting and likely much harder is the question of, do we know enough about the brain and about the human body and electrophysiological signals. Something that may not mean very much today might be recognised as something that is revealing sensitive information about the person tomorrow,” she warns.
    However, the complexity of the human brain also brings good news for BCI security. Unlike other typically compromised systems like smartphones and tablets, BCIs aren’t one size fits all: they require a lot of training to make them compatible with their individual user. 
    “That signal on the surface looks pretty much like white noise. It’s very hard to discern any useful information there. You kind of have to zoom in on specific parts of the signal and know exactly what you’re looking for,” Bonaci says.

  • Cyber insurance: The moral quandary of paying criminals who stole your data

    Earlier this year, a club with around 70,000 members found itself in a pickle: Pay a ransom or risk the personal information of those members being exposed.
    In this scenario, the club paid the ransom, deciding that the reputational harm of exposure outweighed the financial hit of paying. It handed over a handful of bitcoin totalling around $200,000 and the data was returned.
    “They felt compelled to protect the data of their members and to do that, felt paying the ransom was the right thing to do,” Emergence Insurance founder and CEO Troy Filipcevic told ZDNet.
    The ransomware was attributed to Maze. The Maze gang is primarily known for its eponymous ransomware strain and usually operates by breaching corporate networks, stealing sensitive files first, encrypting data second, and demanding a ransom to decrypt files.
    Read more: Here’s a list of all the ransomware gangs who will steal and leak your data if you don’t pay

    If a victim refuses to pay, the Maze gang creates an entry on a “leak website” and threatens to publish the victim’s sensitive data as a second form of ransom/extortion. The victim is then given a few weeks to think over its decision, and if it doesn’t give in during this second extortion attempt, the Maze gang publishes the files on its portal.
    This club wasn’t the only large Maze victim faced with the conundrum: another business decided not to pay.
    “They were breached, 240GB of information was encrypted, they asked for a ransom, the company decided, ‘nope, we’re not paying’,” Filipcevic said.
    Instead, Filipcevic and his team helped the business determine the gravity of the situation.
    Emergence helped this company contact its customers, including those in 14 European countries governed by the General Data Protection Regulation (GDPR), and with everything else it required to get back on track.
    Both this seven-figure aftermath and the $200,000 ransom paid by the other company were covered by their respective cyber insurance. 
    “Not every time will a client want to pay a ransom, sometimes the client will go, ‘no, that just goes against my beliefs … I will be drawn through hot coals before I pay ransom’. Others say ‘this could detrimentally impact my business, in fact, it could sink it, so I need to be up and running as quickly as possible and we need to pay a ransom and we need to pay it now’,” Filipcevic explained.
    “There is zero guarantee that you’re going to get the data back and there is zero guarantee that they’re not going to do it again.”
    He said, however, that he’s finding cyber criminals act in an almost ethical way, handing back the information if their demands are met.
    Five and a half years ago, Filipcevic stood up Emergence as the only Australian underwriting agency to focus solely on cyber insurance.
    Having previously focused on cyber insurance for businesses ranging from micro-SMEs up to ASX-listed companies, Emergence has now launched a personal cyber product that provides cover to families and individuals in the event of a cyber attack.
    While security vendors push their solutions as silver bullets, Filipcevic said there’s a void for cyber insurance to fill: the financial cost involved in recovery.
    He said it’s not just a piece of paper that says, “if you have a flood, we’ll pay you out”, as it also brings cybersecurity experts and other parties to the table to clean up the mess.
    This could be PR, data entry specialists to help manually enter information that was lost, or in the case of consumer coverage, the likes of counselling services to help in the aftermath of something such as cyber bullying.
    It also covers the cost of paying a ransom, which in Emergence Insurance’s case extends to claims of up to AU$1 million.

  • City of Vincent turns to automation to handle misinformation and hateful trolls

    Social media usage has unsurprisingly soared as a result of COVID-19, but in an attempt to counter the spread of misinformation on such platforms, a local council in Western Australia has been taking things into its own hands.
    The City of Vincent decided to implement what vendor SafeGuard Cyber has labelled as “advanced digital risk protection”. Within the city’s official Facebook, Instagram, LinkedIn, and YouTube channels, it has crafted custom policies aimed at capturing potential compliance issues and serving them up for review.
    Through machine learning, compliance issues such as hate speech are automatically flagged for review and account owners are alerted to compromised credentials.
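    SafeGuard Cyber hasn’t published how its flagging works, but the generic pattern – score each incoming message with a trained classifier and queue anything above a confidence threshold for human review – can be sketched in a few lines. The training data, labels, and threshold below are illustrative assumptions, not the vendor’s model:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set; a real system would use thousands
# of labelled examples per policy category.
texts = ["you are all idiots and should leave town",
         "when does the library reopen?",
         "go back to where you came from",
         "thanks for fixing the footpath"]
labels = [1, 0, 1, 0]  # 1 = potential policy violation

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

def flag_for_review(message: str, threshold: float = 0.7) -> bool:
    """Queue the message for a human moderator if the model is confident."""
    return model.predict_proba([message])[0][1] >= threshold
```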
    See also: Facebook says AI has a ways to go to detect nasty memes
    During Australia’s COVID-19 lockdown-like restrictions, the city experienced severe disruptions, such as the closure of city hall, while telephone and other traditional lines of communication were also disrupted.

    As a result, according to the city, in the heart of the coronavirus pandemic, it experienced a 24,670% increase in the volume of Facebook messages as residents turned to new ways of seeking information and logging requests.
    It said it was important to ensure pressing news and other “correct” information were easy to access. IT teams also needed to ensure that all Facebook interactions were being managed securely and complying with state record-keeping laws.
    Most of these messages were requests for city services, and the city’s Facebook account had to act as a makeshift switchboard.
    “Social media is critical to engaging with the City of Vincent community. It is important that our social media interactions are managed securely, pass technical audit, and maintain state record keeping compliance,” City of Vincent executive manager of information and communication technology Peter Ferguson said.
    Ferguson said it was important for the city to protect vital communications channels from unauthorised account changes, while ensuring automated compliance via an integration with its existing document management system.

  • Ransomware gang publishes tens of GBs of internal data from LG and Xerox


    The operators of the Maze ransomware today published tens of gigabytes of internal data from the networks of business giants LG and Xerox, following two failed extortion attempts.
    The hackers leaked 50.2 GB they claim to have stolen from LG’s internal network, and 25.8 GB of Xerox data.
    While LG issued a generic statement to ZDNet in June, neither company wanted to talk about the incident in great depth today.
    Both of today’s leaks have been teased since late June, when the operators of the Maze ransomware created entries for each of the two companies on their “leak portal.”
    The Maze gang is primarily known for its eponymous ransomware strain and usually operates by breaching corporate networks, stealing sensitive files first, encrypting data second, and demanding a ransom to decrypt files.

    If a victim refuses to pay the fee to decrypt their files and decides to restore from backups, the Maze gang creates an entry on a “leak website” and threatens to publish the victim’s sensitive data as a second form of ransom/extortion.
    The victim is then given a few weeks to think over its decision, and if it doesn’t give in during this second extortion attempt, the Maze gang publishes the files on its portal.
    LG and Xerox are at this last stage, after apparently refusing to meet the Maze gang’s demands.
    LG incident and data
    ZDNet has been tracking both incidents since they were first announced on the Maze website in late June.
    Based on screenshots shared by the Maze gang last month and on file samples downloaded and reviewed by ZDNet today, the data appears to contain source code for the closed-source firmware of various LG products, such as phones and laptops.

    In an email in June, the Maze gang told ZDNet that they did not execute their ransomware on LG’s network, but they merely stole the company’s proprietary data and chose to skip to the second phase of their extortion attempts.
    “We decided not to execute [the] Maze [ransomware] because their clients are socially significant and we do not want to create disruption for their operations, so we only have exfiltrated the data,” the Maze gang told ZDNet via a contact form on their leak site.
    When reached for comment in June, the LG security team told ZDNet it would look into the incident and report any intrusion to authorities. In a follow-up email sent today, after the Maze gang published more than 50 GB of the company’s files, the security team deflected our request for comment to its communications team. When we contacted the communications team, our email bounced, just as it had in June.
    Xerox incident and data
    But while we have somewhat of an idea of what happened with the Maze attack on LG, things are a lot murkier when it comes to Xerox.
    The company has not returned requests for comment sent in June and today.
    It is unclear what internal systems the Maze gang encrypted, or if files were stolen and ransomed without encryption, similar to the LG incident.
    Based on a cursory review of the data leaked online today, the Maze gang appears to have stolen data related to customer support operations. At the time of writing, we found information related to Xerox employees; however, we have not yet found files holding data on Xerox customers, although this is a large trove of information and reviewing all of it will take time.

    Citrix point of entry?
    In June, Troy Mursch, co-founder of threat intelligence company Bad Packets, told ZDNet that both companies ran Citrix ADC servers that at one point or another were left unpatched and vulnerable online, according to his company’s internet scans.
    The servers were vulnerable to the CVE-2019-19781 vulnerability, which Mursch described as “Maze’s favorite vector of compromise.”
    Ironically, on the same day that the Maze gang leaked LG files on its leak portal, threat intelligence firm Shadow Intelligence told ZDNet in an email that another hacker was selling access to LG America’s research and development (R&D) center on a hacking forum.
    The asking price was between $10,000 and $13,000, according to screenshots shared with ZDNet.


  • Ahead of US election, Google bans ads linking to hacked political content


    Ahead of this year’s US presidential election, Google announced on Friday a new policy for its advertising platform, banning ads that promote hacked political materials.
    The new rule is set to enter into effect on September 1, 2020, Google said in a support page announcing the change.
    Once the rule comes into effect, third-party entities won’t be able to purchase ads on the Google Ads platform that link, directly or indirectly, to hacked content obtained from a political entity.
    Ads linking to news articles or other pages discussing the hacked political content are allowed, as long as the article or page to which the ad links does not link itself to the hacked political content.
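    Google hasn’t said how it will enforce the rule, but the “directly or indirectly” language effectively describes a bounded reachability check over a page’s outbound links. Below is a toy sketch of that idea, with hypothetical domains and a stubbed link fetcher rather than anything from Google’s systems:

```python
from urllib.parse import urlparse

HACKED_CONTENT_DOMAINS = {"hacked-leaks.example"}  # hypothetical blocklist

def violates_policy(url: str, fetch_links, max_hops: int = 2) -> bool:
    """True if `url` is hacked content, or can reach it within `max_hops`."""
    if urlparse(url).netloc in HACKED_CONTENT_DOMAINS:
        return True
    if max_hops == 0:
        return False
    return any(violates_policy(link, fetch_links, max_hops - 1)
               for link in fetch_links(url))

# Stubbed link graph standing in for real page fetches: an ad's landing
# page links to an article, which itself links to the hacked content.
pages = {
    "https://ads.example/landing": ["https://news.example/story"],
    "https://news.example/story": ["https://hacked-leaks.example/dump"],
}
fetch = lambda u: pages.get(u, [])
print(violates_policy("https://ads.example/landing", fetch))  # True: 2 hops away
```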
    Ad buyers who break the new Google Ads Hacked political materials policy will receive a warning on their account and be asked to remove the offending ads; accounts that fail to comply will be suspended after seven days.
    Learning from the 2016 Presidential Election

    The new policy was most likely set up with the events of the 2016 US Presidential Election in mind. In 2016, months before the election, Russian hackers breached servers of several political entities connected with the Democratic Party and leaked data online via websites like WikiLeaks and DC Leaks, and fake personas like Guccifer 2.0.
    The leaks spurred intense partisan media coverage of the hacks, with online ads on different platforms promoting articles discussing and dissecting the hacked material for political gain.
    By enforcing this new rule starting next month, Google now becomes the first major ad tech company to officially ban such ads.
    Of note is that in October 2018, Twitter banned the dissemination of hacked materials on its platform, ahead of the US Midterm Elections.
    The ban targeted all hacked material, not just files obtained from political entities. Since tweets can be “promoted,” the ban on tweeting links to hacked content effectively became an unofficial ban on Twitter ads as well.
    Second new ad policy targets influence campaigns
    Also on Friday, Google announced a second new rule for its advertising platform. Called the Google Ads Misrepresentation policy, this rule bans multiple entities from coordinating while lying about their identity and then promoting ads on matters of “politics, social issues, or matters of public concern.”
    In other words, this is a ban on so-called “influence campaigns” that promote controversial topics that may be used to influence public opinion and political agendas in a specific region of the globe.
    Google said it will begin enforcing this second policy on September 1, 2020 in the United States, and on October 1, 2020, in all other countries. More

  • Uniting for better open-source security: The Open Source Security Foundation


    Eric S. Raymond, one of the founders of the open-source movement, famously said, “Given enough eyeballs, all bugs are shallow,” a principle he called “Linus’s Law.” That’s true, and it’s one of the reasons open source has become the way almost everyone develops software today. That said, it doesn’t go far enough: you need expert eyes hunting and fixing bugs, and you need coordination to make sure you’re not duplicating work.
    So it is more than past time that The Linux Foundation started the Open Source Security Foundation (OpenSSF). This cross-industry group brings together open-source leaders by building a broader security community. It combines efforts from the Core Infrastructure Initiative (CII), GitHub’s Open Source Security Coalition, and other security-savvy companies such as GitHub, GitLab, Google, IBM, Microsoft, NCC Group, the OWASP Foundation, Red Hat, and VMware.
    Since open source has become vital to technology and affects all users, the open-source supply chain of contributors and dependencies must have its security verified from start to finish. The OpenSSF will start by unifying existing open-source security initiatives: CII, which was founded in response to the 2014 Heartbleed bug, and the Open Source Security Coalition. Jamie Cool, GitHub’s VP of Product Management, Security, said in a statement:

    GitHub founded the Open Source Security Coalition in 2019 to bring together industry leaders around this mission and ensure the consumption of open source software is something that all developers can do with confidence. We look forward to this next step in the evolution of the coalition, and serving as a founding member of the Open Source Security Foundation.

    Microsoft, once an open-source enemy, is also throwing its resources behind the new foundation. Mark Russinovich, Microsoft Azure’s Chief Technology Officer, blogged, “As open source is now core to nearly every company’s technology strategy, securing open-source software is an essential part of securing the supply chain for every company, including our own. As with everything open source, building better security is a community-driven process.”
    Russinovich also spelled out what you can expect to see from the OpenSSF:

    Identifying security threats to open-source projects
    Helping developers to better understand the security threats that exist in the open-source software ecosystem and how those threats impact specific open source projects.
    Security tooling
    Providing the best security tools for open source developers, making them universally accessible, and creating a space where members can collaborate to improve upon existing security tooling and develop new ones to suit the needs of the broader open source community.
    Security best practices
    Providing open-source developers with best practice recommendations, and with an easy way to learn and apply them. Additionally, we have been focused on ensuring best practices will be widely distributed to open source developers and will leverage an effective learning platform to do so.
    Vulnerability disclosure
    Creating an open-source software ecosystem where the time to fix a vulnerability and deploy that fix across the ecosystem is measured in minutes, not months.

    Red Hat, a leading Linux and cloud company, agrees. Chris Wright, Red Hat’s CTO, said: “Now, more than ever, is the time for us to join together with other leaders to help ensure key projects are secure and consumable in our products, across enterprises, and as part of the hybrid cloud. We are excited to help found this Open Source Software Foundation.”
    “We believe open source is a public good and across every industry, we have a responsibility to come together to improve and support the security of open-source software we all depend on,” concluded Jim Zemlin, The Linux Foundation’s executive director. “Ensuring open-source security is one of the most important things we can do and it requires all of us around the world to assist in the effort. The OpenSSF will provide that forum for a truly collaborative, cross-industry effort.”
    Moving forward, the Foundation’s governance, technical community, and decisions will be handled transparently, and all resulting specifications and projects will be vendor-agnostic. The OpenSSF is committed to collaboration and to working both upstream and with existing communities to advance open-source security for all.
    The group will use an open governance structure model. This includes a Governing Board (GB), a Technical Advisory Council (TAC), and separate oversight for each working group and project. OpenSSF intends to host open-source security initiatives on GitHub.

  • CISA, DOD, FBI expose new Chinese malware strain named Taidoor


    Three agencies of the US government today published a joint alert on Taidoor, a new strain of malware that has been used in recent security breaches by Chinese government hackers.
    The alert was authored by the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (DHS CISA), the Department of Defense’s Cyber Command (CyberCom), and the Federal Bureau of Investigation (FBI).
    The three agencies have recently begun collaborating on releasing joint reports about new malware threats. The first joint alert was sent earlier this year, in February, when the three agencies warned about six new malware strains developed by North Korea’s state-sponsored hackers.
    Taidoor — new Chinese remote access trojan
    Their most recent joint alert, however, warns about new Chinese malware.
    According to the three agencies, Taidoor has versions for 32- and 64-bit systems and is usually installed on a victim’s systems as a service dynamic link library (DLL).

    This DLL contains two other files.
    “The first file is a loader, which is started as a service. The loader decrypts the second file, and executes it in memory, which is the main Remote Access Trojan (RAT),” the alert reads.
    The Taidoor RAT is then used to allow Chinese hackers to access infected systems and exfiltrate data or deploy other malware — the usual things for which remote access trojans are typically employed.
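    The alert doesn’t reproduce the loader’s decryption routine, so the sketch below shows the generic two-stage pattern the way an analyst might re-implement it when unpacking a sample: decrypt the embedded second-stage blob entirely in memory, where the real malware would then map and execute it without ever writing it to disk. The RC4 cipher, key, and payload bytes here are assumptions for illustration, not Taidoor’s documented internals.

```python
def rc4(key: bytes, data: bytes) -> bytes:
    """Plain RC4, a stream cipher commonly seen in malware loaders."""
    S = list(range(256))
    j = 0
    for i in range(256):                      # key scheduling
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out = bytearray()
    i = j = 0
    for byte in data:                         # keystream generation
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

# Round trip with an in-memory blob standing in for the encrypted second
# stage shipped alongside the service DLL (hypothetical key and data).
stage2 = b"MZ\x90\x00" + b"...imagine the RAT's PE image here..."
on_disk = rc4(b"hypothetical-key", stage2)     # what the dropper would store
in_memory = rc4(b"hypothetical-key", on_disk)  # the loader's in-memory decrypt
assert in_memory[:2] == b"MZ"                  # a PE image, never written out
```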
    The FBI says Taidoor is normally deployed together with proxy servers to hide the true point of origin of the malware’s operator.
    Taidoor has been used in the wild since 2008
    While the joint alert introduces the cyber-security world to a new threat, in a tweet earlier today, US Cyber Command said the malware has been around and silently deployed on victim networks for at least 12 years, since 2008.

    The three agencies today put out a joint Malware Analysis Report (MAR) that contains recommended mitigation techniques and suggested response actions for organizations that want to improve detection, prevent infections, or have already been infected and need to remove the malware from their systems.
    US Cyber Command has also uploaded four samples of the Taidoor malware on the VirusTotal portal [1, 2, 3, 4], from where cyber-security firms and independent malware analysts can download the files for further analysis and hunt for additional clues.
    After the joint alert went out, Florian Roth, a malware analyst with Nextron Systems, said he had previously detected Taidoor samples, some dating as far back as March 2019, under the name Taurus RAT.