More stories

  • CrowdStrike acquires Humio for $400 million

    Cyber-security powerhouse CrowdStrike announced today it acquired log management firm Humio for $400 million in a deal expected to close at the end of Q1 2021.

    The deal was confirmed in statements posted by both companies on their websites.
    Humio, a London-based startup that launched in 2016, provides products to simplify streaming, aggregating, and managing logs collected from large cloud-based enterprise networks.
    The company previously raised $32 million across two funding rounds and lists customers such as Bloomberg, HPE, Lunar, M1 Finance, Michigan State University, and SpareBank 1.
    CrowdStrike said it plans to integrate Humio’s expertise in log aggregation to upgrade its eXtended Detection and Response (XDR) offering.
    XDR products are a new breed of security tools, considered an upgrade to classic Endpoint Detection and Response (EDR) solutions. They are deployed in companies whose internal networks also include many server- and cloud-based products that classic EDR solutions aren’t typically equipped to monitor.
    “We conducted a thorough market review of existing solutions and were amazed by Humio’s mature technology architecture and proven ability to deliver at scale,” said George Kurtz, co-founder and chief executive officer of CrowdStrike.

    “The combination of real-time analytics and smart filtering built into CrowdStrike’s proprietary Threat Graph and Humio’s blazing-fast log management and index-free data ingestion dramatically accelerates our XDR capabilities beyond anything the market has seen to date.”
    A CrowdStrike spokesperson did not respond to a request for comment on whether Humio will continue to offer its log management services as a standalone product after the acquisition.

  • Weathering the storm: What public utilities can learn from cloud computing

    We think of cloud services as a creation of the modern digital world, but one of the first cloud services was installed almost 140 years ago, on Pearl Street in Manhattan, just south of Fulton Street.

    On September 4, 1882, the Pearl Street Station began providing light for 400 lamps and 82 customers. Like many cloud services, Pearl Street Station grew. Within two years, it was providing power for 500 customers, powering more than 10,000 lamps.
    Also: Top cloud providers in 2021: AWS, Microsoft Azure, and Google Cloud, hybrid, SaaS players
    Rather than each individual customer building power-generating infrastructure, they all relied on this one centralized service. This service was the Edison Illuminating Company, which would eventually become Con Ed (Consolidated Edison), the $12B public utility that today provides power to most of New York City and Westchester County.
    A quick update
    This article was originally written in 2016 and was centered around discussion of Hurricane Matthew. In 2017, my wife and I left Florida permanently when our home was once again at ground zero, this time for Hurricane Irma. That storm left our home without power for more than a week.
    We now live in Oregon, which is not without its own natural disasters. Last year, fellow Oregonians suffered terrible losses due to the 2020 wildfires. My family was spared, but many of our neighbors were not. And that’s not even considering the whole 2020 pandemic thing, which we’re all depressingly familiar with.
    Just this weekend, we lost power for what seemed like a very long time. Oregon had a “once in a generation” ice storm, coating everything in 1.25 inches of ice. Most infrastructure here is rated for an ice load of about 0.25 inches, a fifth of what fell, and the added weight downed trees, power lines, and communications everywhere. Oregon’s public utilities worked miracles, but even with hundreds or thousands of technicians in the field, a state the size of Oregon takes a long time to cover.

    Texas is even larger, and its big chill made things even worse for residents, many of whom were completely unprepared for wintry weather. Texas, too, experienced a crippling power outage, with more than 4 million residents losing power. By contrast, here in Oregon, barely an eighth of that number were lighting candles.
    Also: Extreme weather forecast? Essential gear for when the power goes out
    The rest of this article will take you back to 2016 and is written from the perspective of someone who just lived through a hurricane. But the ideas presented about the power grid apply now, and apply to other natural disasters like wildfires and ice storms.
    The bottom line, of course, is we wish you the best. Hang in there.
    Shared characteristics
    Where I live, in Central Florida, we rely on another regulated public utility, Florida Power and Light (FPL). Both of these utilities, FPL and Con Ed, share a lot of the characteristics we’re used to in IT cloud services.
    We’ve obviously devoted a lot of virtual ink to cloud computing, so I’m not going to rehash all the elements here. But it’s important to realize that both cloud computing providers and public utility providers are the keepers of the physical infrastructure. In the case of cloud computing, that’s servers, storage, and network. In the case of public utility providers, that’s power generation, power storage, and power distribution.
    On one hand, that’s great for consumers of these centralized resources. If you want to start your own online application business, you no longer need to build out a physical infrastructure. Back in the late 1990s (before we had real cloud access), I did that. Each time I hit a server’s maximum capacity, scaling meant a major investment to jump to the next level. But with cloud, you just scale smoothly, with only incremental expense.
    Likewise, with power, I don’t have to build and maintain my own onsite generator, and figure out a way to manage and safeguard fuel deliveries. If I use a bit more, or a bit less, power each month, it’s simply reflected in my bill.
    Also: How to protect your IT power from deep-freeze disasters
    These services generally work reliably and consistently. I can check my Gmail, for example, without worrying about servers and infrastructure. I can brew a pot of coffee without worrying about whether or not the generator has been topped off with fuel.
    We, as a society, have come to rely on cloud and public utility services to such an extent that they actually define our civilization. When these services fail, our lifestyle stutters. For example, when access to Facebook or Gmail goes down, we suddenly feel disconnected from our friends and colleagues. Sometimes, we’re unable to complete work on time, or stay in touch for critical communications.
    When the power goes out, everything comes to a halt. There’s no air conditioning, no lights, no food preservation. Nothing.
    Blasted back to the Stone Age
    In most cases, failures are brief. They last a few hours, at most. But Hurricane Matthew blasted Central Florida back to the Stone Age. It wasn’t pretty.
    Those of us who live on the southeast coast of the United States knew Matthew (which was a Category 4 storm, based on the Saffir-Simpson Hurricane Wind Scale) was coming for almost a week. It was due to hit the Space Coast (where I live) on Friday morning.
    My wife and I spent Tuesday, Wednesday, and Thursday preparing the house. We disassembled the workshop, and turned it back into a reinforced garage to make room for one car. We moved our second car to the garage at my parents’ old house. We battened down all the hurricane shutters. We filled tubs and gallon jugs with water. We did our best to prepare.
    By midnight on Thursday, we’d mostly finished our preparations. Our nerves were on edge. To distract ourselves as the storm approached, we decided to binge-watch the latest season of Game of Thrones. Power dropped out for a few minutes during episode one, and another few minutes during episode two. We kept glancing at an app on my phone to watch the track of the hurricane’s eye.


    At 4:43am, about 20 minutes into episode three of Game of Thrones, the power went off for the final time. The storm had arrived in full force.
    We were terrified, because all the track maps showed the worst of the storm, including 140 mph winds, making landfall pretty much on top of us. Fortunately, that didn’t happen. The storm mostly missed us, with the eye about 20 miles offshore. Even so, we were hit by winds of 70 to 90 miles per hour. By about 7am, the worst of the storm had passed.
    It was still unsafe to go outside, or even open up the internal windows behind the storm shutters. So we had no air flow in the house until about 3pm Friday. We tried to sleep. When the wind finally died down, and the limited LTE I still had on my iPhone showed that the storm had tracked northward, we removed the first of our fortifications (the shutters over our front door). We stepped outside into the fresh air.
    We were very fortunate. Our house sustained no damage. A neighbor’s fence had blown down. A home down the street had some roof damage. No one was hurt.
    But we had no power. On Friday, with the winds from the tail end of the storm still surging, at least we had a breeze. We were thankful that the brutal heat of high summer was behind us. We opened our windows, and we lit some candles. The cross-breeze helped a little.
    Verizon’s LTE was barely functional, so getting information was nearly impossible. We had no idea when power would be restored. My iPhone was down to 50 percent. Normally, I’m very happy with the battery on the iPhone 6s Plus. But with no idea when I’d be able to recharge, I started to get nervous. We had some D-cell batteries for the fans, but, again, we didn’t know how long we’d be without power.
    All we could think about was how long we’d have to go without power, and how the hell we’d make it for however long it would be. When we finally regained some level of internet connectivity on the phone, the FPL status update site merely said that they were working hard to restore power. No time estimate was possible.
    On Saturday afternoon, power did come back on… for 20 seconds. The lights came on, and I almost teared up with relief. It was short-lived relief. All of a sudden there was a boom. The lights went back off. A nearby transformer had exploded, probably from debris across the connectors. It would be another day before we got power back again.
    All told, we were without power for three days. I’m ashamed to say that I didn’t take my forcible removal from civilization well. I was miserable, uncomfortable, desperate, and a little crazy. I couldn’t sleep. I didn’t eat much. There was nothing to do, nothing to work on, and — as a person who usually enjoys the illusion that I’m very much in control of my own destiny — nothing I could do to improve our situation.
    We simply had to wait.
    This is the problem with centralized services like cloud services and public utilities. The convenience, scalability, cost-savings, reduced maintenance, and general reliability come at the cost of self-determination. If those services fail, they take you down with them.
    This is why, with cloud computing services, we often talk about redundancy, and keeping local backups. We can also employ a similar strategy with public utilities, although the implementation is much more complex, much more costly, and much less reliable.
    I do not own a generator. I regretted that a lot over this very long weekend. However, while there are relatively inexpensive generators available, one that can power A/C for a longer duration is very large and incredibly expensive. Worse, there’s the question of how to safely store the fuel during the storm, and whether the actual generator will survive the pounding of the storm.
    A similar concern exists for solar power. It would be great to put up solar cells and not have to pay the monthly power bill at all. But in a hurricane-prone area, solar cells are likely to be torn off the roof before they can provide the emergency power they’re intended for. It’s kind of a Catch-22.
    The real answer is that the public utilities, the power companies, need to implement more robust power distribution mechanisms.
    Okay, let me stop here for a moment. Before I criticize FPL and its ilk, I want to give a huge shout-out to all the very hardworking repair teams who restored our power over the weekend. I spoke to some of the guys working the lines, and they told me they’d been flown in from out of state before the storm. They worked their way up the state, restoring power county-by-county, city-by-city. They had had almost no sleep for days, while having to work with live power lines in 90-degree heat. They’re champions and heroes.
    Demand a better solution
    That said, this is not how it should work. All our power lines (and broadband lines, for that matter) are exposed and hanging. This is unconscionable. The power services know that we’re prone to hurricanes, yet they allow these lines to remain open and exposed.
    Image: David Gewirtz
    Worse, they’re often poorly maintained during non-emergency times. The picture you see to the right is the transformer behind my house. Notice all the overgrowth? If a branch crosses over the connectors, that transformer will either spark or explode. It’s already exploded once. And yet, that’s how FPL distributes power in an area prone to wind storms.
    Can you imagine such irresponsibility among cloud computing providers? It’s as if, knowing what they do about the prevalence of hackers and infiltrators, Google just didn’t bother using firewalls, intrusion prevention, or even password security. It’s as if Google’s entire cybersecurity strategy was “eh, call us when you’re hacked, and we’ll fix it when we get to it.”
    No one would tolerate such a thing. But that’s because Google has competition, which keeps it agile and competitive. We’re stuck with our single power provider, FPL, who has no competition. As such, they can choose to prioritize repairs using a system that’s essentially waiting to see what breaks, rather than building in any preventive infrastructure.
    There is no way, yet, to prevent these terrible storms. But the damage due to the storm is often the result of a failure in infrastructure planning, maintenance, or investment, not due to acts of Mother Nature. Katrina was a bad storm, to be sure. But it was the failure to maintain the levees protecting New Orleans that was the cause of most of the damage.
    Here in Brevard County, we have a little over 300,000 power customers. Friday night, more than 200,000 of them were without power. By Saturday night, 100,000 were still without power. And even today, after Friday, Saturday, Sunday, and now Monday, some of our friends are still waiting to have their power restored.
    It’s not that FPL didn’t have repair escalation plans in place, or dedicated workers. They did. Those folks are fantastic. Their emergency response was well executed. But infrastructure as poorly maintained as the transformer in my back yard does not show an ongoing dedication to emergency prevention and disaster mitigation.
    We allow our public utilities to be monopolies because of the enormous investments required to deliver service to all customers. We regulate them because they’re monopolies. But we don’t do enough to demand that they harden and protect their infrastructure — and that’s because they don’t have any competition.
    If FPL had competition, the way Google has to compete against Microsoft and AWS, you can be sure we’d not only have spent this weekend in cool comfort, we’d probably spend a lot less each month on the services we do get.
    Perhaps as companies like Tesla (and even Apple) develop more robust battery technology, we can replace generators and solar cells with highly efficient in-structure batteries. Then, maybe we’ll be able to withstand five days without power from the grid, simply by tapping into our own private pool of battery power.
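    The five-day figure invites a quick back-of-envelope check. The sketch below uses purely illustrative assumptions (about 1.2 kW of average household draw and 13.5 kWh of usable capacity per battery module, roughly the class of today’s residential packs), not any vendor’s specs:

    ```python
    import math

    # Illustrative assumptions, not vendor specs:
    AVG_LOAD_KW = 1.2        # assumed average continuous household load (no A/C)
    MODULE_KWH = 13.5        # assumed usable capacity per battery module
    DAYS_OFF_GRID = 5        # the outage duration discussed above

    # Total energy needed, then round up to whole modules.
    energy_needed_kwh = AVG_LOAD_KW * 24 * DAYS_OFF_GRID   # 144 kWh
    modules_needed = math.ceil(energy_needed_kwh / MODULE_KWH)

    print(f"{energy_needed_kwh:.0f} kWh -> {modules_needed} modules")  # 144 kWh -> 11 modules
    ```

    Even with these modest assumptions, riding out five days takes a double-digit module count, which is why today’s in-structure batteries are typically sized for hours of backup, not days.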
    Or, perhaps, similar to the way the internet has disrupted other forms of infrastructure, we’ll start to see new and innovative ways we can produce our own energy. Perhaps we will be able to replace the service we get from the grid or, at the very least, have an alternate source available for when storms like Matthew hit an entire region.
    You can follow my day-to-day project updates on social media. Be sure to follow me on Twitter at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, and on YouTube at YouTube.com/DavidGewirtzTV.

  • Private firms can't protect us from digital attacks. Government must step in.

    Unless you’ve been living under a rock, you know that our digital infrastructure is under attack. ZDNet’s excellent security coverage has daily updates, usually with names I’ve never heard of before. As the ZDNet security tagline says, “Let’s face it. Software has holes. And hackers love to exploit them. New vulnerabilities appear almost daily.” 


    Sadly, that’s not hyperbole. “SolarWinds attack is not an outlier, but a moment of reckoning for security industry, says Microsoft exec” is a recent headline. 
    Vasu Jakkal, Microsoft’s corporate vice president of security, compliance and identity, said,

    “These attacks are going to continue to get more sophisticated. So we should expect that. This is not the first and not the last. This is not an outlier. This is going to be the norm. This is why what we do is more important than ever. I believe that SolarWinds is a moment of reckoning in the industry. This is not going to change and we have to do better as a defender community and we have to be unified in our responses.”

    But Ms. Jakkal is wrong. Private enterprise can’t handle serious, nation-state digital aggression. Nations have the resources and patience to pursue long-term strategies. Even the largest corporations lack the heft of a nation.
    Microsoft estimates that at least 1,000 engineers were needed to develop the SolarWinds hack. What company, what consortium of companies, could devote similar resources? 
    We don’t send defense contractors to fight wars. We send armed forces, backed by intelligence agencies and diplomacy – as well as the weapons defense contractors develop – to defeat the enemy.  
    Digital aggression is aggression
    “Scale changes everything” is a Silicon Valley truism. Back when the Internet’s predecessor, ARPANET, was five nodes, there was no money in digital crime.

    Now the Internet is five billion nodes. Deep into the transition to a digital civilization, crime is following the money. The thieves, gangs, and nation-state bad actors are stealing everything that isn’t locked down. Money, industrial secrets, intelligence assets, and personal data.
    There’s no end in sight since “software engineering” is an oxymoron. As Randall Munroe had a software writer say on xkcd.com: “. . . our entire field is bad at what we do, and if you rely on us, everyone will die.” We don’t know how to build a digital dike that doesn’t leak. We can only plug holes after the bad guys find them.
    Strategically, deterrence seems to be the only option for persuading nation states to back off. And only a strong nation can persuade another nation to chill, as the Cold War showed. 

    Today’s Internet needs a police force as well. The Internet is borderless, so a global force is needed to bring the criminals to heel.
    Despite massive private investment in digital security, the stakes keep rising and the hacks are getting worse. Private enterprise isn’t working. Private efforts to coordinate across organizations to record and analyze attacks are not enough.
    Can the US government take this on?
    Don’t reflexively dismiss the idea that government could handle this. Consider the US armed forces, the world’s most powerful fighting force: handsomely funded, well-trained, and constantly analyzing the threats America faces. That’s a blueprint for a US Digital Defense Force.
    Perhaps you recoil at the thought of higher taxes to pay for the DDF. But the choice isn’t between no taxes and higher taxes. Criminals and nation-states – in Russia, they may be one and the same – are already collecting massive taxes to fund their aggression. The choice is essentially between paying for digital order and security, or paying the criminals.

    The take
    America’s adversaries are actively probing our infrastructure for vulnerabilities. America’s superiority in conventional forces – for now anyway – makes a big shooting war unlikely. But crippling America’s government, power, water, energy, and medical systems all at once would help even the odds if someone wanted to take us down.
    The current model of digital security isn’t working, nor is there a plan to fix it. Sorry Microsoft, you – and the rest of the private firms – don’t have the chops to take on Russia, Iran, and North Korea. 
    We’ve been here before. London in the early 1800s was a city of 1.3 million people with no central police force. In 1829 Parliament established the Metropolitan Police to bring order and security. Private firms and wealthy individuals had guards, but that was not enough.
    Like 1820s London, we need a well-funded, well-trained force to stop digital muggers, gangs, and conspiracies, whether private or nation-sponsored. And our government must make it clear that countries that mess with our digital infrastructure will face painful consequences.
    Comments welcome. If you don’t like the government idea, what would you do instead?

  • SolarWinds attack hit 100 companies and took months of planning, says White House

    The White House team leading the investigation into the SolarWinds hack is worried that the breach of 100 US companies could be used to launch further intrusions, turning the initial compromise into a long-running headache.
    Anne Neuberger, deputy national security advisor for Cyber and Emerging Technology at the White House, said in a press briefing that nine government agencies were breached, and that many of the 100 private sector US organizations compromised were technology companies.


    “Many of the private sector compromises are technology companies including networks of companies whose products could be used to launch additional intrusions,” said Neuberger, a former director of cybersecurity at the National Security Agency.
    SEE: Network security policy (TechRepublic Premium)
    Attackers that the US says are of “likely Russian origin” had compromised the software build system of US software vendor SolarWinds and planted the Sunburst backdoor in its widely used Orion product for monitoring enterprise networks.   
    That 100 private sector firms were breached in the attack paints a different picture from what was known in December, when Microsoft and FireEye, both of which were breached, disclosed the attack.
    At that stage there were eight federal agencies confirmed to have been breached, including the US Treasury Department, the Department of Homeland Security, the US Department of State, the US Department of Energy, and the National Nuclear Security Administration.   

    However, back then Microsoft and FireEye were the two most significant private sector companies known to have been compromised by the tainted Orion update (the Orion updates weren’t the only way that companies were infiltrated during the campaign, which also involved the hackers gaining access to cloud applications).
    “When there is a compromise of this scope and scale both across government and across the US technology sector to lead to follow-on intrusions, it is more than a single incident of espionage. It’s fundamentally of concern for the ability of this to become disruptive,” Neuberger explained during questioning. 


    She stressed that the attackers were “advanced” because the “level of knowledge they showed about the technology and the way they compromised it truly was sophisticated.”
    “As a country we chose to have both privacy and security, so the intelligence community largely has no visibility into private sector networks. The hackers launched the hack from inside the United States, which further made it difficult for the US government to observe their activities,” she said.
    Microsoft president Brad Smith told 60 Minutes last week that it was “probably fair to say that this is the largest and most sophisticated attack the world has ever seen.”
    SEE: How do we stop cyber weapons from getting out of control?
    Smith previously said the attackers “used a technique that has put at risk the technology supply chain for the broader economy.”
    “We believe it took [the attackers] months to plan and execute this compromise. It’ll take us some time to uncover this, layer by layer,” said Neuberger.
    Neuberger said she expected the investigation, as well as identification and remediation of affected networks, would take months but not years to complete. 

  • Windows and Linux servers targeted by new WatchDog botnet for almost two years

    Due to the recent rise in cryptocurrency prices, online systems these days are under constant assault from crypto-mining botnets seeking to gain a foothold on unsecured systems and turn a profit for their criminal overlords.

    The latest of these threats is a botnet named WatchDog. Discovered by Unit 42, a security division of Palo Alto Networks, this crypto-mining botnet has been active since January 2019.
    Written in the Go programming language, researchers say they’ve seen WatchDog infect both Windows and Linux systems.
    The point of entry for their attacks has been outdated enterprise apps. According to an analysis of the WatchDog botnet operations published on Wednesday, Unit 42 said the botnet operators used 33 different exploits to target 32 vulnerabilities in software such as:
    Drupal
    Elasticsearch
    Apache Hadoop
    Redis
    Spring Data Commons
    SQL Server
    ThinkPHP
    Oracle WebLogic
    CCTV (it is currently unclear whether the target is a CCTV appliance, or whether “cctv” is a moniker for something else).
    Based on details the Unit 42 team was able to learn by analyzing the WatchDog malware binaries, researchers estimated the size of the botnet at around 500 to 1,000 infected systems.
    Profits were estimated at 209 Monero coins, currently valued at around $32,000. The real figure is believed to be much higher, since researchers were only able to analyze a few binaries, and the WatchDog gang is thought to have used many more Monero addresses to collect its illicit mining proceeds.
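    As a sanity check on those figures, the implied per-coin price and a rough per-host haul work out as follows (the 750-host midpoint is my own assumption, taken from the 500 to 1,000 estimate above):

    ```python
    # Back-of-envelope check on the figures reported above.
    XMR_EARNED = 209            # Monero coins attributed to WatchDog
    USD_VALUE = 32_000          # approximate valuation cited in the report
    HOSTS_MIDPOINT = 750        # assumed midpoint of the 500-1,000 estimate

    implied_xmr_price = USD_VALUE / XMR_EARNED     # roughly $153 per coin
    usd_per_host = USD_VALUE / HOSTS_MIDPOINT      # roughly $43 per infected host

    print(f"~${implied_xmr_price:.0f}/XMR, ~${usd_per_host:.0f} per host")
    ```

    The per-host figure is small, which is consistent with the researchers’ caveat that the true total, spread across unanalyzed binaries and wallets, is likely much higher.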
    No credentials theft observed
    The good news for server owners is that WatchDog is not yet on par with recent crypto-mining botnets like TeamTNT and Rocke, which in recent months have added capabilities that allow them to extract credentials for AWS and Docker systems from infected servers.

    However, the Unit42 team warns that such an update is only a few keystrokes away for the WatchDog attackers.
    On infected servers, WatchDog usually runs with admin privileges and could scan for and dump credentials without any difficulty, if its creators ever wished to.
    To protect their systems against this new threat, the advice for network defenders is the same that security experts have been giving for the past decade: keep systems and their apps up to date to prevent attacks that exploit old vulnerabilities.

  • Masslogger Trojan reinvented in quest to steal Outlook, Chrome credentials

    A variant of the Masslogger Trojan is being used in attacks designed to steal Microsoft Outlook, Google Chrome, and messenger service account credentials. 

    On Wednesday, cybersecurity researchers from Cisco Talos said the campaign is currently focused on victims in Turkey, Latvia, and Italy, expanding activities documented in late 2020 which targeted users in Spain, Bulgaria, Lithuania, Hungary, Estonia, and Romania. 
    It appears that targets are changing on close to a monthly basis.
    Masslogger was first spotted in the wild in April 2020, sold under licensing agreements in underground forums. However, the new variant is considered “notable” by Talos due to its use of a compiled HTML file format to trigger the infection chain.
    Threat actors begin their attacks in a typical way, which is through phishing emails. In this attack wave, phishing messages masquerade as business-related queries and contain .RAR attachments. 
    The attachments are split into multi-volume archives carrying the “r00” extension, a feature the researchers believe could be an effort to “bypass any programs that would block [an] email attachment based on its file extension.”
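    To illustrate why that naming scheme matters, here is a minimal sketch (the blocklist contents and filenames are hypothetical) of how a filter keyed only to well-known archive extensions misses RAR volume parts such as .r00:

    ```python
    import re

    # Hypothetical mail-gateway blocklist keyed to well-known extensions.
    NAIVE_BLOCKLIST = {".rar", ".zip", ".7z"}

    # A pattern that also catches RAR multi-volume parts (.r00, .r01, ...).
    MULTIVOLUME_RE = re.compile(r"\.(rar|r\d{2})$", re.IGNORECASE)

    def naive_blocked(filename: str) -> bool:
        return any(filename.lower().endswith(ext) for ext in NAIVE_BLOCKLIST)

    def robust_blocked(filename: str) -> bool:
        return bool(MULTIVOLUME_RE.search(filename))

    print(naive_blocked("invoice.r00"))   # False: slips past the naive filter
    print(robust_blocked("invoice.r00"))  # True: caught by the volume-aware pattern
    ```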
    A compiled HTML file (.CHM, the default format for legitimate Windows Help files) is then extracted, containing a further HTML file with embedded JavaScript code. The code at each stage is obfuscated, and the chain eventually deploys a PowerShell script containing the Masslogger loader.

    The Masslogger Trojan variant, designed for Windows machines and written in .NET, will then begin the exfiltration of user credentials and is not picky in its targets — both home users and businesses are at risk, although it appears the operators are focusing on the latter. 
    After being loaded into memory as a gzip-compressed buffer, the malware begins harvesting credentials. Microsoft Outlook, Google Chrome, Firefox, Edge, NordVPN, FileZilla, and Thunderbird are among the applications targeted by the Trojan.
    Stolen information can be sent through SMTP, FTP, or HTTP channels. Information uploaded to an exfiltration server includes the victim’s PC username, country ID, machine ID, and a timestamp, as well as records relating to configuration options and running processes. 
    “The observed campaign is almost entirely executed and present only in memory, which emphasizes the importance of conducting regular and background memory scans,” Talos says. “The only component present on disk is the attachment and the compiled HTML help file.”
    The researchers note that Masslogger is also able to act as a keylogger, but in this variant, it appears that the keylogging functionality has been disabled. 
    Cisco Talos believes that based on Indicators of Compromise (IoCs), the cyberattackers can also be linked to the past usage of AgentTesla, Formbook and AsyncRAT Trojans. 
    Have a tip? Get in touch securely via WhatsApp | Signal at +447713 025 499, or over at Keybase: charlie0

  • Labor calls for an Australian ransomware strategy

    Two Labor shadow ministry members have called for a national ransomware strategy, one they say is aimed at reducing the number of such attacks on Australian targets.
    In a report [PDF] prepared by Shadow Minister for Home Affairs Kristina Keneally and Shadow Assistant Minister for Communications Tim Watts, Labor declared that, with ransomware the biggest cyber threat facing Australia, it is time for a strategy to thwart it.
    “Australia needs a comprehensive National Ransomware Strategy designed to reduce the attractiveness of Australian targets in the eyes of cyber criminals,” the report said. 
    “None of these interventions are silver bullets. But the threat of ransomware isn’t going anywhere soon, and the government cannot leave it to Australian organisations to confront this challenge alone.”
    The report pointed to the Australian government’s underwhelming cybersecurity strategy that was published in August.
    “[It] rightly identifies that individual organisations have the primary responsibility for securing their own networks against any cyber threat, including ransomware. However, this is far from the end of the story,” the report said.
    It also said the government has a range of policy tools that only it can deploy to reduce the overall volume of ransomware attacks, such as regulation making, law enforcement, diplomacy, international agreement making, offensive cyber operations, and the imposition of sanctions.

    “While individual organisations will always be primarily responsible for securing their own networks, governments can intervene strategically to shape the overall threat environment in ways that make Australian targets less attractive,” it continued.
    One suggestion the report has made is for the Australian government to pursue an approach that seeks to alter the return on investment of ransomware groups that target Australian organisations.
    “To do this, it should pursue a range of initiatives designed to increase the costs of mounting campaigns against Australian organisations and to reduce the returns that are realised from such campaigns,” it said.
    “The Australian government has tools that it can use to impose costs on ransomware crews that target Australians, including law enforcement action, targeted international sanctions, and offensive cyber operations.”
    Additionally, the report said that while Australian law enforcement agencies have been part of some significant international cybercrime cooperation success stories, they need to be more aggressively and visibly involved in international operations against ransomware operators, pursuing those who target Australia.
    It said that where there is no prospect of law enforcement action against ransomware crews, Australia should instead seek to impose costs on crews that target Australian organisations by disrupting their activities through offensive cyber operations.
    Labor also believes there is more that Australia could be doing to develop cybercrime prevention programs, such as using existing aid programs to develop diversion programs and developing skilled migration pathways for “young, technically savvy people” in the greater Indo-Pacific region.
    Another way the shadow ministers believe the government could seek to reduce the returns of ransomware attacks on Australian organisations is by targeting cryptocurrency exchanges that enable ransomware payments.
    “Cryptocurrencies have been a crucial enabling technology for the growth of ransomware by providing a system for the payment of ransoms that is anonymous and outside existing global payments architecture,” they wrote. “The absence of a central organisation controlling cryptocurrencies has made the enforcement of existing ‘know your customer’ anti-money laundering laws far more challenging in this context.”
    The report concludes by stating that perhaps the simplest way to reduce the returns of ransomware attacks on Australian organisations is to lift the overall level of resilience of the IT networks of Australian organisations.
    Elsewhere, Major General Susan Coyle, head of information warfare at the Australian Department of Defence, used her appearance at IBM Think Australia and New Zealand on Thursday to stress the importance of patching systems and changing passwords frequently.
    “First and foremost, we’ve got to accept that there is a risk, thinking that there isn’t a risk makes us more complacent,” she said.


    Defence lists cyber mitigation as key factor for building ethical AI

    The Australian Department of Defence has released a new report on its findings for how to reduce the ethical risk of artificial intelligence projects, noting that cyber mitigation will be key to maintaining the trust and integrity of autonomous systems.
    The report was drafted following concerns from Defence that failure to adopt emerging technologies in a timely manner could result in military disadvantage, while premature adoption without sufficient research and analysis could result in inadvertent harms.
    “Significant work is required to ensure that introducing the technology does not result in adverse outcomes,” Defence said in the report [PDF].
    The report is the culmination of a workshop held two years ago, which saw organisations, including Defence, other Australian government agencies, the Trusted Autonomous Systems Defence Cooperative Research Centre, universities, and companies from the defence industry come together to explore how to best develop ethical AI in a defence context.
    In the report, participants have jointly created five key considerations — trust, responsibility, governance, law, traceability — that they believe are essential during the development of any ethical AI project.
    When explaining these five considerations, workshop participants said all AI defence projects needed to have the ability to defend themselves from cyber attacks due to the growth of cyber capabilities globally.
    “Systems must be resilient or able to defend themselves from attack, including protecting their communications feeds,” the report said.

    “The ability to take control of systems has been demonstrated in commercial vehicles, including ones that still require drivers but have an ‘internet of things’ connection. In a worst-case scenario, systems could be re-tasked to operate on behalf of opposing forces.”
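    One concrete way to "protect communications feeds" in the sense the report describes is to authenticate every message so that a remote party cannot inject or re-task commands. A minimal sketch using Python's standard `hmac` module follows; the key name and message format are hypothetical, and key management, replay protection, and encryption are deliberately out of scope.

```python
import hmac
import hashlib

SHARED_KEY = b"field-provisioned-secret"  # placeholder for a provisioned key

def sign(message: bytes) -> bytes:
    # Compute an HMAC-SHA256 tag over the message
    return hmac.new(SHARED_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    # Constant-time comparison to avoid timing side channels
    return hmac.compare_digest(sign(message), tag)

msg = b"telemetry:heading=270;speed=12"
tag = sign(msg)
print(verify(msg, tag))          # authentic message is accepted
print(verify(b"tampered", tag))  # altered message is rejected
```

The design choice here is that an attacker who can see or modify the feed, but does not hold the shared key, cannot forge a valid tag, which addresses the worst-case re-tasking scenario the report raises.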
    Workshop participants added there is a risk that a lack of investment in sovereign AI could impact Australia’s ability to achieve sovereign decision superiority.
    As such, the participants recommended increasing early AI education for military personnel to improve Defence's ability to act responsibly when working with AI.
    “Without early AI education to military personnel, they will likely fail to manage, lead, or interface with AI that they cannot understand and therefore, cannot trust,” the report said. “Proactive ethical and legal frameworks may help to ensure fair accountability for humans within AI systems, ensuring operators or individuals are not disproportionately penalised for system-wide and tiered decision-making.”
    The report also endorsed investment into cybersecurity, intelligence, border security and ID management, investigative support and forensic science, and for AI systems to only be deployed after demonstrating effectiveness through experimentation, simulation, or limited live trials.
    In addition, the report recommended that defence AI projects prioritise integration with existing systems. It cited the example of automotive vehicle automation, which provides collision notifications, blind-spot monitoring, and other features that support human drivers' cognitive functions.
    The workshop members also created three tools that were designed to support AI project managers with managing ethical risks.
    The first two tools are an ethical AI defence checklist and ethical AI risk matrix, which can be found on the Department of Defence’s website.
    Meanwhile, the third tool is an ethical risk assessment for AI programs that require a more comprehensive legal and ethical program plan. Labelled as the Legal and Ethical Assurance Program Plan (LEAPP), the assessment requires AI project managers to describe how they will meet the Commonwealth’s legal and ethical assurance requirements.
    The LEAPP requires AI project managers to produce a document covering legal and ethical planning, progress and risk assessment, and input into Defence's internal planning, including weapons reviews. Once written, the assessment would be reviewed by Defence and industry stakeholders before it is considered for Defence contracts.
    As the findings and tools from the report are only recommendations, the report did not specify what AI defence projects fit within the scope of the LEAPP assessment.  
    Related Coverage