More stories

  •

    Every country must decide own definition of acceptable AI use

    Every country, including Singapore, will need to decide what it deems to be acceptable uses of artificial intelligence (AI), including whether the use of facial recognition technology in public spaces should be accepted or outlawed. Discussions should seek to balance market opportunities with ensuring ethical use of AI, so such guidelines are usable and easily adopted. Above all, governments should seek to drive public debate and gather feedback so AI regulations remain relevant for their local population, said Ieva Martinkenaite, head of analytics and AI for Telenor Research. The Norwegian telecommunications company applies AI and machine learning models to deliver more personalised customer experiences and targeted sales campaigns, achieve better operational efficiencies, and optimise its network resources. For instance, the technology helps identify customer usage patterns in different locations, and this data is tapped to reduce power to or switch off antennas where usage is low. This not only lowers energy consumption and, hence, power bills, but also enhances environmental sustainability, Martinkenaite said in an interview with ZDNet. The Telenor executive also chairs the AI task force at the GSMA-European Telecommunications Network Operators’ Association, which drafts AI regulation for the industry in Europe, transitioning ethics guidelines into legal requirements. She also provides input on the Norwegian government’s position on proposed EU regulatory acts.
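The usage-based antenna power-saving idea lends itself to a simple illustration. The sketch below is a toy model, not Telenor's implementation; the traffic figures and the cutoff threshold are invented for the example.

```python
# Toy model of usage-based antenna power management; all numbers are invented.
hourly_connections = {          # antenna id -> average connections per hour
    "cell-north": 410,
    "cell-park": 3,
    "cell-mall": 980,
}
LOW_USAGE_THRESHOLD = 10        # below this, the antenna is a power-off candidate

def plan_power_states(load: dict[str, int]) -> dict[str, str]:
    """Mark low-traffic antennas for power-down, keep the rest on."""
    return {antenna: ("off" if count < LOW_USAGE_THRESHOLD else "on")
            for antenna, count in load.items()}

print(plan_power_states(hourly_connections))
# {'cell-north': 'on', 'cell-park': 'off', 'cell-mall': 'on'}
```

In practice such a decision would also factor in time of day, coverage overlap with neighbouring cells, and hysteresis to avoid flapping, but the core pattern is a threshold over observed usage.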

    Asked what lessons she could offer Singapore, which last October released guidelines on the development of AI ethics, Martinkenaite pointed to the need for regulators to be practical and understand the business impact of legislation. Frameworks on AI ethics and governance might look good on paper, but there also should be efforts to ensure these were usable in terms of adoption, she said. This underscored the need for constant dialogue and feedback, as well as continuous improvement, so any regulations remained relevant. For one, such guidelines should be provided alongside AI strategies, including the types of business and operating models the country should pursue and highlights of industries that could best benefit from AI deployment.

    In this aspect, she said the EU and Singapore had identified strategic industries in which they believed the use of data and AI could scale. These sectors should also be globally competitive and be where the country’s largest investments go. Singapore in 2019 unveiled its national AI strategy to identify and allocate resources to key focus areas, as well as pave the way for the country, by 2030, to be a leader in developing and deploying “scalable, impactful AI solutions” in key verticals. These included manufacturing, finance, and government. In driving meaningful adoption of AI, nations should strive for “balance” between tapping market opportunities and ensuring ethical use of the technology.

    Noting that technology was constantly evolving, she said it also was not possible for regulations to always keep up. In drafting the region’s AI regulations, EU legislators also grappled with several challenges, including how laws governing the ethical use of AI could be introduced without impacting the flow of talent and innovation, she explained. This proved a significant obstacle, as there were worries regulations could result in excessive red tape that companies would find difficult to comply with.

    There also were concerns about increasing dependence on IT infrastructures and machine learning frameworks developed by a handful of internet giants, including Amazon, Google, and Microsoft, as well as others in China, Martinkenaite said. She cited unease among EU policymakers over how the region could maintain its sovereignty and independence amidst this emerging landscape. Specifically, discussions revolved around the need to create key enabling technologies in AI within the region, such as data, compute power, storage, and machine learning architectures. With this focus on building greater AI technology independence, it was critical for EU governments to create incentives and drive investments locally in the ecosystem, she noted.
Rising concerns about the responsible use of AI also were driving much of the discussions in the region, as they were in other regions such as Asia, she added. While there still was uncertainty over what the best principles were, she stressed the need for nations to participate in the discussions and efforts to establish ethical principles of AI. This would bring the global industry together to agree on what these principles should be and to adhere to them.

United Nations human rights chief Michelle Bachelet recently called for the use of AI to be outlawed where it breached international human rights law. She underscored the urgency of assessing and addressing the risks AI could bring to human rights, noting that stricter legislation on its use should be implemented where it posed higher risks to human rights. Bachelet said: “AI can be a force for good, helping societies overcome some of the great challenges of our times. But AI technologies can have negative, even catastrophic, effects if they are used without sufficient regard to how they affect people’s human rights.”

The UN report urged governments to take stronger action to keep algorithms under control. Specifically, it recommended a moratorium on the use of biometric technologies, including facial recognition, in public spaces until, at the least, authorities were able to demonstrate there were no significant issues with accuracy or discriminatory impacts. These AI systems, which increasingly were used to identify people in real time and from a distance, potentially enabling unlimited tracking of individuals, also should comply with privacy and data protection standards, the report noted. It added that more human rights guidance on the use of biometrics was “urgently needed”.
Finding balance between ethics, business of AI

Martinkenaite noted that governments and regulators worldwide would have to determine what ethical use of AI meant for their country, and how to verify that its application was non-discriminatory. This would be pertinent especially in discussions on the risks AI could introduce in certain areas, such as facial recognition. While any technology in itself was not necessarily bad, its use could be deemed to be so, she said. AI, for instance, could be used to benefit society by detecting criminals or preventing accidents and crimes. There were, however, challenges in such usage amidst evidence of discriminatory results, including against certain races, economic classes, and genders. These could pose high security or political risks. Martinkenaite noted that every country and government then needed to decide what were acceptable and preferred ways for AI to be applied to its citizens.
These included questions on whether the use of AI-powered biometric recognition technology on videos and images of people’s faces for law enforcement purposes should be accepted or outlawed. She pointed to the ongoing debate in the EU, for example, on whether the use of AI-powered facial recognition technology in public places should be completely banned or allowed only with exceptions, such as in preventing or fighting crime. The opinions of citizens also should be weighed on such issues, she said, adding that there were no wrong or right answers here. These were simply decisions countries would have to make for themselves, she said, including multi-ethnic nations such as Singapore. “It’s a dialogue every country needs to have,” she said.

Martinkenaite noted, though, that until veracity issues related to the analysis of varying skin colours and facial features were properly resolved, such AI technology should not be deployed without human intervention, proper governance, or quality assurance in place. She urged continual investment in machine learning research and skillsets, so the technology could improve and become more robust. She noted that adopting an ethical AI strategy also could present opportunities for businesses, since consumers would want to purchase products and services that were safe and secure, and from organisations that took adequate care of their personal data. Companies that understood such needs, and invested in the necessary talent and resources to build a sustainable AI environment, would differentiate themselves in the market, she added.

A FICO report released in May revealed that nearly 70% of 100 AI-focused leaders in the financial services industry could not explain how specific AI model decisions or predictions were made. Some 78% said they were “poorly equipped to ensure the ethical implications of using new AI systems”, while 35% said their organisation made efforts to use AI in a transparent and accountable way.
Almost 80% said they faced difficulty getting their fellow senior executives to consider or prioritise ethical AI usage practices, and 65% said their organisation had “ineffective” processes in place to ensure AI projects complied with any regulations.

  •

    US and EU to cooperate on tech standards, supply chain security and tech development

    The United States and the European Union have started work on coordinating approaches across various technology areas, including AI and semiconductors, and tackling non-market policies that result in the misuse of technology. The plan was created on Wednesday after US and EU representatives, including US President Joe Biden and European Commission Vice Presidents Valdis Dombrovskis and Margrethe Vestager, met for the first time as part of the new US-EU Trade and Technology Council (TTC). The council launched in June as part of efforts to ensure sensitive technologies are not misused and cyber attacks can be prevented. At the time, the council agreed to create 10 working groups focused on addressing various technological and trade issues.

    “Future conflicts will be fought very differently. The fight over tech will be the new battleground of geopolitics. Security also means that we need to keep an eye on what we export and who is investing in our economies. And what they are investing in. Here, our aim is to strive for convergent export control approaches on sensitive dual-use technologies,” Dombrovskis said prior to the inaugural TTC meeting.

    After the meeting, the EU and US said in a joint statement that the council would look to address the misuse of technology and protect societies from information manipulation and interference. In the joint statement, the council also provided more details on what the 10 working groups will do.
Among the working groups are ones that will focus on developing technology standards, advancing supply chain security, developing financing for secure and resilient digital connectivity in third countries, data governance, combating arbitrary or unlawful surveillance, export controls, investment screening with a focus on sensitive technologies and related sensitive data, and promoting access to digital tools for small and medium-sized businesses. While China was not mentioned as being part of the council’s meeting agenda, one of the working groups created by the TTC will specifically focus on addressing challenges from non-market economic policies and practices that distort trade. The council listed examples of these non-market practices as including forced technology transfer, state-sponsored theft of intellectual property, market-distorting industrial subsidies, and discriminatory treatment of foreign companies.

    “We intend to cooperate on the development and deployment of new technologies in ways that reinforce our shared democratic values, including respect for universal human rights, advance our respective efforts to address the climate change crisis, and encourage compatible standards and regulations,” the EU and US said in the joint statement.

    Geopolitical movements, specifically around trade and technology, have been on the rise. The Quad earlier this week announced various non-military technology initiatives aimed at establishing global cooperation on critical and emerging technologies, such as AI, 5G, and semiconductors. Australia, the US, and the UK also recently established the AUKUS security pact, which is aimed at addressing defence and security concerns posed by China within the Indo-Pacific region through defence-related science and technological means. AUKUS’ first initiative is helping Australia acquire nuclear-powered submarines. Like the TTC, both the Quad and AUKUS took indirect swipes at China when announcing their respective sets of new initiatives.

    Meanwhile, China has formally applied to join the Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP), one of the world’s largest trade pacts. Taiwan, which has similarly applied to join the CPTPP, has accused China of only making the application to block Taiwan from entering international trade blocs.

  •

    Researchers discover bypass 'bug' in iPhone Apple Pay, Visa to make contactless payments

    UK academics have uncovered mobile security issues in Visa and Apple payment mechanisms that could result in fraudulent contactless payments.

    On Thursday, academics from the UK’s University of Birmingham and University of Surrey revealed the technique, in which attackers can bypass an Apple iPhone’s lock screen to access payment services and make contactless transactions. A paper on the research, “Practical EMV Relay Protection” (.PDF), is due to be published at the 2022 IEEE Symposium on Security and Privacy and was authored by Andreea-Ina Radu, Tom Chothia, Christopher J.P. Newton, Ioana Boureanu, and Liqun Chen.

    According to the paper, the ‘vulnerability’ occurs when Visa cards are set up in Express Transit mode in an iPhone’s wallet feature. Express mode has been designed with commuters in mind, who may want to quickly tap and pay at a turnstile to access rail, for example, rather than hold up a line due to the need to go through further identity authentication. The researchers say that the issue, which only applies to Apple Pay and Visa, is caused by the use of a unique code — nicknamed “magic bytes” — that is broadcast by transit gates and turnstiles to unlock Apple Pay. By using standard radio equipment, they were able to perform a relay attack, “fooling an iPhone into thinking it was talking to a transit gate,” according to the team.

    An experiment was conducted using an iPhone with a Visa transit card set up, a Proxmark — to act as a reader emulator — an NFC-enabled Android phone, which acted as a card emulator, and a payment terminal, the overall aim being to make a payment from a locked device to an EMV (smart payment) reader.

    If an intended victim’s iPhone is in close proximity, whether held by someone or stolen, the attack can be triggered by capturing and then broadcasting the “magic bytes” and modifying a set of other variables, as the paper explains: “While relaying the EMV messages, the Terminal Transaction Qualifiers (TTQ), sent by the EMV terminal, need to be modified such that the bits (flags) for Offline Data Authentication (ODA) for Online Authorizations supported and EMV mode supported are set. Offline data authentication for online transactions is a feature used in special-purpose readers, such as transit system entry gates, where EMV readers may have intermittent connectivity and online processing of a transaction cannot always take place. These modifications are sufficient to allow relaying a transaction to a non-transport EMV reader, if the transaction is under the contactless limit.”

    The experiment was performed with an iPhone 7 and an iPhone 12. Transactions over the contactless limit may also potentially be modified, but this requires additional value changes.
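The TTQ modification the paper describes can be sketched in a few lines. This is an illustrative reconstruction, not the researchers' code: TTQ is treated here as the 4-byte field sent by the terminal, and the two flag positions are assumptions made for the example rather than values taken from the EMV specification.

```python
# Illustrative sketch of the relay's TTQ tweak; bit positions are assumed
# for the example, not copied from the EMV contactless specification.
EMV_MODE_SUPPORTED = 0b00100000         # assumed flag in TTQ byte 1
ODA_ONLINE_AUTH_SUPPORTED = 0b00000001  # assumed flag in TTQ byte 1

def set_relay_flags(ttq: bytes) -> bytes:
    """Return a copy of the 4-byte TTQ with both required flags set."""
    modified = bytearray(ttq)
    modified[0] |= EMV_MODE_SUPPORTED | ODA_ONLINE_AUTH_SUPPORTED
    return bytes(modified)

terminal_ttq = bytes.fromhex("06000000")    # example TTQ from a shop terminal
print(set_relay_flags(terminal_ttq).hex())  # 27000000
```

The relay leaves every other field of the exchange untouched, which is why the attack works against ordinary shop terminals: only these two flags need to change while the messages pass through.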

    The experiment is an interesting one, although in the real world, this attack technique may not be feasible on a wider scale. It should also be noted that authorization protocols are only one layer of payment protection, and financial institutions often implement additional systems to detect suspicious transactions and mobile fraud. The overall fraud level on Visa’s global network is recorded as below 0.1%.

    Speaking to ZDNet, the researchers said that Apple was first contacted on October 23, 2020. The team then reached out to Visa in January, followed by a video call in February, and then a report was submitted to Visa’s vulnerability reporting platform on May 10, 2021. The academics say that while the issue was acknowledged by both parties, who have been spoken to “extensively,” it remains unfixed.

    “Our work shows a clear example of a feature, meant to incrementally make life easier, backfiring and negatively impacting security, with potentially serious financial consequences for users,” Radu commented. “Our discussions with Apple and Visa revealed that when two industry parties each have partial blame, neither are willing to accept responsibility and implement a fix, leaving users vulnerable indefinitely.”

    In a statement, Visa told us: “Visa cards connected to Apple Pay Express Transit are secure and cardholders should continue to use them with confidence. Variations of contactless fraud schemes have been studied in laboratory settings for more than a decade and have proven to be impractical to execute at scale in the real world. Visa takes all security threats very seriously, and we work tirelessly to strengthen payment security across the ecosystem.”

    The research was conducted as part of the TimeTrust trusted computing project and was funded by the UK National Cyber Security Centre (NCSC).

    Update 7.43 BST: Apple told ZDNet: “We take any threat to users’ security very seriously.
This is a concern with a Visa system, but Visa does not believe this kind of fraud is likely to take place in the real world, given the multiple layers of security in place. In the unlikely event that an unauthorized payment does occur, Visa has made it clear that their cardholders are protected by Visa’s zero liability policy.” Separately, DinoSec has compiled a log of lock screen bypass issues impacting Apple iOS since 2011.

  •

    96% of third-party container applications deployed in cloud infrastructure contain known vulnerabilities: Unit 42

    A new report from Palo Alto Networks’ Unit 42 outlined the ways the supply chain has become an emerging cloud security threat. Unit 42 conducted a red team exercise with a large SaaS provider that is a Palo Alto Networks customer and, within three days, the team was able to discover critical software development flaws that could have exposed the organization to an attack similar to SolarWinds and Kaseya. Unit 42 found that 63% of third-party code used in building cloud infrastructure contained insecure configurations. If an attacker compromises third-party developers, it is possible to infiltrate thousands of organizations’ cloud infrastructures, according to the report.

    The organization analyzed data from a variety of public data sources around the world in order to draw conclusions about the growing threats organizations face today in their software supply chains. It found that 96% of third-party container applications deployed in cloud infrastructure contain known vulnerabilities. In the report, Unit 42 researchers discovered that even for a customer with what most would consider a “mature” cloud security posture, there were several critical misconfigurations and vulnerabilities that allowed the Unit 42 team to take over the customer’s cloud infrastructure in a matter of days.

    “In most supply chain attacks, an attacker compromises a vendor and inserts malicious code in software used by customers. Cloud infrastructure can fall prey to a similar approach in which unvetted third-party code could introduce security flaws and give attackers access to sensitive data in the cloud environment. Additionally, unless organizations verify sources, third-party code can come from anyone, including an Advanced Persistent Threat,” Unit 42 wrote.

    “Teams continue to neglect DevOps security, due in part to lack of attention to supply chain threats. Cloud native applications have a long chain of dependencies, and those dependencies have dependencies of their own. DevOps and security teams need to gain visibility into the bill of materials in every cloud workload in order to evaluate risk at every stage of the dependency chain and establish guardrails.”
    Unit 42
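The point about chained dependencies can be made concrete with a small sketch. Everything below is invented for illustration: the package names, versions, and the vulnerable-version set stand in for a real image manifest and CVE feed. It shows why a bill of materials has to be computed transitively rather than from direct dependencies alone.

```python
# Hypothetical dependency data for a container image; all entries invented.
DEPS = {                      # package -> direct dependencies
    "webapp":  ["nginx", "openssl"],
    "nginx":   ["openssl", "zlib"],
    "openssl": [],
    "zlib":    [],
}
VERSIONS = {"webapp": "1.0", "nginx": "1.21.3",
            "openssl": "1.1.1k", "zlib": "1.2.11"}
KNOWN_VULNERABLE = {("openssl", "1.1.1k")}   # stand-in for a CVE feed

def bill_of_materials(root: str) -> set[str]:
    """Collect the full transitive dependency set of `root`."""
    seen: set[str] = set()
    stack = [root]
    while stack:
        pkg = stack.pop()
        if pkg not in seen:
            seen.add(pkg)
            stack.extend(DEPS.get(pkg, []))
    return seen

def flag_vulnerable(root: str) -> set[str]:
    """Packages anywhere in the chain with a known-vulnerable version."""
    return {p for p in bill_of_materials(root)
            if (p, VERSIONS.get(p)) in KNOWN_VULNERABLE}

print(flag_vulnerable("webapp"))   # {'openssl'}
```

Note that `openssl` is flagged even though `webapp` could equally have pulled it in only indirectly via `nginx`; a check limited to direct dependencies can miss exactly these cases.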
    BreachQuest CTO Jake Williams called the research “significant” and said it replaces anecdotes from incident responders with actual data on how common it is to find configuration issues and unpatched vulnerabilities in the public software supply chain. “At BreachQuest, we are used to working incidents where code and apps are built from Docker Hub images with pre-built security issues. While these are usually missing patches, it’s not uncommon to find security misconfigurations in these images either,” Williams said. “This is a problem the security community has dealt with since the dawn of the public cloud. Previous research found that the vast majority of publicly available Amazon Machine Images contained missing patches and/or configuration issues.”

    Other experts, like Valtix CTO Vishal Jain, noted that for more than a year now, spending on the cloud has vastly exceeded spending on data centers. Jain added that attacks typically go where the money is, so the big, open security front for enterprises is now the cloud. He suggested organizations focus on security at build time — scanning of IaC templates used in building cloud infrastructure — and security at run time. “It is not either/or, it needs to be both. More importantly, with dynamic infrastructure and app sprawl in the public cloud, there is a new set of security problems that need to be addressed in the cloud,” Jain said.

    Others said code was almost impossible to secure against fast-moving functional requirements and threat models. Mohit Tiwari, CEO at Symmetry Systems, told ZDNet it is more efficient to harden the infrastructure than chase application-level bugs in hundreds of millions of lines of code. Tiwari explained that first-party code is as likely as third-party code to have exploitable bugs — like authorization errors — and these bugs expose customer data that is managed by business logic.
“Blaming third-party code is a red herring — software like Linux, Postgres, Django/Rails, etc. comprises most of any application, so nearly 100% of applications have third-party code with known vulnerabilities,” Tiwari said. “Organizations in practice are instead moving to get infrastructure — cloud IAM, service meshes, etc. — in order, while relying on code analysis for targeted use cases (such as the trusted code base that provides security for the bulk of application code).”

  •

    Microsoft announces multi-year partnership with cyber insurance firm At-Bay

    Microsoft unveiled a new partnership with cyber insurance company At-Bay on Wednesday, announcing that it was seeking to help the insurance industry “create superior and data-driven cyber insurance products backed by Microsoft’s security solutions.”

    At-Bay claimed its insureds are seven times less likely to experience a ransomware incident than the industry average and noted that it provides insights to its customers about ways they can better protect themselves. Starting on October 1, companies in the US that are already Microsoft 365 customers will be eligible “for savings on their At-Bay cyber insurance policy premiums if they implement specific security controls and solutions, including multi-factor authentication and Microsoft Defender for Office 365.”

    Ann Johnson, Microsoft’s corporate vice president of security, compliance and identity business development, explained that for cyber insurance to play a meaningful role in overall risk management, buyers and sellers need the benefit of data and clear visibility into what is covered and the factors either minimizing or multiplying risk exposure. “Microsoft’s partnership with At-Bay brings important clarity and decision-making support to the market as organizations everywhere seek a comprehensive way to empower hybrid workforces with stronger, centralized visibility and control over cloud applications, boosting security and productivity,” Johnson said.

    The company said in a statement that At-Bay’s portfolio companies have had their cybersecurity strengthened by certain incentives it provides, including improved policy terms and pricing. Microsoft said it will work with At-Bay to find other ways customers can limit their risk exposure and proactively address vulnerabilities.

    Microsoft noted that it is working with other insurers to protect their customers and reduce the risk of loss, which has grown significantly over the last few years, causing steep increases in premiums. “Insurance carriers, agents, reinsurers and brokers are required to understand and assess cybersecurity threats for each of their insureds. With this complexity, insurers are seeking increased visibility into each company’s security environment and hygiene to better underwrite new policies,” Microsoft said in a statement. “To address this, Microsoft is teaming with key insurance partners to offer innovative data-driven cyber insurance products allowing customers to safely share security posture information through platforms like Microsoft 365 and Microsoft security solutions. All data and details about a covered company’s technology environment will be owned and controlled entirely by that customer, but customers can opt in to securely share them with providers to receive benefits like enhanced coverage and more competitive premiums.”

    At-Bay CEO Rotem Iram said insurance policies are effective tools that help define the cost of certain cybersecurity choices a company makes. “By offering better pricing to companies that implement stronger controls, we help them understand what matters in security and how best to reduce risk,” Iram said. “Working with Microsoft enables us to educate customers on the powerful security controls that exist within Microsoft 365 and reward them for adopting those controls.”

  •

    Report highlights cybersecurity dangers of Elastic Stack implementation mistakes

    A new report has identified significant vulnerabilities resulting from the mis-implementation of Elastic Stack, a group of open-source products that use APIs for critical data aggregation, search, and analytics capabilities. Researchers from cybersecurity firm Salt Security discovered issues that allowed them not only to launch attacks in which any user could extract sensitive customer and system data, but also to create a denial-of-service condition that would render the system unavailable. The researchers said they first discovered the vulnerability while protecting one of their customers, a large online business-to-consumer platform that provides API-based mobile applications and software as a service to millions of global users. Once they discovered the vulnerability, they checked other customers using Elastic Stack and found that almost every enterprise using it was affected, exposing users to injection attacks and more.

    Salt Security officials were quick to note that this is not a vulnerability in Elastic Stack itself but a problem with how it is being implemented. Salt Security technical evangelist Michael Isbitski said the vulnerability is not connected to any issue with Elastic’s software but is related to “a common risky implementation setup by users.” He noted that Elastic provides guidance on how to implement Elastic Stack instances securely, but the responsibility falls on practitioners to make use of that guidance. “The lack of awareness around potential misconfigurations, mis-implementations, and cluster exposures is largely a community issue that can be solved only through research and education,” Isbitski told ZDNet.

    “Elastic Stack is far from the only example of this type of implementation issue, but the company can help educate its users, just as Salt Security has been working with CISOs, security architects, and other application security practitioners to alert them to this and other API vulnerabilities and provide mitigation best practices.”

    The vulnerability would allow a threat actor to abuse the lack of authorization between front-end and back-end services to get a working user account with basic permission levels. From there, an attacker could exfiltrate sensitive user and system data by making “educated guesses about the schema of back-end data stores and query for data they aren’t authorized to access,” according to the report.

    Salt Security CEO Roey Eliyahu said that while Elastic Stack is widely used and secure, the same architectural design mistakes were seen in almost every environment that uses it. “The Elastic Stack API vulnerability can lead to the exposure of sensitive data that can be used to perpetuate serious fraud and abuse, creating substantial business risk,” Eliyahu said. Exploits that take advantage of this Elastic Stack vulnerability can create “a cascade of API threats,” according to Salt Security researchers, who also showed that the design implementation flaws worsen significantly when an attacker chains together multiple exploits. The problem is something security researchers have long highlighted with a number of similar products, like MongoDB and HDFS.

    “The specific queries submitted to the Elastic back-end services used to exploit this vulnerability are difficult to test for. This case shows why architecture matters for any API security solution you put in place — you need the ability to capture substantial context about API usage over time,” Isbitski said. “It also shows how critical it is to architect application environments correctly.
Every organization should evaluate the API integrations between its systems and applications, since they directly impact the company’s security posture.”

Researchers from the company said they were able to gain access to sensitive data like account numbers, transaction confirmation numbers, and other information whose exposure would violate GDPR regulations. The report details other actions that could be taken through the vulnerability, including the ability to perpetrate a variety of fraudulent activities, extort funds, steal identities, and take over accounts.

Jon Gaines, senior application security consultant at nVisium, said the Elastic Stack is “notorious for excessive data exposure”, adding that a few years ago — and by default — data was exposed publicly. The defaults have since changed, but he noted that this doesn’t mean older versions aren’t grandfathered in, or that minor configuration changes can’t lead to both of these newly unearthed vulnerabilities. “There are — and have been — multiple open-source tools that lead to the discovery of these vulnerabilities that I’ve used previously and continue to use. Unfortunately, the technical barrier of these vulnerabilities is extremely low. As a result, the risk of a bad guy discovering and exploiting these vulnerabilities is high,” Gaines said. “From the outside looking in, these vulnerabilities are common sense for security professionals, authorization, rate limitations, invalidation, parameterized queries, and so forth. However, as a data custodian, administrator, or even developer, oftentimes you aren’t taught to develop or maintain with security in mind.”

Vulcan Cyber CEO Yaniv Bar-Dayan added that the most common cloud vulnerability is caused by human error and misconfigurations, and APIs are not immune. “We’ve all seen exposed customer data and denial-of-service attacks do significant material damage to hacked targets.
Exploitation of this vulnerability is avoidable but must be remediated quickly,” Bar-Dayan said. “Other users of Elastic Stack should check their own implementations for this misconfiguration and not repeat the same mistake.”
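The fixes Gaines lists come down to one principle: untrusted front-end input must never reach the back-end data store unchecked. A minimal sketch, with hypothetical field names (`tenant_id`, `account_number`) not taken from the report, of a back-end step that scopes every incoming Elasticsearch-style query to the calling tenant before forwarding it:

```python
# Hypothetical mitigation sketch: route every search through a back-end
# proxy that wraps the untrusted client query in a server-enforced filter,
# so "educated guesses" about other tenants' data match nothing.

def build_scoped_query(user_query: dict, tenant_id: str) -> dict:
    """Wrap an untrusted query so it can only match the caller's own documents."""
    return {
        "bool": {
            "must": [user_query],  # whatever the client asked for
            "filter": [{"term": {"tenant_id": tenant_id}}],  # enforced server-side
        }
    }

# A client probing for another tenant's records stays confined to its own scope:
untrusted = {"match": {"account_number": "12345678"}}
scoped = build_scoped_query(untrusted, tenant_id="tenant-42")
print(scoped["bool"]["filter"])  # → [{'term': {'tenant_id': 'tenant-42'}}]
```

The same idea generalizes to the other mitigations named: the proxy, not the client, decides what a query is allowed to touch, how often it may run, and which fields it may see.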

  • in

    Dell adds new security features and automation to ProSupport Suite

    Dell has added new features to its ProSupport Suite for PCs that offer users new endpoint security offerings and enhance their line of commercial PCs. 

    The ProSupport Suite for PCs allows IT teams to customize and automate how they manage employee devices, which has become increasingly important as companies continue to invest heavily in remote work.

    Dell’s updates include new catalog management and deployment capabilities, while also giving IT managers the ability to update Dell BIOS, drivers, firmware, and applications automatically and remotely. IT teams can also customize how the updates are grouped. The new tools also provide IT teams with a centralized platform to see their entire Dell PC fleet and monitor each device’s health, application experience, and security scores. Dell will also offer AI-powered services support software that provides suggestions based on performance trends.

    The new ProSupport Suite for PCs capabilities will be available to customers by October 19, and Advanced Secure Component Verification is available now for US customers. Intel ME Verification and Dell Trusted Device SIEM Integration are also available to all customers in North America, Europe, and the Asia-Pacific-Japan region.

    Doug Schmitt, president of Services at Dell Technologies, said the company prioritised the updates because IT operations have become significantly more complicated, especially with the amount of data and opportunities at the edge. “Our approach to IT services is built on an AI-driven, adaptive, always-on foundation, taking today’s realities and future customer needs into consideration,” Schmitt said. 

    “At the end of the day, the new capabilities are about helping IT leaders see ahead and stay ahead while providing workforces around the world the ability to continue collaborating and innovating without disruption.”

    The company also unveiled the Dell Trusted Devices security portfolio to protect commercial PCs throughout the entire supply chain and device lifecycle. “This comprehensive suite of above- and below-the-operating-system (OS) security solutions leverages intelligence and helps empower businesses to prevent, detect and respond to threats with improved mean-time-to-detect (MTTD) and mean-time-to-resolution (MTTR) of issues,” Dell explained.

    Dell is adding Advanced Secure Component Verification for PCs, which helps customers make sure Dell PCs and key components arrive as they were ordered and built. Intel Management Engine Verification checks critical system firmware and looks for evidence of tampering targeting boot processes. IT teams will also gain visibility into critical below-the-OS security events through dashboards offered by the new Dell Trusted Device Security Information and Event Management (SIEM) Integration.

  • in

    Tomiris backdoor discovery linked to Sunshuttle, DarkHalo hackers

    Researchers have uncovered a new connection between Tomiris and the APT behind the SolarWinds breach, DarkHalo. 

    On Wednesday at the Kaspersky Security Analyst Summit (SAS), researchers said a new campaign revealed similarities between the newly discovered Tomiris backdoor and DarkHalo’s Sunshuttle, as well as “target overlaps” with Kazuar.

    The SolarWinds incident took place in 2020. FireEye and Microsoft revealed the breach, in which SolarWinds’s Orion network management software was compromised to impact as many as 18,000 customers in a software update-based supply-chain attack. While many thousands of clients may have received a malicious update, the threat actors appeared to cherry-pick the targets worthy of further compromise, including Microsoft, FireEye, and government agencies. Microsoft president Brad Smith dubbed the incident “the largest and most sophisticated attack the world has ever seen.”
    Eventually, the finger was pointed at the advanced persistent threat (APT) group DarkHalo/Nobelium as the party responsible, which managed to deploy the Sunburst/Solorigate backdoor, the Sunspot build server monitoring software, and the Teardrop/Raindrop dropper, designed to deploy a Cobalt Strike beacon, on target systems. The Russian, state-backed group’s campaign was tracked as UNC2452, which has also been linked to the Sunshuttle/GoldMax backdoor. 

    In June, after roughly six months of inactivity from DarkHalo, Kaspersky uncovered a DNS hijacking campaign against multiple government agencies in an unnamed CIS member state. “These hijacks were for the most part relatively brief and appear to have primarily targeted the mail servers of the affected organizations,” Kaspersky commented. “We do not know how the threat actor was able to achieve this, but we assume they somehow obtained credentials to the control panel of the registrar used by the victims.”

    By switching the legitimate DNS servers for compromised zones to attacker-controlled resolvers, the campaign operators redirected victims attempting to access an email service to a fake domain, which then prompted them to download a malicious software update. This update contained the Tomiris backdoor.

    “Further analysis showed that the main purpose of the backdoor was to establish a foothold in the attacked system and to download other malicious components,” Kaspersky added. “The latter, unfortunately, were not identified during the investigation.”

    Tomiris, however, did prove to be an interesting discovery. The backdoor is described as “suspiciously similar” to Sunshuttle: both backdoors are written in the Go (Golang) programming language, the same English-language spelling mistakes appear in the payloads’ code, and each uses similar encryption and obfuscation setups for configuration and network traffic management. In addition, both Tomiris and Sunshuttle use scheduled tasks for persistence as well as sleep-based delay mechanisms. The team believes the “general workflow of the two programs” hints at the same development practices.
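The hijack described here swapped a zone’s legitimate name servers for attacker-controlled ones, which suggests a cheap tripwire for defenders: periodically compare the zone’s observed name-server set against a pinned allowlist. A minimal sketch with hypothetical host names (the actual NS lookup is left to a DNS library), not Kaspersky’s methodology:

```python
# Illustrative detection sketch: flag a zone whenever any observed name
# server falls outside the pinned, known-good set. All host names below
# are hypothetical.

EXPECTED_NS = {"ns1.example-registrar.net", "ns2.example-registrar.net"}

def ns_hijack_suspected(observed_ns: set[str]) -> bool:
    """Return True if the observed NS set is not a subset of the allowlist."""
    return not observed_ns <= EXPECTED_NS

print(ns_hijack_suspected({"ns1.example-registrar.net"}))  # → False
print(ns_hijack_suspected({"evil-resolver.example.com"}))  # → True
```

Because the hijacks were “relatively brief,” a check like this only helps if it runs frequently and alerts immediately on the first deviation.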
However, the backdoor has little function beyond the capability to download additional malware, which suggests Tomiris is likely part of a wider operator toolkit.

It should also be noted that Tomiris has been found in environments also infected with the Kazuar backdoor, malware that Kaspersky has tentatively linked to Sunburst, while Palo Alto has connected Kazuar to the Turla APT. Cisco Talos has also recently uncovered a new, simple backdoor deployed by the Turla APT on victim systems.

Kaspersky acknowledges this may be a case of a ‘false flag’ designed to mislead researchers and send them down the wrong analysis or attribution paths. Pierre Delcher, senior security researcher at Kaspersky, commented: “None of these items, taken individually, is enough to link Tomiris and Sunshuttle with sufficient confidence. We freely admit that a number of these data points could be accidental, but still feel that taken together they at least suggest the possibility of common authorship or shared development practices.”