More stories

  • Australia's digital vaccination certificates for travel ready in two to three weeks

    Services Australia CEO Rebecca Skinner on Thursday said that Australia’s digital vaccination certificates for international travel would be ready in two to three weeks. Skinner, who appeared before Australia’s COVID-19 Select Committee, provided the update when explaining how the upcoming visible digital seal (VDS) would operate. The VDS is Australia’s answer for indicating a person’s COVID-19 vaccination status for international travel; it will link a person’s vaccination status with new digital vaccination certificates and border declarations.

    Skinner said her agency was working to make the VDS accessible to fully vaccinated people through the Medicare Express Plus app. To access the VDS through the app, users would need to provide additional passport details along with consent to share their immunisation history with the Australian Passport Office. The data would then be sent to the Passport Office to determine whether the user is eligible to receive a VDS.

    The approval process performed by the Passport Office will be automated, Services Australia Health Programmes general manager Jarrod Howard said, and would entail checking whether the person is fully vaccinated. Because the process is automated, Howard said people could re-apply for a VDS “in a matter of seconds” at the airport in the event there is an error with a VDS. Once approved, Howard said the VDS would be available on the Medicare Express Plus app and allow for verification on third-party apps.
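The automated eligibility check described above could be sketched roughly as below. This is an illustrative guess at the logic, not the Passport Office's actual implementation; the record fields and the two-dose rule are assumptions.

```python
# Hypothetical sketch of the automated VDS eligibility check: passport
# details supplied, consent given, and the person is fully vaccinated.
# Field names and the two-dose threshold are illustrative assumptions.
def eligible_for_vds(record: dict) -> bool:
    """Return True if the applicant can be issued a visible digital seal."""
    has_passport = bool(record.get("passport_number"))
    consented = record.get("consent_to_share", False)
    fully_vaccinated = len(record.get("doses", [])) >= 2
    return has_passport and consented and fully_vaccinated

applicant = {
    "passport_number": "PA1234567",
    "consent_to_share": True,
    "doses": ["2021-07-01", "2021-09-01"],
}
print(eligible_for_vds(applicant))  # True
```

Because every input is a simple record lookup, a check like this can run in seconds, which is consistent with Howard's point about instant re-application at the airport.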

    Providing a timeline, Skinner told the committee she expected the VDS to be ready in the next two to three weeks, or before the end of October at the latest. While noting the digital vaccination certificate for international travel was coming soon, she adamantly refused to call the VDS a vaccine passport, as an official passport is still required. Outgoing travellers from Australia will not, however, be allowed to travel abroad without the VDS or another authorised digital vaccination certificate, even if they have a passport.

    Australia’s Trade Minister Dan Tehan earlier this month said the VDS system had already been sent to all of Australia’s overseas embassies in order to begin engagement with overseas posts and countries regarding international travel. The Department of Foreign Affairs and Trade, meanwhile, has already released a verification app, the VDS-NC Checker, on Apple’s App Store, which the department hopes will be used at airports to check people onto flights. International travel for fully vaccinated people living in Australia is currently expected by Christmas, with Tehan confirming that the official date would be when 80% of the country is fully vaccinated.

    Digital vaccination certificate for state check-in apps to undergo trial

    On the domestic front, fully vaccinated Australians may soon be able to add digital vaccination certificates to state-based check-in apps, Skinner said.
    She said there would eventually be an additional feature on the Medicare Express Plus app that allows users to add their COVID-19 immunisation history to state-based check-in apps. The process for adding the digital vaccination certificate to state-based check-in apps will be similar to accessing the VDS, except users will not need to provide their passport details. Consent must first be provided for the data to be added to state-based apps, Skinner said. The consent will last for 12 months, after which users will need to provide consent again for the immunisation information to continue to appear on the state-based apps.

    Services Australia envisions this process occurring through a security token being passed to the relevant state authority once consent is provided. The security token will carry data showing a person’s COVID-19 immunisation history and other information, such as an individual health identifier. That data is stored in the Australian Immunisation Register (AIR) database, which is maintained by Services Australia on behalf of the Department of Health.

    Currently, those fully vaccinated can only add their digital vaccination certificate to Apple Wallet or Google Pay. Fully vaccinated people who are not eligible for Medicare, meanwhile, can call the Australian Immunisation Register for a hard copy, or use the Individual Healthcare Identifiers service through myGov for a digital version. Trials to implement the COVID-19 digital certificate on state-based apps will start in New South Wales next week.
    Of Australia’s states and territories, only New South Wales has officially signed up to trial the new feature so far. “Our approach has been particularly for high volume venues to reduce friction on both staff in those venues and also friction for customers to leverage the current check-in apps that all of the jurisdictions currently have,” Services Australia Deputy CEO of Transformation Projects Charles McHardie said.

    When asked why Services Australia was not introducing the digital vaccination certificate through a national app, like COVIDSafe, McHardie explained that this was because Australia’s public health orders are issued at a state level. He conceded, however, that incoming travellers could potentially be required to install up to eight different apps to adhere to Australia’s various state check-in protocols. Howard added that check-in apps from certain states and territories (the ACT, Northern Territory, Queensland, and Tasmania) are interoperable with each other because they use the same underlying technology.

    According to DTA acting CEO Peter Alexander, who also appeared before the committee, the bungled COVIDSafe app had cost AU$9.1 million as of last week. New South Wales and Victoria have been the only states to use information from the app. The AU$9.1 million figure is in line with the January update that the COVIDSafe app costs around AU$100,000 per month to run; at the end of January, total spend on the app was AU$6.7 million.

    At the time of writing, around 11 million people living in Australia are fully vaccinated. Of those, 6.3 million have downloaded a digital vaccination certificate.
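The consent-and-token flow described above, where a signed token carrying an individual health identifier and immunisation history is passed to a state authority and expires after 12 months, can be sketched as follows. This is a minimal illustration under assumed field names and an assumed HMAC signing scheme; it is not Services Australia's actual design.

```python
import hashlib
import hmac
import json
import time

# Illustrative sketch of the consent-token flow: field names, the shared
# secret, and the HMAC scheme are all assumptions, not Services
# Australia's actual implementation.
SHARED_SECRET = b"demo-secret-shared-with-state-authority"
CONSENT_TTL_SECONDS = 365 * 24 * 60 * 60  # consent lasts 12 months

def issue_consent_token(health_identifier: str, immunisation_history: list) -> dict:
    """Issue a signed token once the user consents to sharing their history."""
    payload = {
        "ihi": health_identifier,               # individual health identifier
        "immunisations": immunisation_history,  # COVID-19 immunisation history
        "expires_at": int(time.time()) + CONSENT_TTL_SECONDS,
    }
    body = json.dumps(payload, sort_keys=True).encode()
    signature = hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def verify_consent_token(token: dict) -> bool:
    """State check-in app checks the signature and the 12-month consent window."""
    body = json.dumps(token["payload"], sort_keys=True).encode()
    expected = hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["signature"]):
        return False
    return token["payload"]["expires_at"] > time.time()

token = issue_consent_token("8003608666701594", [{"vaccine": "Comirnaty", "dose": 2}])
print(verify_consent_token(token))  # True while consent is current
```

The expiry field is what forces users to re-consent after 12 months: a state app rejecting expired tokens achieves the renewal behaviour Skinner described without the state ever holding the AIR data directly.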

  • YouTube expands medical misinformation bans to include all anti-vaxxer content

    YouTube has said it will remove content containing misinformation or disinformation on approved vaccines, as that content poses a “serious risk of egregious harm”. “Specifically, content that falsely alleges that approved vaccines are dangerous and cause chronic health effects, claims that vaccines do not reduce transmission or contraction of disease, or contains misinformation on the substances contained in vaccines will be removed,” the platform said in a blog post. “This would include content that falsely says that approved vaccines cause autism, cancer or infertility, or that substances in vaccines can track those who receive them. Our policies not only cover specific routine immunizations like for measles or Hepatitis B, but also apply to general statements about vaccines.” Exceptions to the rules do exist: Videos that discuss vaccine policies, new trials, historical success, and personal testimonials will be allowed, provided other rules are not violated, or the channel is not deemed to promote vaccine hesitancy. “YouTube may allow content that violates the misinformation policies … if that content includes additional context in the video, audio, title, or description. This is not a free pass to promote misinformation,” YouTube said. “Additional context may include countervailing views from local health authorities or medical experts. We may also make exceptions if the purpose of the content is to condemn, dispute, or satirise misinformation that violates our policies.” If a channel violates the policy three times in 90 days, YouTube said it will remove the channel.
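The three-strikes rule YouTube describes amounts to counting violations inside a rolling 90-day window. A minimal sketch of that bookkeeping follows; the class and method names are illustrative, not YouTube's actual systems.

```python
from datetime import datetime, timedelta

# Minimal sketch of a "three strikes in 90 days" rule like the one YouTube
# describes. Names and structure are illustrative, not YouTube's API.
STRIKE_LIMIT = 3
WINDOW = timedelta(days=90)

class StrikeTracker:
    def __init__(self):
        self.strikes = {}  # channel id -> list of strike timestamps

    def record_strike(self, channel: str, when: datetime) -> bool:
        """Record a policy violation; return True if the channel should be removed."""
        history = self.strikes.setdefault(channel, [])
        history.append(when)
        # Only strikes inside the rolling 90-day window count toward removal.
        recent = [t for t in history if when - t <= WINDOW]
        self.strikes[channel] = recent
        return len(recent) >= STRIKE_LIMIT

tracker = StrikeTracker()
start = datetime(2021, 10, 1)
tracker.record_strike("channel-a", start)                       # strike 1
tracker.record_strike("channel-a", start + timedelta(days=30))  # strike 2
print(tracker.record_strike("channel-a", start + timedelta(days=60)))  # True
```

Note the window is rolling, so a strike that falls out of the most recent 90 days no longer counts, which matches the policy as stated.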

    The channel of one anti-vaccine non-profit, the Children’s Health Defense, chaired by Robert F. Kennedy Jr, was removed. Kennedy framed the channel’s removal as a free speech issue. Meanwhile, the BBC reported that Russia threatened to ban YouTube after two German-language RT channels were banned for COVID misinformation. When announcing its expanded policy, YouTube said it had removed over 130,000 videos for violating its COVID-19 vaccine policies since last year. In August, the video platform said it had removed over 1 million COVID-19 misinformation videos.

    Earlier this year, Twitter began automatically labelling tweets it regarded as containing misleading information about COVID-19 and its vaccines, and introduced its own strike system that includes temporary account locks and can lead to permanent suspension. While the system has led to the repeated suspension of misinformation peddlers such as US congresswoman Marjorie Taylor Greene, the automated system cannot handle sarcasm from users attempting humour on the topics of COVID-19 and 5G.

    In April, the Australian Department of Health published a page attempting to dispel any link between vaccines and internet connectivity. “COVID-19 vaccines do not — and cannot — connect you to the internet,” it stated. “Some people believe that hydrogels are needed for electronic implants, which can connect to the internet. The Pfizer mRNA vaccine does not use hydrogels as a component.”

  • Every country must decide own definition of acceptable AI use

    Every country, including Singapore, will need to decide what it deems to be acceptable uses of artificial intelligence (AI), including whether the use of facial recognition technology in public spaces should be accepted or outlawed. Discussions should seek to balance market opportunities with ensuring ethical use of AI, so such guidelines are usable and easily adopted. Above all, governments should seek to drive public debate and gather feedback so AI regulations remain relevant for their local population, said Ieva Martinkenaite, head of analytics and AI for Telenor Research.

    The Norwegian telecommunications company applies AI and machine learning models to deliver more personalised customer experiences and targeted sales campaigns, achieve better operational efficiencies, and optimise its network resources. For instance, the technology helps identify customer usage patterns in different locations, and this data is tapped to reduce or power off antennas where usage is low. This not only helps lower energy consumption and, hence, power bills, but also enhances environmental sustainability, Martinkenaite said in an interview with ZDNet.

    The Telenor executive also chairs the AI task force at the GSMA-European Telecommunications Network Operators’ Association, which drafts AI regulation for the industry in Europe, transitioning ethics guidelines into legal requirements. She also provides input on the Norwegian government’s position on proposed EU regulatory acts.


    Asked what lessons she could offer Singapore, which last October released guidelines on the development of AI ethics, Martinkenaite pointed to the need for regulators to be practical and understand the business impact of legislation. Frameworks on AI ethics and governance might look good on paper, but there also should be efforts to ensure they are usable and adoptable, she said. This underscores the need for constant dialogue, feedback, and continuous improvement, so any regulations remain relevant. For one, such guidelines should be provided alongside AI strategies, including the types of business and operating models the country should pursue, and highlights of the industries that could best benefit from AI deployment.

    In this aspect, she said the EU and Singapore had identified strategic industries in which they believed the use of data and AI could scale. These sectors also should be globally competitive and should receive the country’s largest investments. Singapore in 2019 unveiled its national AI strategy to identify and allocate resources to key focus areas, as well as pave the way for the country, by 2030, to be a leader in developing and deploying “scalable, impactful AI solutions” in key verticals. These included manufacturing, finance, and government. In driving meaningful adoption of AI, nations should strive for “balance” between tapping market opportunities and ensuring ethical use of the technology.

    Noting that technology was constantly evolving, she said it was not possible for regulations to always keep up. In drafting the region’s AI regulations, EU legislators grappled with several challenges, including how laws governing the ethical use of AI could be introduced without impeding the flow of talent and innovation, she explained. This proved a significant obstacle, as there were worries regulations could result in excessive red tape with which companies would find it difficult to comply.

    There also were concerns about increasing dependence on IT infrastructures and machine learning frameworks developed by a handful of internet giants, including Amazon, Google, and Microsoft, as well as others in China, Martinkenaite said. She cited unease among EU policymakers over how the region could maintain its sovereignty and independence amidst this emerging landscape. Specifically, discussions revolved around the need to create key enabling technologies for AI within the region, such as data, compute power, storage, and machine learning architectures. With this focus on building greater AI technology independence, it was critical for EU governments to create incentives and drive local investment in the ecosystem, she noted.
    Rising concerns about the responsible use of AI also were driving much of the discussion in the region, as they were in other regions such as Asia, she added. While there still was uncertainty over what the best principles were, she stressed the need for nations to participate in discussions and efforts to establish ethical principles of AI. This would bring the global industry together to agree on what these principles should be and to adhere to them.

    The United Nations’ human rights chief Michelle Bachelet recently called for the use of AI to be outlawed where it breached international human rights law. She underscored the urgency of assessing and addressing the risks AI could bring to human rights, noting that stricter legislation on its use should be implemented where it posed higher risks. Bachelet said: “AI can be a force for good, helping societies overcome some of the great challenges of our times. But AI technologies can have negative, even catastrophic, effects if they are used without sufficient regard to how they affect people’s human rights.”

    The UN report urged governments to take stronger action to keep algorithms under control. Specifically, it recommended a moratorium on the use of biometric technologies, including facial recognition, in public spaces until, at least, authorities could demonstrate there were no significant issues with accuracy or discriminatory impacts. These AI systems, which increasingly are used to identify people in real time and from a distance, potentially enabling unlimited tracking of individuals, also should comply with privacy and data protection standards, the report noted. It added that more human rights guidance on the use of biometrics was “urgently needed”.
    Finding balance between ethics, business of AI

    Martinkenaite noted that governments and regulators worldwide would have to determine what it meant to be ethical in their country’s use of AI, and how to ensure its application was non-discriminatory. This would be especially pertinent in discussions on the risk AI could introduce in certain areas, such as facial recognition. While any technology in itself was not necessarily bad, its use could be deemed to be so, she said. AI, for instance, could be used to benefit society by detecting criminals or preventing accidents and crimes. There were, however, challenges in such usage amidst evidence of discriminatory results, including against certain races, economic classes, and genders. These could pose high security or political risks. Every country and government, Martinkenaite noted, then needed to decide what were acceptable and preferred ways for AI to be applied by its citizens.
    These included questions on whether the use of AI-powered biometric recognition technology on videos and images of people’s faces for law enforcement purposes should be accepted or outlawed. She pointed to ongoing debate in the EU, for example, on whether the use of AI-powered facial recognition technology in public places should be completely banned or allowed only with exceptions, such as for preventing or fighting crime. The opinions of citizens also should be weighed on such issues, she said, adding that there were no right or wrong answers here. These were simply decisions countries would have to make for themselves, she said, including multi-ethnic nations such as Singapore. “It’s a dialogue every country needs to have,” she said.

    Martinkenaite noted, though, that until accuracy issues related to the analysis of varying skin colours and facial features were properly resolved, such AI technology should not be deployed without human intervention, proper governance, or quality assurance in place. She urged continual investment in machine learning research and skillsets, so the technology could become more robust. Adopting an ethical AI strategy also could present opportunities for businesses, she noted, since consumers would want to purchase products and services that are safe and secure, from organisations that take adequate care of their personal data. Companies that understood such needs and invested in the necessary talent and resources to build a sustainable AI environment would differentiate themselves in the market, she added.

    A FICO report released in May revealed that nearly 70% of 100 AI-focused leaders in the financial services industry could not explain how specific AI model decisions or predictions were made. Some 78% said they were “poorly equipped to ensure the ethical implications of using new AI systems”, while 35% said their organisation made efforts to use AI in a transparent and accountable way.
    Almost 80% said they faced difficulty getting their fellow senior executives to consider or prioritise ethical AI usage practices, and 65% said their organisation had “ineffective” processes in place to ensure AI projects complied with any regulations.

  • US and EU to cooperate on tech standards, supply chain security and tech development

    The United States and the European Union have started work on coordinating approaches across various technology areas, including AI and semiconductors, and on tackling non-market policies that result in the misuse of technology. The plan was created on Wednesday after US and EU representatives, including US President Joe Biden and European Commission Vice Presidents Valdis Dombrovskis and Margrethe Vestager, met for the first time as part of the new US-EU Trade and Technology Council (TTC).

    The TTC launched in June as part of efforts to ensure sensitive technologies are not misused and cyber attacks can be prevented. At the time, the council agreed to create 10 working groups focused on addressing various technological and trade issues. “Future conflicts will be fought very differently. The fight over tech will be the new battleground of geopolitics. Security also means that we need to keep an eye on what we export and who is investing in our economies. And what they are investing in. Here, our aim is to strive for convergent export control approaches on sensitive dual-use technologies,” Dombrovskis said prior to the inaugural TTC meeting.

    After the meeting, the EU and US said in a joint statement that the council would look to address the misuse of technology and protect societies from information manipulation and interference. The joint statement also provided more details on what the 10 working groups will do.
    Among the working groups are ones that will focus on developing technology standards, advancing supply chain security, developing financing for secure and resilient digital connectivity in third countries, data governance, combating arbitrary or unlawful surveillance, export controls, investment screening with a focus on sensitive technologies and related sensitive data, and promoting access to digital tools for small and medium-sized businesses.

    While China was not mentioned as being part of the council’s meeting agenda, one of the working groups created by the TTC will specifically focus on addressing challenges from non-market economic policies and practices that distort trade. The council listed examples of these non-market practices, including forced technology transfer, state-sponsored theft of intellectual property, market-distorting industrial subsidies, and discriminatory treatment of foreign companies.

    “We intend to cooperate on the development and deployment of new technologies in ways that reinforce our shared democratic values, including respect for universal human rights, advance our respective efforts to address the climate change crisis, and encourage compatible standards and regulations,” the EU and US said in the joint statement.

    Geopolitical movements, specifically around trade and technology, have been on the rise. The Quad earlier this week announced various non-military technology initiatives aimed at establishing global cooperation on critical and emerging technologies, such as AI, 5G, and semiconductors. Australia, the US, and the UK also recently established the AUKUS security pact, which is aimed at addressing defence and security concerns posed by China within the Indo-Pacific region through defence-related science and technological means. AUKUS’ first initiative is helping Australia acquire nuclear-powered submarines. Like the TTC, both the Quad and AUKUS took indirect swipes at China when announcing their respective sets of new initiatives.

    Meanwhile, China has formally applied to join the Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP), one of the world’s largest trade pacts. Taiwan, which has similarly applied to join the CPTPP, has accused China of making the application only to block Taiwan from entering international trade blocs.

  • Researchers discover bypass 'bug' in iPhone Apple Pay and Visa enabling contactless payments

    UK academics have uncovered mobile security issues in Visa and Apple payment mechanisms that could result in fraudulent contactless payments.

    On Thursday, academics from the UK’s University of Birmingham and University of Surrey revealed the technique, in which attackers could bypass an Apple iPhone’s lock screen to access payment services and make contactless transactions. A paper on the research, “Practical EMV Relay Protection” (PDF), is due to be published at the 2022 IEEE Symposium on Security and Privacy, and was authored by Andreea-Ina Radu, Tom Chothia, Christopher J.P. Newton, Ioana Boureanu, and Liqun Chen.

    According to the paper, the ‘vulnerability’ occurs when Visa cards are set up in Express Transit mode in an iPhone’s wallet feature. Express mode has been designed with commuters in mind, who may want to quickly tap and pay at a turnstile to access rail, for example, rather than hold up a line due to the need to go through further identity authentication. The researchers say that the issue, which only applies to Apple Pay and Visa, is caused by the use of a unique code, nicknamed “magic bytes”, that is broadcast by transit gates and turnstiles to unlock Apple Pay. By using standard radio equipment, they were able to perform a relay attack, “fooling an iPhone into thinking it was talking to a transit gate,” according to the team.

    An experiment was conducted using an iPhone with a Visa transit card set up, a Proxmark to act as a reader emulator, an NFC-enabled Android phone to act as a card emulator, and a payment terminal; the overall aim was to make a payment from a locked device to an EMV (smart payment) reader.

    If an intended victim’s iPhone is in close proximity, whether in the owner’s possession or stolen, the attack can be triggered by capturing and then broadcasting the “magic bytes” and modifying a set of other variables, as the paper explains:

    “While relaying the EMV messages, the Terminal Transaction Qualifiers (TTQ), sent by the EMV terminal, need to be modified such that the bits (flags) for Offline Data Authentication (ODA) for Online Authorizations supported and EMV mode supported are set. Offline data authentication for online transactions is a feature used in special-purpose readers, such as transit system entry gates, where EMV readers may have intermittent connectivity and online processing of a transaction cannot always take place. These modifications are sufficient to allow relaying a transaction to a non-transport EMV reader, if the transaction is under the contactless limit.”

    The attack has been demonstrated in a video published by the researchers. The experiment was performed with an iPhone 7 and an iPhone 12. Transactions over the contactless limit may also potentially be modified, but this requires additional value changes.
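The TTQ modification the paper describes comes down to setting two flag bits in the first TTQ byte while relaying messages. A rough sketch follows; the bit positions used here follow common EMV contactless kernel conventions (byte 1, bit 5 for "EMV mode supported" and byte 1, bit 1 for "ODA for Online Authorizations supported") and should be treated as assumptions, not a statement of the researchers' exact implementation.

```python
# Illustrative sketch of the TTQ bit manipulation described in the paper.
# Bit positions are assumptions based on EMV contactless conventions.
EMV_MODE_SUPPORTED = 0b00010000    # TTQ byte 1, bit 5 (assumed)
ODA_ONLINE_SUPPORTED = 0b00000001  # TTQ byte 1, bit 1 (assumed)

def modify_ttq(ttq: bytes) -> bytes:
    """Set the two flags in the first TTQ byte while relaying EMV messages."""
    first = ttq[0] | EMV_MODE_SUPPORTED | ODA_ONLINE_SUPPORTED
    return bytes([first]) + ttq[1:]

# A terminal-supplied TTQ of 22 00 00 00 becomes 33 00 00 00.
original = bytes.fromhex("22000000")
print(modify_ttq(original).hex())  # "33000000"
```

Because the relay only ORs bits in, a TTQ that already has the flags set passes through unchanged, which is what makes the modification transparent to the rest of the transaction flow.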

    The experiment is an interesting one, although in the real world this attack technique may not be feasible on a wider scale. It should also be noted that authorization protocols are only one layer of payment protection, and financial institutions often implement additional systems to detect suspicious transactions and mobile fraud. The overall fraud level on Visa’s global network is recorded as below 0.1%.

    Speaking to ZDNet, the researchers said that Apple was first contacted on October 23, 2020. The team then reached out to Visa in January, followed by a video call in February, and a report was submitted to Visa’s vulnerability reporting platform on May 10, 2021. The academics say that while the issue was acknowledged by both parties, who have been spoken to “extensively”, it remains unfixed.

    “Our work shows a clear example of a feature, meant to incrementally make life easier, backfiring and negatively impacting security, with potentially serious financial consequences for users,” Radu commented. “Our discussions with Apple and Visa revealed that when two industry parties each have partial blame, neither are willing to accept responsibility and implement a fix, leaving users vulnerable indefinitely.”

    In a statement, Visa told us: “Visa cards connected to Apple Pay Express Transit are secure and cardholders should continue to use them with confidence. Variations of contactless fraud schemes have been studied in laboratory settings for more than a decade and have proven to be impractical to execute at scale in the real world. Visa takes all security threats very seriously, and we work tirelessly to strengthen payment security across the ecosystem.”

    The research was conducted as part of the TimeTrust trusted computing project and was funded by the UK National Cyber Security Centre (NCSC).

    Update 7.43 BST: Apple told ZDNet: “We take any threat to users’ security very seriously.
    This is a concern with a Visa system, but Visa does not believe this kind of fraud is likely to take place in the real world given the multiple layers of security in place. In the unlikely event that an unauthorized payment does occur, Visa has made it clear that their cardholders are protected by Visa’s zero liability policy.”

    Separately, DinoSec has compiled a log of lock screen bypass issues impacting Apple iOS since 2011.

  • 96% of third-party container applications deployed in cloud infrastructure contain known vulnerabilities: Unit 42

    A new report from Palo Alto Networks’ Unit 42 outlines the ways the supply chain has become an emerging cloud security threat. Unit 42 conducted a red team exercise with a large SaaS provider that is a Palo Alto Networks customer, and within three days the team was able to discover critical software development flaws that could have exposed the organization to an attack similar to SolarWinds and Kaseya. Unit 42 found that 63% of third-party code used in building cloud infrastructure contained insecure configurations. If an attacker compromises third-party developers, it is possible to infiltrate thousands of organizations’ cloud infrastructures, according to the report.

    The organization analyzed data from a variety of public data sources around the world in order to draw conclusions about the growing threats organizations face today in their software supply chains. It found that 96% of third-party container applications deployed in cloud infrastructure contain known vulnerabilities. Unit 42 researchers also discovered that even for a customer with what most would consider a “mature” cloud security posture, there were several critical misconfigurations and vulnerabilities that allowed the Unit 42 team to take over the customer’s cloud infrastructure in a matter of days.

    “In most supply chain attacks, an attacker compromises a vendor and inserts malicious code in software used by customers. Cloud infrastructure can fall prey to a similar approach in which unvetted third-party code could introduce security flaws and give attackers access to sensitive data in the cloud environment. Additionally, unless organizations verify sources, third-party code can come from anyone, including an Advanced Persistent Threat,” Unit 42 wrote.

    “Teams continue to neglect DevOps security, due in part to lack of attention to supply chain threats. Cloud native applications have a long chain of dependencies, and those dependencies have dependencies of their own. DevOps and security teams need to gain visibility into the bill of materials in every cloud workload in order to evaluate risk at every stage of the dependency chain and establish guardrails.”
    Unit 42
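The "dependencies of dependencies" point above is essentially a graph-traversal problem: a bill of materials for one workload is the transitive closure of its dependency graph. A small sketch, with made-up package names:

```python
# Toy dependency graph: package -> direct dependencies. Names are
# invented for illustration only.
DEPENDENCIES = {
    "web-app": ["http-framework", "orm"],
    "http-framework": ["logger", "tls-lib"],
    "orm": ["db-driver", "logger"],
    "logger": [],
    "tls-lib": ["crypto-lib"],
    "db-driver": [],
    "crypto-lib": [],
}

def bill_of_materials(package: str) -> set:
    """Walk the dependency graph and return every transitive dependency."""
    seen = set()
    stack = [package]
    while stack:
        current = stack.pop()
        for dep in DEPENDENCIES.get(current, []):
            if dep not in seen:  # avoid revisiting shared dependencies
                seen.add(dep)
                stack.append(dep)
    return seen

print(sorted(bill_of_materials("web-app")))
# ['crypto-lib', 'db-driver', 'http-framework', 'logger', 'orm', 'tls-lib']
```

Even this toy example shows why visibility degrades quickly: the workload declares two dependencies but actually pulls in six, and any one of them can carry a known vulnerability.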
    BreachQuest CTO Jake Williams called the research “significant” and said it replaces incident responders’ anecdotes with actual data on how common it is to find configuration issues and unpatched vulnerabilities in the public software supply chain. “At BreachQuest, we are used to working incidents where code and apps are built from Docker Hub images with pre-built security issues. While these are usually missing patches, it’s not uncommon to find security misconfigurations in these images either,” Williams said. “This is a problem the security community has dealt with since the dawn of the public cloud. Previous research found that the vast majority of publicly available Amazon Machine Images contained missing patches and/or configuration issues.”

    Other experts, like Valtix CTO Vishal Jain, noted that for more than a year now, spending on the cloud has vastly exceeded spending on data centers. Jain added that attacks typically go where the money is, so the big, open security front for enterprises is now the cloud. He suggested organizations focus on security at build time, through scanning of the IaC templates used to build cloud infrastructure, as well as security at run time. “It is not either/or, it needs to be both. More importantly, with dynamic infrastructure and app sprawl in the public cloud, there is a new set of security problems that need to be addressed in the cloud,” Jain said.

    Others said code was almost impossible to secure against fast-moving functional requirements and threat models. Mohit Tiwari, CEO at Symmetry Systems, told ZDNet it is more efficient to harden the infrastructure than to chase application-level bugs in hundreds of millions of lines of code. Tiwari explained that first-party code is as likely as third-party code to have exploitable bugs, such as authorization errors, and these bugs expose customer data that is managed by business logic.
“Blaming third-party code is a red herring — software like Linux, Postgres, Django/Rails etc. comprises most of any application, so nearly 100% of applications have third-party code with known vulnerabilities,” Tiwari said. “Organizations in practice are instead moving to get infrastructure — cloud IAM, service meshes, etc. — in order while relying on code analysis for targeted use cases (such as the trusted code base that provides security for the bulk of application code).”
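Jain’s build-time suggestion can be sketched in a few lines: a check that scans an IaC template for security groups open to the world before anything is deployed. The resource layout below is a simplified, Terraform-style JSON fragment invented for illustration; production scanners handle many more resource types and rule sets.

```python
# Toy build-time IaC check: flag security groups whose ingress rules
# admit traffic from anywhere on the internet.
import json

WORLD = {"0.0.0.0/0", "::/0"}  # IPv4 and IPv6 "anywhere" CIDRs

def find_open_ingress(template):
    """Return names of security-group resources with world-open ingress."""
    findings = []
    groups = template.get("resource", {}).get("aws_security_group", {})
    for name, resource in groups.items():
        for rule in resource.get("ingress", []):
            if WORLD & set(rule.get("cidr_blocks", [])):
                findings.append(name)
                break  # one open rule is enough to flag the group
    return findings

# Hypothetical template: 'web' is world-open, 'db' is internal-only.
template = json.loads("""
{"resource": {"aws_security_group": {
  "web": {"ingress": [{"from_port": 443,  "cidr_blocks": ["0.0.0.0/0"]}]},
  "db":  {"ingress": [{"from_port": 5432, "cidr_blocks": ["10.0.0.0/8"]}]}
}}}
""")
print(find_open_ingress(template))  # ['web']
```

A check like this runs in CI before `terraform apply`, which is the “build time” half of Jain’s both/and argument; run-time controls then watch the infrastructure the template actually created.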

  • in

    Microsoft announces multi-year partnership with cyber insurance firm At-Bay

    Microsoft unveiled a new partnership with cyber insurance company At-Bay on Wednesday, announcing that it was seeking to help the insurance industry “create superior and data-driven cyber insurance products backed by Microsoft’s security solutions.”


    At-Bay claimed its insureds are seven times less likely than the industry average to experience a ransomware incident, and noted that it gives customers insights into how they can better protect themselves. Starting on October 1, companies in the US that are already Microsoft 365 customers will be eligible “for savings on their At-Bay cyber insurance policy premiums if they implement specific security controls and solutions, including multi-factor authentication and Microsoft Defender for Office 365.”

    Ann Johnson, Microsoft’s corporate vice president of security, compliance and identity business development, explained that for cyber insurance to play a meaningful role in overall risk management, buyers and sellers need the benefit of data and clear visibility into what is covered and the factors either minimizing or multiplying risk exposure. “Microsoft’s partnership with At-Bay brings important clarity and decision-making support to the market as organizations everywhere seek a comprehensive way to empower hybrid workforces with stronger, centralized visibility and control over cloud applications boosting security and productivity,” Johnson said.

    The company said in a statement that At-Bay strengthens its portfolio companies’ cybersecurity through incentives such as improved policy terms and pricing. Microsoft said it will work with At-Bay to find other ways customers can limit their risk exposure and proactively address vulnerabilities.

    Microsoft noted that it is working with other insurers to protect their customers and reduce the risk of loss, which has grown significantly over the last few years, causing steep increases in premiums. “Insurance carriers, agents, reinsurers and brokers are required to understand and assess cybersecurity threats for each of their insureds. With this complexity, insurers are seeking increased visibility into each company’s security environment and hygiene to better underwrite new policies,” Microsoft said in a statement. “To address this, Microsoft is teaming with key insurance partners to offer innovative data-driven cyber insurance products allowing customers to safely share security posture information through platforms like Microsoft 365 and Microsoft security solutions. All data and details about a covered company’s technology environment will be owned and controlled entirely by that customer, but customers can opt in to securely share them with providers to receive benefits like enhanced coverage and more competitive premiums.”

    At-Bay CEO Rotem Iram said insurance policies are effective tools that help define the cost of certain cybersecurity choices a company makes. “By offering better pricing to companies that implement stronger controls, we help them understand what matters in security and how best to reduce risk,” Iram said. “Working with Microsoft enables us to educate customers on the powerful security controls that exist within Microsoft 365 and reward them for adopting those controls.”
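    The incentive model the two companies describe amounts to pricing premiums against implemented controls. A hypothetical sketch follows; the control names and discount rates are invented for illustration and are not At-Bay’s actual underwriting terms.

```python
# Hypothetical rule-based premium model: each recognised security control
# earns a (made-up) fractional discount, applied multiplicatively.
BASE_PREMIUM = 10_000.0  # illustrative annual premium in dollars

DISCOUNTS = {
    "mfa": 0.10,                     # multi-factor authentication
    "defender_for_office365": 0.05,  # e.g. Microsoft Defender for Office 365
    "offline_backups": 0.08,
}

def quoted_premium(controls):
    """Stack discounts for each control the insured has implemented."""
    premium = BASE_PREMIUM
    for control in controls:
        premium *= 1 - DISCOUNTS.get(control, 0)  # unknown controls earn 0
    return round(premium, 2)

print(quoted_premium({"mfa", "defender_for_office365"}))  # 8550.0
```

    The point of such a model, per Iram’s quote above, is less the arithmetic than the signal: the discount schedule tells the insured exactly which controls the insurer believes reduce risk.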

  • in

    Report highlights cybersecurity dangers of Elastic Stack implementation mistakes

    A new report has identified significant vulnerabilities resulting from the mis-implementation of Elastic Stack, a group of open-source products that use APIs for critical data aggregation, search, and analytics capabilities.

    Researchers from cybersecurity firm Salt Security discovered issues that allowed any user not only to extract sensitive customer and system data but also to create a denial-of-service condition that would render the system unavailable. The researchers said they first discovered the vulnerability while protecting one of their customers, a large online business-to-consumer platform that provides API-based mobile applications and software as a service to millions of global users.

    Once they discovered the vulnerability, they checked other customers using Elastic Stack and found that almost every enterprise running it was affected by the vulnerability, which exposed users to injection attacks and more. Salt Security officials were quick to note that this is not a vulnerability in Elastic Stack itself but a problem with how it is being implemented. Salt Security technical evangelist Michael Isbitski said the vulnerability is unrelated to any issue with Elastic’s software; rather, it stems from “a common risky implementation setup by users.”

    He noted that Elastic provides guidance on implementing Elastic Stack instances securely, but the responsibility falls on practitioners to make use of it. “The lack of awareness around potential misconfigurations, mis-implementations, and cluster exposures is largely a community issue that can be solved only through research and education,” Isbitski told ZDNet.

    “Elastic Stack is far from the only example of this type of implementation issue, but the company can help educate its users, just as Salt Security has been working with CISOs, security architects, and other application security practitioners to alert them to this and other API vulnerabilities and provide mitigation best practices.”

    The vulnerability would allow a threat actor to abuse the lack of authorization between front-end and back-end services to obtain a working user account with basic permission levels. From there, a cyberattacker could exfiltrate sensitive user and system data by making “educated guesses about the schema of back-end data stores and query for data they aren’t authorized to access,” according to the report.

    Salt Security CEO Roey Eliyahu said that while Elastic Stack is widely used and secure, the same architectural design mistakes were seen in almost every environment that uses it. “The Elastic Stack API vulnerability can lead to the exposure of sensitive data that can be used to perpetuate serious fraud and abuse, creating substantial business risk,” Eliyahu said.

    Exploits that take advantage of this Elastic Stack vulnerability can create “a cascade of API threats,” according to Salt Security researchers, who also showed that the design implementation flaws worsen significantly when an attacker chains together multiple exploits. Security researchers have long highlighted the same problem in a number of similar products, such as MongoDB and HDFS.

    “The specific queries submitted to the Elastic back-end services used to exploit this vulnerability are difficult to test for. This case shows why architecture matters for any API security solution you put in place — you need the ability to capture substantial context about API usage over time,” Isbitski said. “It also shows how critical it is to architect application environments correctly.
Every organization should evaluate the API integrations between its systems and applications, since they directly impact the company’s security posture.”

Researchers from the company said they were able to access sensitive data such as account numbers and transaction confirmation numbers, along with other information whose exposure would violate GDPR. The report details other actions that could be taken through the vulnerability, including the ability to perpetrate a variety of fraudulent activities, extort funds, steal identities, and take over accounts.

Jon Gaines, senior application security consultant at nVisium, said the Elastic Stack is “notorious for excessive data exposure,” adding that a few years ago data was exposed publicly by default. The defaults have since changed, but he noted that this doesn’t mean older versions aren’t grandfathered in, or that minor configuration changes can’t lead to both of these newly unearthed vulnerabilities. “There are — and have been — multiple open source tools that lead to the discovery of these vulnerabilities that I’ve used previously and continue to use. Unfortunately, the technical barrier of these vulnerabilities is extremely low. As a result, the risk of a bad guy discovering and exploiting these vulnerabilities is high,” Gaines said. “From the outside looking in, these vulnerabilities are common sense for security professionals: authorization, rate limitations, invalidation, parameterized queries, and so forth. However, as a data custodian, administrator, or even developer, oftentimes you aren’t taught to develop or maintain with security in mind.”

Vulcan Cyber CEO Yaniv Bar-Dayan added that the most common cloud vulnerability is caused by human error and misconfigurations, and APIs are not immune. “We’ve all seen exposed customer data and denial of service attacks do significant material damage to hacked targets.
Exploitation of this vulnerability is avoidable but must be remediated quickly,” Bar-Dayan said. “Other users of Elastic Stack should check their own implementations for this misconfiguration and not repeat the same mistake.”
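The implementation flaw Salt Security describes, a back end that forwards front-end query bodies verbatim, can be contrasted with a pattern that builds the query server-side and pins it to the authenticated caller. The field names and tenant model below are invented for illustration; Elastic’s own security guidance (authentication, index-level permissions) still applies on top of application-level checks like this.

```python
# Contrast between the risky pattern (trusting the client's query body)
# and a server-side pattern that scopes every search to the caller's tenant.

def unsafe_query(user_supplied):
    # Anti-pattern: the front end's query body is forwarded verbatim, so a
    # caller can guess schema fields and read data it is not authorized for.
    return user_supplied

def safe_query(user_supplied_text, tenant_id):
    # The back end constructs the query itself: the client contributes only
    # free text, and a server-side filter pins results to its own tenant.
    return {
        "query": {
            "bool": {
                "must": [{"match": {"description": user_supplied_text}}],
                "filter": [{"term": {"tenant_id": tenant_id}}],
            }
        }
    }

q = safe_query("refund status", tenant_id="t-123")
print(q["query"]["bool"]["filter"])  # [{'term': {'tenant_id': 't-123'}}]
```

With the safe shape, the “educated guesses about the schema” attack from the report fails even when a low-privilege account is obtained, because the tenant filter is applied by the service rather than trusted from the request.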