More stories

  • This zero-day Windows flaw opens a backdoor to hackers via Microsoft Word. Here's how to fix it

    Microsoft has detailed a workaround for admins to protect their networks from a zero-day flaw in a Windows tool that hackers have been exploiting via malicious Word documents. Over the weekend, security researchers discovered a malicious Word document that was uploaded to Google-owned VirusTotal on 25 May from an IP address in Belarus. […]
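
    The flaw in question is the bug in the Microsoft Support Diagnostic Tool (MSDT) tracked as CVE-2022-30190 and nicknamed "Follina", and Microsoft's published workaround is to disable the ms-msdt URL protocol handler. Below is a minimal sketch of how an admin might script that workaround, assuming it runs elevated on Windows; the backup path is illustrative:

    ```python
    # Sketch of Microsoft's documented workaround for CVE-2022-30190:
    # back up, then remove, the ms-msdt protocol handler registry key.
    # Assumes Windows and an elevated (administrator) prompt.
    import subprocess

    BACKUP = r"C:\Temp\ms-msdt-backup.reg"  # illustrative path

    # Export the key first so the workaround can be reverted after patching.
    subprocess.run(["reg", "export", r"HKEY_CLASSES_ROOT\ms-msdt", BACKUP, "/y"],
                   check=True)

    # Delete the handler so malicious documents can no longer invoke msdt.exe.
    subprocess.run(["reg", "delete", r"HKEY_CLASSES_ROOT\ms-msdt", "/f"],
                   check=True)

    # To undo later: subprocess.run(["reg", "import", BACKUP], check=True)
    ```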

  • CISA adds 75 actively exploited bugs to its must-patch list in just a week

    Plenty to keep the security team busy: the US cybersecurity authority is urging everyone to patch a number of software flaws, including some older ones in Microsoft’s Silverlight plug-in and Adobe Flash Player. The Cybersecurity and Infrastructure Security Agency (CISA) added three batches of must-fix bugs to its catalog of known exploited software vulnerabilities this week. The first covered 21 bugs, the second 20, and the third a further 34. US federal agencies are required to patch the flaws by CISA’s deadline.

    Not all of these flaws are at the cutting edge of technology: the latest batch also includes very old bugs in software like Microsoft Silverlight, which reached end of support in October 2021, and Adobe’s dead Flash Player plugin. All browsers have dropped support for Flash and Flash content, and Microsoft removed Flash from Windows last year. There’s a chance Silverlight may still be floating around government systems in internal legacy applications or websites; Silverlight applications, for example, will still work in IE Mode in modern Edge.

    CISA’s latest updates to its known exploited vulnerabilities catalog include Flash flaws disclosed in 2016 and 2015 and Silverlight flaws dating back to 2013. They also include older flaws affecting WhatsApp, Kaseya, Mozilla Firefox, Apple’s iOS, and Google Chrome, as well as a number of Windows flaws disclosed between 2015 and 2018, several Internet Explorer bugs from 2014, a Linux kernel privilege escalation flaw from 2014, and several Oracle Java remote code execution bugs dating back to 2010.

    Despite the age of some of the flaws, malware operators are known to frequently use exploits for old bugs, knowing that some software simply isn’t patched. HP’s threat researchers warned this week that attackers behind the Snake keylogger were using exploits for a bug in Microsoft’s legacy Equation Editor software (CVE-2017-11882) that was disclosed in 2017. Attackers jumped on that flaw after Microsoft patched it in late 2017, and although Microsoft removed the functionality from Word in 2018, it remains a popular bug to exploit today.

    One of the newer ‘must patch’ bugs, disclosed in 2022, affects Cisco’s IOS XR software (CVE-2022-20821). Cisco disclosed it last week with a medium severity rating, noting it was aware of “attempted exploitation” in the wild in May.

    Regardless of the bugs’ age, CISA notes that “these types of vulnerabilities are a frequent attack vector for malicious cyber actors and pose significant risk to the federal enterprise.”
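
    CISA publishes the catalog as a machine-readable JSON feed, so security teams can track new must-patch entries programmatically rather than scanning the web page. A minimal sketch follows, assuming the feed URL and field names (cveID, dateAdded, dueDate) in use at the time of writing are unchanged:

    ```python
    # Sketch: list CISA KEV catalog entries added in the past week.
    # Assumes the JSON feed URL and field names current at time of writing.
    from datetime import date, timedelta
    import json
    import urllib.request

    KEV_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
               "known_exploited_vulnerabilities.json")

    with urllib.request.urlopen(KEV_URL) as resp:
        catalog = json.load(resp)

    cutoff = date.today() - timedelta(days=7)
    recent = [v for v in catalog["vulnerabilities"]
              if date.fromisoformat(v["dateAdded"]) >= cutoff]

    # Print each new entry with its federal remediation deadline.
    for vuln in sorted(recent, key=lambda v: v["dateAdded"]):
        print(f'{vuln["dateAdded"]}  {vuln["cveID"]}  '
              f'{vuln["vendorProject"]} {vuln["product"]}  due {vuln["dueDate"]}')
    ```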

  • Microsoft is rolling out these security settings to protect millions of accounts. Here's what's changing

    To thwart password and phishing attacks, Microsoft is rolling out security defaults to a massive number of Azure Active Directory (AD) users. Microsoft began rolling out security defaults to customers who created a new Azure AD tenant after October 2019, but didn’t enable them for customers whose tenants predate that point. Today, Azure AD security defaults are used by about 30 million organizations, according to Microsoft, and over the next month it will roll the defaults out to many more organizations. “When complete, this rollout will protect an additional 60 million accounts (roughly the population of the United Kingdom!) from the most common identity attacks,” says Microsoft’s director of identity security, Alex Weinert.

    Azure AD is Microsoft’s cloud service for handling identity and authentication for on-premises and cloud apps, the evolution of the Active Directory Domain Services that debuted in Windows 2000. Microsoft introduced security defaults in 2019 as a basic set of identity security mechanisms for less well-resourced organizations that wanted to boost defenses against password and phishing attacks. It was also aimed at organizations on the free tier of Azure AD licensing, allowing their admins to simply toggle on “security defaults” via the Azure portal. Security defaults weren’t intended for larger organizations or those already using more advanced Azure AD controls like Conditional Access policies.

    As Weinert explains, the defaults were introduced for new tenants to ensure they had “basic security hygiene”, especially multi-factor authentication (MFA) and modern authentication, regardless of license. The 30 million organizations that have security defaults in place are far less prone to breaches, he points out. “These organizations experience 80 percent less compromise than the overall tenant population. Most tenants simply leave it on, while others add even more security with Conditional Access when they’re ready,” says Weinert. Under security defaults, users face an MFA challenge “when necessary”, based on the user’s location, device, role, and task, according to Weinert. Admins, however, will need to use MFA every time they sign in.

    The security defaults rollout will come first to organizations that aren’t using Conditional Access, haven’t previously used security defaults, and “aren’t actively using legacy authentication clients”. So, one group of customers that won’t be prompted to enable security defaults next month is Exchange Online customers still using legacy authentication. Microsoft wanted to disable legacy authentication for Exchange Online in 2020, but that was delayed by the pandemic. Now, the deadline for moving Exchange Online to modern authentication is October 1, 2022, and customers can’t request extensions beyond that date, Microsoft’s Exchange Team stressed earlier this month.

    Microsoft will notify global admins of eligible Azure AD tenants by email this month. In late June, these admins will see an Outlook notification from Microsoft prompting them to click on “enable security defaults”, with a warning that “security defaults will be enabled automatically for your organizations in 14 days”. “Global admins can opt into security defaults right away or snooze for as many as 14 days. They can also explicitly opt out of security defaults in this time,” Weinert says.

    Once enabled, all users in a tenant will be asked to register for MFA using the Microsoft Authenticator app, and global admins will also need to provide a phone number. Customers can still disable security defaults through the “Properties” section of Azure Active Directory or the Microsoft 365 admin center. Weinert, though, offers one compelling argument for leaving them on: “When we look at hacked accounts, more than 99.9% don’t have MFA, making them vulnerable to password spray, phishing, and password reuse,” he notes.
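
    For admins who want to check or flip the setting programmatically rather than through the portal, the policy is also exposed via Microsoft Graph. Here is a minimal sketch, assuming you already hold an OAuth access token with sufficient policy permissions for the tenant (acquiring the token is omitted):

    ```python
    # Sketch: read and enable Azure AD security defaults via Microsoft Graph.
    # Assumes TOKEN is a valid access token with the required policy
    # permissions; error handling is omitted for brevity.
    import json
    import urllib.request

    TOKEN = "<access-token>"  # placeholder
    URL = ("https://graph.microsoft.com/v1.0/policies/"
           "identitySecurityDefaultsEnforcementPolicy")
    HEADERS = {"Authorization": f"Bearer {TOKEN}",
               "Content-Type": "application/json"}

    # Read the current state of the security defaults policy.
    req = urllib.request.Request(URL, headers=HEADERS)
    with urllib.request.urlopen(req) as resp:
        policy = json.load(resp)
    print("Security defaults enabled:", policy["isEnabled"])

    # Turn security defaults on (equivalent to the portal toggle).
    body = json.dumps({"isEnabled": True}).encode()
    req = urllib.request.Request(URL, data=body, headers=HEADERS, method="PATCH")
    urllib.request.urlopen(req)
    ```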

  • Programming languages: How Google is improving C++ memory safety

    Google’s Chrome team is looking at heap scanning to reduce memory-related security flaws in Chrome’s C++ codebase, but the technique takes a toll on memory unless newer Arm hardware is used. Google can’t simply rip out Chromium’s existing C++ code and replace it with memory-safe Rust, but it is working on ways to improve the memory safety of C++ by scanning heap-allocated memory. The catch is that this is expensive on memory and, for now, only experimental.

    Google and Microsoft are major users of and contributors to C++, which is used in projects like Chromium, Windows, and Android. There is growing interest in Rust because of its memory safety guarantees, but switching Chrome wholesale from C++ to a language like Rust simply can’t happen in the near term. “While there is appetite for different languages than C++ with stronger memory safety guarantees, large codebases such as Chromium will use C++ for the foreseeable future,” explain Anton Bikineev, Michael Lippautz and Hannes Payer of Chrome’s security team.

    Given that status, Chrome engineers have found ways to make C++ safer in order to reduce memory-related security flaws such as buffer overflow and use-after-free (UAF) bugs, which account for around 70% of serious software security flaws. C++ doesn’t guarantee that memory is always accessed with the latest information about its structure, so Google’s Chrome team has been exploring the use of a “memory quarantine” and heap scanning to stop the reuse of memory that is still reachable.

    UAFs make up the majority of high-severity issues affecting the browser. A case in point is this week’s Chrome 102, which fixed one critical UAF, while six of its eight high-severity flaws were UAFs. UAF access in heap-allocated memory is caused by “dangling pointers”, which occur when memory used by an application is returned to the underlying system while a pointer still points to the now out-of-date object. Access through the dangling pointer results in a UAF, and these are hard to spot in large codebases.

    To detect UAFs, Google already uses C++ smart pointers like MiraclePtr, which also carry a performance hit, as well as static analysis in compilers, C++ sanitizers, code fuzzers, and a garbage collector called Oilpan. The appeal of Rust is that its compiler spots pointer mistakes before the code runs on a device, avoiding such penalties. Heap scanning may add to this arsenal if it makes it beyond the experimental phase, but adoption will depend on devices using the latest Arm hardware.

    Google explains how quarantines and heap scanning work: “The main idea behind assuring temporal safety with quarantining and heap scanning is to avoid reusing memory until it has been proven that there are no more (dangling) pointers referring to it. To avoid changing C++ user code or its semantics, the memory allocator providing new and delete is intercepted. Upon invoking delete, the memory is actually put in a quarantine, where it is unavailable for being reused for subsequent new calls by the application. At some point a heap scan is triggered which scans the whole heap, much like a garbage collector, to find references to quarantined memory blocks. Blocks that have no incoming references from the regular application memory are transferred back to the allocator where they can be reused for subsequent allocations.”

    Google’s heap scanning consists of a set of algorithms it calls StarScan (*Scan). But one version of *Scan caused a memory regression of 8% in the Speedometer2 browser performance benchmark, and *Scan in the renderer process regressed memory consumption by about 12%, Google notes. Google then tried hardware-assisted memory tagging via the relatively new memory tagging extension (MTE) in Arm v8.5-A to reduce the overheads, and the results of *Scan with MTE were promising: after redoing the *Scan experiments on top of MTE in the renderer process, the memory regression was about 2% in Speedometer2. “The experiment also shows that adding *Scan on top of MTE comes without measurable cost,” they wrote.

    For now, though, heap scanning that doesn’t create an unacceptable performance hit remains a thing for the future, when MTE is more widely adopted. “C++ allows for writing high-performance applications but this comes at a price, security. Hardware memory tagging may fix some security pitfalls of C++, while still allowing high performance,” the Chrome security team concludes. “We are looking forward to see a more broad adoption of hardware memory tagging in the future and suggest using *Scan on top of hardware memory tagging to fix temporal memory safety for C++. Both the used MTE hardware and the implementation of *Scan are prototypes and we expect that there is still room for performance optimizations.”
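
    Chrome’s implementation lives deep inside a C++ allocator, but the quarantine-and-scan idea itself is easy to sketch. The toy Python model below (not Google’s code; every name is invented for illustration) quarantines “deleted” blocks and only returns them to the free pool once a scan finds no remaining references to them:

    ```python
    # Toy model of quarantine-plus-heap-scan (illustrative only; Chrome's
    # *Scan is a C++ allocator mechanism, not this).
    class ToyHeap:
        def __init__(self):
            self.objects = {}        # block id -> set of block ids it points to
            self.quarantine = set()  # "deleted" blocks awaiting a scan
            self.free = set()        # blocks proven unreachable, reusable
            self.next_id = 0

        def new(self):
            block = self.next_id
            self.next_id += 1
            self.objects[block] = set()
            return block

        def delete(self, block):
            # Instead of freeing immediately, park the block in quarantine so
            # a dangling pointer can never alias a reused allocation.
            self.quarantine.add(block)

        def scan(self):
            # Walk every live (non-quarantined) object, like a GC marking
            # pass, and collect all outgoing references.
            referenced = set()
            for block, refs in self.objects.items():
                if block not in self.quarantine:
                    referenced |= refs
            # Quarantined blocks nobody points at any more are safe to reuse.
            released = self.quarantine - referenced
            for block in released:
                del self.objects[block]
            self.quarantine -= released
            self.free |= released
            return released

    heap = ToyHeap()
    a, b = heap.new(), heap.new()
    heap.objects[a].add(b)   # a holds a (potentially dangling) pointer to b
    heap.delete(b)
    print(heap.scan())       # set(): b is still referenced, so it stays quarantined
    heap.objects[a].clear()  # the dangling reference goes away
    print(heap.scan())       # {1}: b can now be reused safely
    ```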

  • Singapore touts need for AI transparency in launch of test toolkit

    Businesses in Singapore will now be able to tap a governance testing framework and toolkit to demonstrate their “objective and verifiable” use of artificial intelligence (AI). The move is part of the government’s efforts to drive transparency in AI deployments through technical and process checks.

    Called A.I. Verify, the new toolkit was developed by the Infocomm Media Development Authority (IMDA) and the Personal Data Protection Commission (PDPC), which administers the country’s Personal Data Protection Act. The government agencies underscored the need for consumers to know that AI systems are “fair, explainable, and safe” as more products and services are embedded with AI to deliver more personalised user experiences or make decisions without human intervention. Consumers also need to be assured that organisations deploying such offerings are accountable and transparent. Singapore has already published voluntary AI governance frameworks and guidelines, with its Model AI Governance Framework currently in its second iteration.

    A.I. Verify will now allow market players to demonstrate to relevant stakeholders their deployment of responsible AI through standardised tests. The toolkit is currently available as a minimum viable product, which offers “just enough” features for early adopters to test and provide feedback for further product development. Specifically, it delivers technical testing against three principles, “fairness, explainability, and robustness”, packaging commonly used open-source libraries into one toolkit for self-assessment. These include SHAP (SHapley Additive exPlanations) for explainability, the Adversarial Robustness Toolbox for adversarial robustness, and AIF360 and Fairlearn for fairness testing.

    The pilot toolkit also generates reports for developers, management, and business partners, covering key areas that affect AI performance and testing the AI model against what it claims to do. For example, an AI-powered product would be tested on how the model reached a decision and whether the predicted decision carries unintended bias. The AI system could also be assessed for its security and resilience. The toolkit currently works with some common AI models, such as binary classification and regression algorithms from common frameworks including scikit-learn, TensorFlow, and XGBoost.

    IMDA added that the test framework and toolkit would enable AI system developers to conduct self-testing not only to maintain the product’s commercial requirements, but also to offer a common platform for showcasing the test results. Rather than define ethical standards, A.I. Verify aims to validate claims made by AI system developers about their AI use as well as the performance of their AI products. However, the toolkit will not guarantee that the AI system tested is free from bias or security risks, IMDA stressed.

    It could, though, facilitate interoperability between AI governance frameworks and could help organisations plug gaps between such frameworks and regulations, the agency said. It added that it was working with regulators and standards organisations to map A.I. Verify to established AI frameworks, so businesses could offer AI-powered products and services in different global markets. The US Department of Commerce is amongst the agencies Singapore is working with to ensure interoperability between their AI governance frameworks.

    According to IMDA, 10 organisations have already tested and offered feedback on the new toolkit, including Google, Meta, Microsoft, Singapore Airlines, and Standard Chartered Bank. IMDA added that A.I. Verify was aligned with globally accepted principles and guidelines on AI ethics, including those from Europe and the OECD, which encompass key areas such as repeatability, robustness, fairness, and societal and environmental wellbeing. The framework also leverages testing and certification regimes covering components such as cybersecurity and data governance. Singapore will look to continue developing A.I. Verify to incorporate international AI governance standards and industry benchmarks, IMDA said, with more functionalities gradually added through industry contribution and feedback.

    In February, the country also released a software toolkit to help financial institutions ensure they are using AI responsibly, along with five whitepapers to guide these companies in assessing their deployments against predefined principles. Industry regulator the Monetary Authority of Singapore (MAS) said the documents detailed methodologies for incorporating the FEAT principles (Fairness, Ethics, Accountability, and Transparency) into the use of AI within the financial services sector.
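
    To give a flavour of the kind of checks these open-source libraries provide, the sketch below uses Fairlearn, one of the packages named above, to compute a simple group-fairness metric for a toy classifier. It illustrates the general technique, not A.I. Verify itself, and the data is synthetic:

    ```python
    # Sketch: a Fairlearn-style group-fairness check on a toy binary
    # classifier. Illustrative only; A.I. Verify packages this kind of
    # test, but this is not its code, and the data here is made up.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from fairlearn.metrics import demographic_parity_difference

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))          # synthetic features
    group = rng.integers(0, 2, size=500)   # sensitive attribute (e.g. gender)
    y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=500) > 0).astype(int)

    model = LogisticRegression().fit(np.column_stack([X, group]), y)
    pred = model.predict(np.column_stack([X, group]))

    # Difference in selection rates between the two groups; 0 means parity.
    dpd = demographic_parity_difference(y, pred, sensitive_features=group)
    print(f"Demographic parity difference: {dpd:.3f}")
    ```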

  • Ed tech wrongfully tracked school children during pandemic: Human Rights Watch

    Globally, students who were required to use government-endorsed education technology (ed tech) during the COVID-19 pandemic had their contact, keystroke, and location data collected and sold to ad tech companies, according to Human Rights Watch (HRW). A total of 146 of 164 government-endorsed ed tech products endangered the privacy of children, with 199 third-party companies receiving personal data, HRW said. Further, only 35 endorsed vendors disclosed that user data would be collected for behavioural advertising, whilst a total of 23 products were developed with children as primary users in mind.

    “In the absence of alternatives, children faced a singular choice whether they were aware of it or not: Attend school and use an ed tech product that infringes upon their privacy, or forgo the product altogether, be marked as absent, and be forced to drop out of school during the pandemic,” HRW wrote in its report, “How dare they peep into my private life”.

    The HRW investigation, which began in March 2021, examined the uptake of ed tech products as a result of the surge in home learning during pandemic lockdowns, a rise that saw education apps used for an estimated 100 million cumulative hours per week, up 90% from the same period in 2019. Of the products investigated, 39 were mobile apps, 91 were websites, and 34 were available in both formats. Apps running on Google’s Android system were the focus of the report, with HRW citing it as the “dominant mobile operating system worldwide”. Meta was also caught up in the investigation, with HRW finding that 31 ed tech websites sent data to Facebook through Facebook Pixel, a technology that collects data and later facilitates targeted ads on Facebook and Instagram.

    In Australian schools, the investigation concluded the following products had the capability to track students: Minecraft Education Edition, Cisco’s Webex, Education Perfect, Microsoft Teams, Zoom, and Adobe Connect. Outside of Australia, nine governments (Ghana, India, Indonesia, Iran, Iraq, Russia, Saudi Arabia, Sri Lanka, and Turkey) built and offered 11 education apps that had the capability to collect Android advertising IDs from children. An estimated 41 million students and teachers had their privacy put at risk by these apps, according to HRW.

    HRW made the following recommendations for governments to remedy the privacy breach: adopt child-specific data protection laws; enact and enforce laws to prevent companies from exploiting the rights of children; ban the profiling of children; and ban behavioural advertising to children, among others. The report also recommended changes for technology companies, including that they stop collecting and processing children’s data for user profiling and provide child-friendly privacy policies.

  • Meta updates privacy policy with more detail about what data it collects

    Meta said that, after being “inspired” by user feedback and privacy experts, the company has rewritten its privacy policy “to make it easier to understand”. The updated policy, formerly referred to as its data policy, now provides examples of what information is collected, and how it is used, shared, retained, and transferred, including with […]

  • How to encrypt your email and why you should

    Data privacy has become absolutely crucial for businesses, and some go to great lengths to protect their data, files, and communications. But consumers and smaller businesses often seem to think that adding extra security isn't worth the extra work. The problem with this take is that anyone who refuses to take those extra steps might find themselves on the wrong end of a data breach.
    You might have sent some sensitive information in an innocent email, only to find that a bad actor intercepted the message and was able to easily read its contents and extract the information. You don't want that. Even if it requires an extra bit of work on your part, being safe is much better than being sorry. So what do you do? You encrypt your email (or at least the emails containing sensitive information). What is email encryption?
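
    As a taste of what encrypting your email looks like in practice, here is a minimal sketch using the python-gnupg wrapper around GnuPG to PGP-encrypt a message body before it is sent. It assumes GnuPG is installed and the recipient's public key is already in your keyring; the addresses and SMTP server are placeholders:

    ```python
    # Sketch: PGP-encrypt an email body with python-gnupg before sending.
    # Assumes GnuPG is installed, python-gnupg is pip-installed, and the
    # recipient's public key has already been imported into the keyring.
    import smtplib
    from email.message import EmailMessage

    import gnupg

    gpg = gnupg.GPG()
    recipient = "alice@example.com"  # placeholder address

    plaintext = "The quarterly figures are attached... (sensitive content)"
    encrypted = gpg.encrypt(plaintext, recipients=[recipient])
    assert encrypted.ok, encrypted.status  # fails if the key is missing

    msg = EmailMessage()
    msg["From"] = "bob@example.com"  # placeholder address
    msg["To"] = recipient
    msg["Subject"] = "Quarterly figures"
    msg.set_content(str(encrypted))  # ASCII-armored ciphertext, not plaintext

    with smtplib.SMTP("smtp.example.com") as server:  # placeholder server
        server.send_message(msg)
    ```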