More stories

  • Security researchers warn of TCP/IP stack flaws in operational technology devices

    Security vulnerabilities in the communications protocols used by industrial control systems could allow cyber attackers to tamper with or disrupt services, as well as access data on the network. Dubbed INFRA:HALT, the set of 14 security vulnerabilities has been detailed by cybersecurity researchers at Forescout Research Labs and JFrog Security Research, who warn that, if left unchecked, the flaws could allow remote code execution, denial of service or even information leaks.

    All the vulnerabilities relate to TCP/IP stacks – communications protocols commonly used in connected devices – in NicheStack, which is used throughout operational technology (OT) and industrial infrastructure. Some of the newly uncovered vulnerabilities are more than 20 years old – a common problem in operational technology, which often still runs on protocols developed and produced years ago. Over 200 vendors, including Siemens, use the NicheStack libraries, and users are advised to apply the security patches.

    Forescout has detailed each of the vulnerabilities in a blog post. They relate to malformed packet processing that lets an attacker send instructions to read or write parts of memory it shouldn’t. That can crash the device and disrupt networks, and it can also allow attackers to craft shellcode to perform malicious actions, including taking control of the device.

    The disclosure is a continuation of Project Memoria, Forescout’s research initiative examining vulnerabilities in TCP/IP stacks and how to mitigate them; the INFRA:HALT vulnerabilities were uncovered in the course of that ongoing research. All versions of NicheStack before version 4.3, including NicheLite, are affected. The vulnerabilities were disclosed to HCC Embedded, which acquired NicheStack in 2016.

    The full extent of vulnerable OT devices is uncertain, but researchers were able to identify over 6,400 vulnerable devices using Shodan, the Internet of Things search engine.

    “When you’re dealing with operational technology, crashing devices and crashing systems is something that can have various serious consequences. There are also remote code execution possibilities in these vulnerabilities, which would allow the attacker to take control of a device, and not just crash it but make it behave in a way that it’s not intended to, or use it to pivot within the network,” Daniel dos Santos, research manager at Forescout Research Labs, told ZDNet.

    For remote code execution, attackers would need detailed knowledge of the systems, but crashing the device is a blunt instrument that’s easier to use and could have significant consequences, especially if the devices help control or monitor critical infrastructure.

    Forescout and JFrog Security Research contacted HCC Embedded to disclose the vulnerabilities, as well as contacting CERT as part of the coordinated vulnerability disclosure process. HCC Embedded confirmed that Forescout contacted it about the vulnerabilities and that patches have been released to mitigate them.

    “We have been fixing these vulnerabilities over the last six months or so and we have released fixes for every customer who maintains their software,” Dave Hughes, CEO of HCC Embedded, told ZDNet, adding that if environments are properly configured, it’s unlikely that attackers could plant code or take control of devices. “These are real vulnerabilities, they are weaknesses in the stack. However, most of them are extremely dependent on how you use the software and how you integrate it as to whether you can experience these things.
    “If they’ve got a security department that understands DNS poisoning and things like that, then they will not be vulnerable at all, because they’ve configured things in a safe way,” Hughes said.

    Researchers also contacted coordination agencies including the CERT Coordination Center, BSI (the German Federal Cyber Security Authority) and ICS-CERT (the Industrial Control Systems Cyber Emergency Response Team) about the vulnerabilities. Siemens has also issued an advisory, although only four of the vulnerabilities affect Siemens products.

    To protect operational technology from cyber attacks of any kind, researchers at Forescout recommend putting network segmentation in place, so operational technology that doesn’t need to be exposed to the internet can’t be remotely discovered, and technology that doesn’t need to be connected to the internet at all sits on a separate, air-gapped network. Forescout has also released an open-source script to detect devices running NicheStack, to help provide visibility into networks – and help protect them.
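Forescout’s open-source detector fingerprints stacks like NicheStack by probing how devices respond on the wire; as a rough illustration of the simpler, banner-matching half of that idea, here is a minimal Python sketch. The marker strings and sample banners are assumptions for illustration only, not the actual fingerprints Forescout’s script uses.

```python
def looks_like_nichestack(banner: str) -> bool:
    """Very rough banner check. The real detector fingerprints protocol
    behaviour (TCP/ICMP response quirks), not just server strings."""
    markers = ("interniche", "nichestack")  # assumed marker strings
    b = banner.lower()
    return any(m in b for m in markers)

# Hypothetical banners a scan might collect
banners = [
    "Server: InterNiche Technologies Webserver",
    "Server: Apache/2.4.41 (Ubuntu)",
]
flagged = [b for b in banners if looks_like_nichestack(b)]
```

A real inventory pass would combine something like this with active probing and then cross-check flagged devices against the patched NicheStack 4.3 release.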

  • Qualys partners with Red Hat to improve Linux and Kubernetes security

    Everyone in the Linux and cloud world knows Red Hat. Everyone who pays attention to security knows Qualys. Now, the two are joining forces to bring Qualys’s Cloud Agent to Red Hat Enterprise Linux (RHEL) CoreOS and Red Hat OpenShift to better secure both systems.

    Qualys Cloud Agent is a lightweight software agent. Typically it uses about 2% of CPU resources, with bursts of up to 5%. Once in place, it takes a full configuration assessment of its host while running in the background and uploads that snapshot to the Qualys Cloud Platform. The agent itself is self-updating and self-healing, so you never need to reinstall or reboot it to keep the latest version running.

    OpenShift is, as most of you know, Red Hat’s Kubernetes distribution. CoreOS is Red Hat’s specialized version of RHEL for OpenShift. Besides being its base operating system, CoreOS also underpins OpenShift’s control plane.

    In this case, the Cloud Agent for OpenShift works with Qualys’s Container Security Runtime, which provides continuous discovery of packages and vulnerabilities for the complete OpenShift stack. It does this by placing a lightweight snippet of Qualys code into the container image. Once there, it enables policy-driven monitoring, detection, and blocking of unwanted container behavior at runtime. This eliminates the need for host-based sidecar management and privileged containers. Once instrumented in the image, it works within each container irrespective of where the container is instantiated, and it doesn’t need any additional administration containers.

    Specifically, the Qualys Cloud Agent for CoreOS on OpenShift brings the following features to OpenShift managers:

    • See the Full Inventory – Continuous visibility of installed software, open ports, and Red Hat Security Advisories (RHSA) for all Red Hat Enterprise Linux CoreOS nodes, with comprehensive reporting.

    • Manage Host Hygiene – Fully integrated with the Qualys Cloud Platform to automatically detect and manage host status related to patches and compliance adherence for known vulnerabilities.

    • Easily Deploy to the Host – Simplified deployment via the Qualys Cloud Agent to secure the host operating system. This approach eliminates the need to modify the host, open ports, or manage credentials.

    • Get Complete Coverage – Full coverage of Red Hat OpenShift and Qualys Container Security delivers comprehensive visibility from the host operating system through to images and containers running on OpenShift.

    Aaron Levey, Red Hat’s Head of Security Partner Ecosystem, said in a statement: “Qualys’ Cloud Platform and Cloud Agent helps administrators gain deeper visibility into known vulnerabilities that may be present on their Red Hat Enterprise Linux CoreOS nodes with pointers to associated Red Hat Security Advisories, leaning on the expertise of Red Hat as well as Qualys’ own skills in driving cloud-native security.”

    Sumedh Thakar, Qualys’s president and CEO, added: “By collaborating with Red Hat, we have built a unique approach to secure Red Hat Enterprise Linux CoreOS that provides complete control over containerized workloads enhancing Qualys’ ability to help customers discover, track, and continuously secure containers.”


  • Google's One Tap lets you sign into websites and apps without a password

    Google has unveiled Google Identity Services, a set of standard interfaces that lets developers integrate Google’s One Tap for faster user sign-ups and simpler sign-in. Google Identity Services aims to make it easier for businesses to gain new users and for those users to sign in. It’s available as a software development kit containing Google’s identity APIs, including the Sign in with Google button as well as the new One Tap prompt.

    “Sign in with Google and One Tap use secure tokens, rather than passwords, to sign users into partner websites and apps,” says Filip Verley, a product manager on the Google Identity team.

    The easier sign-up and sign-in processes are meant to help end users avoid the pressure of picking convenience over security when deciding on yet another password for an app or website. The One Tap prompt brings the login to the user on whatever page they’re on: instead of being interrupted by a redirect to a landing page, the user sees the One Tap prompt slide down from the top right of a website, or up from the bottom on a mobile device. “Users can sign in to or sign up using just one tap, without having to remember their credentials or to create a password,” explains Verley.

    Google has also improved the Sign in with Google button – the button users see that shows which Google Account they’re signing in to a website with – so that it displays more personalized user details when returning to a site. Reddit has implemented the new Sign in with Google button and the One Tap prompt, and Pinterest has also implemented the Google Identity Services APIs. According to Google, Reddit has nearly doubled new user sign-ups and returning user conversion.

    Key to One Tap sign-ups are ID tokens that are generated for users with Google Accounts on a device. The ID token is shared with the website operator. “When you display the One Tap UI, users are prompted to create a new account with your app using one of the Google Accounts on their device,” Google explains in its developer pages. “If the user chooses to continue, you get an ID token with basic profile information – their name, profile photo, and their verified email address – which you can use to create the new account.”

    Currently, Google One Tap works with Chrome on Android, macOS, Linux and Windows 10. No mobile browser is supported on iOS. Edge on macOS and Windows 10 is supported, while Firefox is supported on Android, macOS, Linux and Windows. Google notes that One Tap is not supported on Safari for iOS and macOS because of Apple’s Intelligent Tracking Prevention.
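The ID token Google describes is a signed JWT, so the basic profile claims sit in its base64url-encoded middle segment. As a sketch of that structure only, the Python below decodes the payload of a hypothetical, unsigned token; a real integration must verify the token’s signature against Google’s published keys (for example, with the google-auth library) rather than trusting a bare decode.

```python
import base64
import json

def decode_id_token_payload(id_token: str) -> dict:
    """Decode the (unverified) payload segment of a JWT-format ID token."""
    payload_b64 = id_token.split(".")[1]          # header.payload.signature
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def _b64(obj) -> str:
    return base64.urlsafe_b64encode(json.dumps(obj).encode()).decode().rstrip("=")

# Hypothetical token for illustration -- a real Google ID token is RSA-signed
fake_token = ".".join([
    _b64({"alg": "RS256", "typ": "JWT"}),
    _b64({"email": "ada@example.com", "name": "Ada", "email_verified": True}),
    "signature",
])
claims = decode_id_token_payload(fake_token)
```

The claims here mirror the fields Google says a site receives (name, verified email); the values are invented for the example.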

  • Hackers target Kubernetes to steal data and processing power. Now the NSA has tips to protect yourself

    The National Security Agency (NSA) has released its first Kubernetes hardening guidance to help organizations deploy the open-source platform for managing containerized applications. The guidance was co-authored by the DHS’s Cybersecurity and Infrastructure Security Agency (CISA) to make users aware of key threats and of configurations that minimize risk.

    “Kubernetes is commonly targeted for three reasons: data theft, computational power theft, or denial of service,” the agencies note in a joint announcement. “Data theft is traditionally the primary motivation; however, cyber actors may attempt to use Kubernetes to harness a network’s underlying infrastructure for computational power for purposes such as cryptocurrency mining.” Researchers recently warned that attackers were using misconfigured Kubernetes deployments to drop crypto-miners on enterprise hardware.

    The key hardening guidance isn’t unusual, but the report also offers an in-depth look at applying standard security mitigations in the context of complex environments that are often deployed in the cloud. At a high level, the guidance includes: scanning containers and pods for vulnerabilities or misconfigurations; running containers and pods with the least privileges possible; and using network separation, firewalls, strong authentication, and log auditing.

    Of course, standard cyber hygiene is key too, including applying patches, updates, and upgrades to minimize risk; the agencies also recommend vulnerability scans to check that patches are applied. The advice covers Kubernetes clusters, the control plane, worker nodes (which run containerized apps for the cluster), and the pods for containers that are hosted on those nodes.

    The NSA and CISA make a special point about supply chain risks, including software and hardware dependencies that could be compromised at any point in the supply chain before deployment. “The security of applications running in Kubernetes and their third-party dependencies relies on the trustworthiness of the developers and the defense of the development infrastructure. A malicious container or application from a third party could provide cyber actors with a foothold in the cluster,” the agencies note.

    The report also warns that remote attackers target control plane components lacking appropriate access controls, as well as worker nodes that live outside the locked-down control plane. Insider threats include admins with high privileges and physical access to systems or hypervisors. Pods in particular need to be hardened against exploitation, because they’re often an attacker’s initial execution environment after exploiting a container. The report also recommends running non-root containers and rootless container engines to prevent root execution, as many container services run by default as the privileged root user.
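The least-privilege advice for pods maps onto concrete `securityContext` fields in a pod spec: `runAsNonRoot`, `privileged`, and `allowPrivilegeEscalation`. As a minimal sketch, the Python below audits a simplified pod-spec fragment (the dict layout and container names are assumptions for illustration; real audits would parse full manifests or use an admission controller).

```python
def audit_container(container: dict) -> list:
    """Flag securityContext settings that conflict with least-privilege guidance."""
    sc = container.get("securityContext", {})
    findings = []
    if not sc.get("runAsNonRoot", False):
        findings.append("runAsNonRoot not enforced (container may run as root)")
    if sc.get("privileged", False):
        findings.append("privileged container")
    if sc.get("allowPrivilegeEscalation", True):  # Kubernetes defaults this to true
        findings.append("privilege escalation not disabled")
    return findings

pod = {  # hypothetical pod spec fragment
    "containers": [
        {"name": "app", "securityContext": {"runAsNonRoot": True,
                                            "allowPrivilegeEscalation": False}},
        {"name": "sidecar", "securityContext": {"privileged": True}},
    ]
}
report = {c["name"]: audit_container(c) for c in pod["containers"]}
```

In this sketch the hardened "app" container passes cleanly, while the privileged "sidecar" collects every finding.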

  • Facebook brings Snapchat-like view once photo and video feature to WhatsApp

    Image: Facebook
    Facebook has announced it is rolling out a new view once feature that it says will give users “more control over their privacy”. When photos and videos are shared on WhatsApp, they are normally saved automatically to the recipient’s camera roll. The view once feature lets users send photos and videos that disappear from a WhatsApp chat after the recipient has opened them once. Recipients will be unable to forward, save, star, or share media sent as view once.

    Once the media has been viewed, the message will appear as “opened”, which Facebook said will “help avoid any confusion about what was happening in the chat at the time”. Senders, however, will only be able to see whether a recipient has opened a view once photo or video if read receipts are turned on. Media shared using the feature is marked with a “one-time” icon.

    If a view once photo or video is not opened within 14 days of being sent, it expires from the chat. It can, however, be restored from a backup if the message was unopened at the time of the backup; if it has already been opened, Facebook said the media will not be included in the backup and cannot be restored.

    The company assured that, like all personal messages sent on WhatsApp, view once media is “protected by the platform’s end-to-end encryption”. But like all encrypted media on WhatsApp, it “may be stored for a few weeks on WhatsApp’s servers” after it’s been sent.

    “While taking photos or videos on our phones has become such a big part of our lives, not everything we share needs to become a permanent digital record. On many phones, simply taking a photo means it will take up space in your camera roll forever,” the company said in a blog post.
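The lifecycle described above – one view, then gone, with a 14-day expiry for unopened media – amounts to a small state machine. This is a toy Python sketch of the rules as the article states them, not WhatsApp’s actual implementation:

```python
from datetime import datetime, timedelta

EXPIRY = timedelta(days=14)  # unopened view-once media expires after 14 days

class ViewOnceMedia:
    """Toy model of the view-once lifecycle described in the article."""
    def __init__(self, sent_at: datetime):
        self.sent_at = sent_at
        self.opened = False

    def status(self, now: datetime) -> str:
        if self.opened:
            return "opened"    # cannot be re-viewed; excluded from later backups
        if now - self.sent_at > EXPIRY:
            return "expired"   # never opened within 14 days
        return "available"

    def view(self, now: datetime) -> None:
        if self.status(now) != "available":
            raise ValueError("media can no longer be viewed")
        self.opened = True

sent = datetime(2021, 8, 1)
photo = ViewOnceMedia(sent)
photo.view(sent + timedelta(days=1))  # first open succeeds; a second would raise
```

The backup rule follows directly: only media still in the "available" or "expired"-pending state at backup time (i.e. never opened) could be restored.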

    “That’s why today we’re rolling out new View Once photos and videos that disappear from the chat after they’ve been opened, giving users even more control over their privacy.

    “For example, you might send a View Once photo of some new clothes you’re trying on at a store, a quick reaction to a moment in time, or something sensitive like a Wi-Fi password.”

    Facebook introduced a similar feature, called Vanish Mode, to Messenger and Instagram at the end of last year.

  • Akamai reports Q2 revenue, EPS above expectations, shares slip

    Bandwidth provider Akamai Technologies this afternoon reported Q2 revenue and profit that both topped expectations, driven by a 25% rise in security revenue. The company’s forecasts for the current quarter and for the full year were also above Wall Street’s expectations. Despite the upbeat results and outlook, the report sent Akamai shares down 4% in late trading.

    CEO Tom Leighton called the results “excellent,” remarking that the “performance was highlighted by continued strong growth across our security solutions globally.” Added Leighton: “As the internet has become increasingly critical to consumers and businesses, our customers have turned to us more than ever to power and protect exceptional online experiences.”

    Revenue in the three months ended in June rose 7%, year over year, to $853 million, yielding a net profit of $1.42 a share. Analysts had been modeling $846 million and $1.38 per share. Revenue from Akamai’s security business rose by 25%, year over year, to $325 million, while its “edge technology” revenue declined by 1% to $528 million.

    International revenue was up 15% in the quarter, while Akamai’s U.S. revenue rose 1%. The report follows issues last month with Akamai’s DNS servers that led to outages at major Akamai customers, including Amazon Web Services, Microsoft, Delta Air Lines, Oracle, Capital One, and AT&T.

    For the current quarter, the company sees revenue of $845 million to $860 million and EPS in a range of $1.37 to $1.41; that compares to consensus of $845 million and a $1.35 profit per share. For the full year, the company sees revenue in a range of $3.42 billion to $3.45 billion and EPS of $5.45 to $5.65; that compares to consensus of $3.43 billion and a $5.53 profit per share.
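As a quick back-of-envelope check on the reported figures (all of which are rounded), the two segments sum to the headline number, and the 7% growth rate implies the size of the year-ago quarter:

```python
security_rev = 325   # $M, up 25% year over year
edge_rev = 528       # $M, down 1% year over year
total_rev = security_rev + edge_rev         # matches the $853M reported
prior_year_total = round(total_rev / 1.07)  # implied year-ago quarter, ~$797M
```

This is arithmetic on the article’s rounded numbers, not Akamai’s actual segment accounting.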


  • It's time to improve Linux's security

    Is Linux more secure than Windows? Sure. But that’s a very low bar. Kees Cook, a Linux security expert, Debian Linux developer, and Google Security Engineer, is well aware that Linux could be more secure. As Cook tweeted, “We need more investment in bug fixers, reviewers, testers, infrastructure builders, toolchain devs, and security devs.”


    Cook details what he means in his Google Security Blog post, “Linux Kernel Security Done Right.” Cook wrote, “the Linux kernel runs well: when driving down the highway, you’re not sprayed in the face with oil and gasoline, and you quickly get where you want to go. However, in the face of failure, the car may end up on fire, flying off a cliff.”

    This is true. With great power comes great responsibility. You can do almost anything with Linux, but you can also completely ruin your Linux system with a single command. And those are only the ultra-powerful commands, which you should use with the greatest of caution. Cook is referring to the other, far less visible security problems buried deep in Linux.

    As Cook said, while Linux enables us to do amazing things, “What’s still missing, though, is sufficient focus to make sure that Linux fails well too. There’s a strong link between code robustness and security: making it harder for any bugs to manifest makes it harder for security flaws to manifest. But that’s not the end of the story. When flaws do manifest, it’s important to handle them effectively.”

    That isn’t easy, as Cook points out. Linux is written in C, which means “it will continue to have a long tail of associated problems. Linux must be designed to take proactive steps to defend itself from its own risks. Cars have seat belts not because we want to crash, but because it is guaranteed to happen sometimes.” While, moving forward, some of Linux will be written in the far safer Rust, C will remain the foundation of Linux for at least another generation.

    That means, Cook continued, “though everyone wants a safe kernel running on their computer, phone, car, or interplanetary helicopter, not everyone is in a position to do something about it. Upstream kernel developers can fix bugs but have no control over what downstream vendors incorporate into their products. End-users choose their products but don’t usually have control over what bugs are fixed or what kernel is used. Ultimately, vendors are responsible for keeping their product’s kernels safe.”

    This is difficult. Cook observed, “the stable kernel releases (‘bug fixes only’) each contain close to 100 new fixes per week. Faced with this high rate of change, a vendor can choose to ignore all the fixes, pick out only ‘important’ fixes, or face the daunting task of taking everything.”

    Believe it or not, many vendors, especially in the Internet of Things (IoT), choose not to fix anything. Sure, they could do it. Several years ago, Linus Torvalds, Linux’s creator, pointed out that “in theory, open-source [IoT devices] can be patched. In practice, vendors get in the way.” Cook remarked that, with malware here, botnets there, and state attackers everywhere, vendors certainly should protect their devices, but all too often they don’t: “Unfortunately, this is the very common stance of vendors who see their devices as just a physical product instead of a hybrid product/service that must be regularly updated.”

    Linux distributors, however, aren’t as neglectful. They tend to “cherry-pick only the ‘important’ fixes. But what constitutes ‘important’ or even relevant? Just determining whether to implement a fix takes developer time.”

    It hasn’t helped that Linus Torvalds has sometimes made light of security issues. For example, in 2017, Torvalds dismissed some security developers as “f-cking morons.” He didn’t mean to put all security developers in the same basket, but his colorful language set the tone for too many Linux developers. So it was that David A. Wheeler, The Linux Foundation’s director of open-source supply chain security, said in the Report on the 2020 FOSS Contributor Survey that “it is clear from the 2020 findings that we need to take steps to improve security without overburdening contributors.”

    In Linux distributor circles, Cook continued, “The prevailing wisdom has been to choose vulnerabilities to fix based on the Mitre Common Vulnerabilities and Exposures (CVE) list.” But this is based on the faulty assumption that “all-important flaws (and therefore fixes) would have an associated CVE.” They don’t. Given the volume of flaws and their applicability to a particular system, not all security flaws have CVEs assigned, nor are they assigned in a timely manner. Evidence shows that for Linux CVEs, more than 40% had been fixed before the CVE was even assigned, with the average delay being over three months after the fix. Some fixes went years without having their security impact recognized. On top of this, product-relevant bugs may not even qualify for a CVE. Finally, upstream developers aren’t actually interested in CVE assignments; they spend their limited time actually fixing bugs.

    In short, if you rely on cherry-picking CVEs, you’re “all but guaranteed to miss important vulnerabilities that others are actively fixing, which is almost worse than doing nothing since it creates the illusion that security updates are being appropriately handled.” Cook continued:

    So what is a vendor to do? The answer is simple, if painful: continuously update to the latest kernel release, either major or stable. Tracking major releases means gaining security improvements along with bug fixes, while stable releases are bug fixes only. For example, although modern Android phones ship with kernels that are based on major releases from almost two to four years earlier, Android vendors do now, thankfully, track stable kernel releases. So even though the features being added to newer major kernels will be missing, all the latest stable kernel fixes are present.

    Performing continuous kernel updates (major or stable) understandably faces enormous resistance within an organization due to fear of regressions – will the update break the product? The answer is usually that a vendor doesn’t know, or that the update frequency is shorter than the time they need for testing. But the problem with updating is not that the kernel might cause regressions; it’s that vendors don’t have sufficient test coverage and automation to know the answer. Testing must take priority over individual fixes.

    How can software vendors possibly do that? Cook considers it a painful but, in the end, “simple resource allocation problem” that “is more easily accomplished than might be imagined: downstream redundancy can be moved into greater upstream collaboration.” What does that mean? Cook explained:

    With vendors using old kernels and backporting existing fixes, their engineering resources are doing redundant work. For example, instead of 10 companies each assigning one engineer to backport the same fix independently, those developer hours could be shifted to upstream work where 10 separate bugs could be fixed for everyone in the Linux ecosystem. This would help address the growing backlog of bugs. Looking at just one source of potential kernel security flaws, the syzkaller dashboard shows the number of open bugs is currently approaching 900 and growing by about 100 a year, even with about 400 a year being fixed.

    He makes an excellent point. I know dozens of developers who spend their days porting changes from the stable kernel into their distribution-specific kernels. It’s useful work, but Cook’s right; much of it consists of duplicated effort.

    In addition, Cook suggests that “beyond just squashing bugs after the fact, more focus on upstream code review will help stem the tide of their introduction in the first place, with benefits extending beyond just the immediate bugs caught. Capable code review bandwidth is a limited resource. Without enough people dedicated to upstream code review and subsystem maintenance tasks, the entire kernel development process bottlenecks.” This is a known problem.
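The syzkaller numbers Cook cites imply an inflow of roughly 500 new bugs a year. A back-of-envelope Python sketch of that arithmetic (a rough linear reading of the figures, not a forecast):

```python
open_bugs = 900        # approximate current syzkaller backlog
fixed_per_year = 400   # approximate bugs fixed per year
net_growth = 100       # backlog still grows ~100/year despite the fixes

found_per_year = fixed_per_year + net_growth   # implied inflow: ~500/year
years_to_double = open_bugs / net_growth       # ~9 years at the current rate
```

In other words, even fixing 400 bugs a year only slows the backlog’s growth; closing it would require raising the fix rate above the inflow, which is Cook’s argument for shifting redundant backporting effort upstream.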
    One major reason why the University of Minnesota’s playing security games with the Linux kernel developers annoyed the programmers so much was that it wasted their time. And, as Greg Kroah-Hartman, the Linux kernel maintainer for the stable branch, tartly observed, “Linux kernel developers do not like being experimented on; we have enough real work to do.” Amen. The Linux kernel maintainers must oversee hundreds, even thousands, of code updates a week.

    As Cook remarked, “long-term Linux robustness depends on developers, but especially on effective kernel maintainers. … Maintainers are built not only from their depth of knowledge of a subsystem’s technology but also from their experience with the mentorship of other developers and code reviews. Training new reviewers must become the norm, motivated by making the upstream review part of the job. Today’s reviewers become tomorrow’s maintainers. If each major kernel subsystem gained four more dedicated maintainers, we could double productivity.”

    Besides simply adding more reviewers and maintainers, Cook also thinks “improving Linux’s development workflow is critical to expanding everyone’s ability to contribute. Linux’s ‘email only’ workflow is showing its age.” Still, the upstream development of more automated patch tracking, continuous integration, fuzzing, coverage, and testing will make the development process significantly more efficient.

    And, as DevOps continuous integration and delivery (CI/CD) users know, shifting testing into the early stages of development is much more efficient. Cook observed, “it’s more effective to test during development. When tests are performed against unreleased kernel versions (e.g. Linux-next) and reported upstream, developers get immediate feedback about bugs. Fixes can be developed before a flaw is ever actually released; it’s always easier to fix a bug earlier than later.”

    But there’s still more to be done. Cook believes we “need to proactively eliminate entire classes of flaws, so developers cannot introduce these types of bugs ever again. Why fix the same kind of security vulnerability 10 times a year when we can stop it from ever appearing again?” This is already being done in the Linux kernel. For example, “Over the last few years, various fragile language features and kernel APIs have been eliminated or replaced (e.g. VLAs, switch fallthrough, addr_limit). However, there is still plenty more work to be done. One of the most time-consuming aspects has been the refactoring involved in making these usually invasive and context-sensitive changes across Linux’s 25 million lines of code.”

    It’s not just the code that needs cleaning of inherent security problems. Cook wants “the compiler and toolchain … to grow more defensive features (e.g. variable zeroing, CFI, sanitizers). With the toolchain technically ‘outside’ the kernel, its development effort is often inappropriately overlooked and underinvested. Code safety burdens need to be shifted as much as possible to the toolchain, freeing humans to work in other areas. On the most progressive front, we must make sure Linux can be written in memory-safe languages like Rust.”

    So, what can you do to help this process? Cook proclaimed you shouldn’t wait another minute:

    If you’re not using the latest kernel, you don’t have the most recently added security defenses (including bug fixes). In the face of newly discovered flaws, this leaves systems less secure than they could have been. Even when mediated by careful system design, proper threat modeling, and other standard security practices, the magnitude of risk grows quickly over time, leaving vendors to do the calculus of determining how old a kernel they can tolerate exposing users to. Unless the answer is ‘just abandon our users,’ engineering resources must be focused upstream on closing the gap by continuously deploying the latest kernel release.

    Specifically, Cook concluded, “Based on our most conservative estimates, the Linux kernel and its toolchains are currently underinvested by at least 100 engineers, so it’s up to everyone to bring their developer talent together upstream. This is the only solution that will ensure a balance of security at reasonable long-term cost.”

    So are you ready for the challenge? I hope so. Linux is far too important across all of technology for us not to do our best to protect it and harden its security.

  • Bugs in Chrome's JavaScript engine can lead to powerful exploits. This project aims to stop them

    A new project hopes to beef up the security of V8, a part of the Chrome browser that most users aren’t aware of but that hackers increasingly see as a juicy target. JavaScript makes the web go around, and Google has had to patch multiple zero-day, or previously unknown, flaws in Chrome’s V8 JavaScript engine this year. In April, Google admitted that a high-severity bug in V8, tracked as CVE-2021-21224, was being exploited in the wild. Chrome has over two billion users, so when zero-day exploits strike Chrome, it’s a big deal.

    V8, an open-source Google project, is a powerful JavaScript engine for Chrome that’s helped advance the web and web applications. V8 also powers the server-side runtime Node.js.

    Now Samuel Groß, a member of the Google Project Zero security research team, has detailed a V8 sandbox proposal to help protect the engine’s memory from nastier bugs using virtual machine and sandboxing technologies. “V8 bugs typically allow for the construction of unusually powerful exploits. Furthermore, these bugs are unlikely to be mitigated by memory safe languages or upcoming hardware-assisted security features such as MTE or CFI,” explains Groß, referring to security technologies like control-flow integrity (CFI) and Intel’s control-flow enforcement technology (CET). “As a result, V8 is especially attractive for real-world attackers.”

    Groß’s comments suggest that even adopting a memory-safe language like Rust – which Google has adopted for new Android code – wouldn’t immediately solve the security problems faced by V8, which is written in C++.

    He also outlines the broad design objectives but, signaling the size of the project, stresses that the sandbox is in its infancy and that there are some big hurdles to overcome. Still, V8 is a Google-led open-source project, and given that V8 has been the source of security vulnerabilities in Chrome, there is a chance the Project Zero researcher’s proposal could make it across the line.

    The sandbox aims to prevent future flaws in V8 from corrupting a computer’s memory outside of the V8 heap, which would otherwise allow an attacker to execute malicious code. One consideration for the additional protections is the impact on performance: Groß estimates his proposal would cause an overhead of about “1% overall on real-world workloads”.

    Groß explains that the problem with V8 stems from its JIT compilers, which can be tricked into emitting machine code that corrupts memory at runtime. “Many V8 vulnerabilities exploited by real-world attackers are effectively 2nd order vulnerabilities: the root-cause is often a logic issue in one of the JIT compilers, which can then be exploited to generate vulnerable machine code (e.g. code that is missing a runtime safety check). The generated code can then in turn be exploited to cause memory corruption at runtime.”

    He also highlights the shortcomings of the latest security technologies, including hardware-based mitigations, that will make V8 an attractive target for years to come, and hence why V8 may need a sandbox approach.
    These include:

    • The attacker has a great amount of control over the memory corruption primitive and can often turn these bugs into highly reliable and fast exploits.

    • Memory-safe languages will not protect from these issues, as they are fundamentally logic bugs.

    • Due to CPU side-channels and the potency of V8 vulnerabilities, upcoming hardware security features such as memory tagging will likely be bypassable most of the time.

    Despite downplaying the likelihood of the new V8 sandbox actually being adopted, the researcher seems upbeat about its prospects for doing its intended job by requiring an attacker to chain together two separate vulnerabilities in order to execute code of their choice. “With this sandbox, attackers are assumed to be able to corrupt memory inside the virtual memory cage arbitrarily and from multiple threads, and will now require an additional vulnerability to corrupt memory outside of it, and thus to execute arbitrary code,” he wrote.