More stories

  • Is LiDAR on its way out? The business case for saying goodbye

    Among the deluge of robotics predictions you're bound to encounter this year, there's one you should pay particular attention to: The way robots "see" is fundamentally changing, and that's going to have a huge impact on the utility, cost, and proliferation of robotic systems.

    Of course, it's a bit of a mischaracterization to talk about robots "seeing," or at least a reductive shorthand for a complex interplay of software and hardware that's allowing robots to do much more sophisticated sensing with much less costly equipment. Machine vision incorporates a variety of technologies and increasingly relies on software in the form of machine learning and AI to interpret and process data from 2D sensors in ways that would have been unachievable even a short time ago.

    With this increasing reliance on software comes an interesting shift away from highly specialized sensors like LiDAR, long a staple for robots operating in semi-structured and unstructured environments. Robotics experts working at the intersection of hardware and AI software are finding that LiDAR isn't actually necessary. Rather, machine vision is providing higher-quality mapping at a more affordable cost, especially when it comes to indoor robotics and automation.

    See also: 2022: A major revolution in robotics.

    To learn more about the transformation underway, I connected with Rand Voorhies, CTO and co-founder at inVia Robotics, about machine vision, the future of automation, and whether LiDAR is still going to be a foundational sensor for robots in the years ahead.

    GN: Where have the advances come in machine vision, the sensors or the software?

    Rand Voorhies: While 2D imaging sensors have indeed seen continuous improvement, their resolution, noise, and quality have rarely been a limiting factor in the widespread adoption of machine vision. While there have been several interesting sensor improvements in the past decade (such as polarization sensor arrays and plenoptic/light-field cameras), none have really gained traction, as the main strengths of machine vision sensors are their cost and ubiquity. The most groundbreaking advancement has really been on the software front, through the advent of deep learning. Modern deep learning machine vision models seem like magic compared to the technology from ten years ago. Any teenager with a GPU can now download and run object recognition libraries that would have blown the top research labs out of the water ten years ago. The fact of the matter is that 2D imaging sensors capture significantly more data than a typical LiDAR sensor – you just have to know how to use it.
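    Voorhies' point about off-the-shelf object recognition is easy to illustrate. The sketch below, which assumes a recent PyTorch/torchvision install and an arbitrary local image named photo.jpg, loads a pretrained detector and prints what it finds; it shows how low the barrier to entry has become and is not specific to inVia's own stack.

        # Minimal off-the-shelf object detection with a pretrained model.
        # Assumes torch and torchvision are installed and "photo.jpg" exists locally.
        import torch
        from torchvision.io import read_image
        from torchvision.models.detection import fasterrcnn_resnet50_fpn

        model = fasterrcnn_resnet50_fpn(weights="DEFAULT")  # downloads COCO-trained weights
        model.eval()

        img = read_image("photo.jpg").float() / 255.0  # uint8 CHW -> float in [0, 1]

        with torch.no_grad():
            (prediction,) = model([img])  # takes a list of images, returns one dict per image

        for box, label, score in zip(prediction["boxes"], prediction["labels"], prediction["scores"]):
            if score > 0.8:
                print(f"class {int(label)} at {box.tolist()} (confidence {float(score):.2f})")

    A handful of lines and a consumer GPU (or even a CPU) are enough to get useful detections, which is exactly the shift Voorhies describes.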

    While cutting-edge machine vision has been improving in leaps and bounds, other factors have also contributed to the adoption of even simpler machine vision techniques. The continual evolution of battery and motor technology has driven component costs down to the point where robotic systems can be produced that provide a very strong ROI to the end user. Given a good ROI, customers (in our case, warehouse operators) are happy to annotate their environment with "fiducial" stickers. These stickers are almost like a cheat code for robotics, as very inexpensive machine vision solutions can detect the position and orientation of a fiducial sticker with ultra-precision. By sticking these fiducials all over a warehouse, robots can easily build a map that allows them to localize themselves.

    GN: Can you give a little context on LiDAR adoption? Why has it become such a standardized sensing tool in autonomous mobility applications? What were the early hurdles to machine vision that led developers to LiDAR?

    Rand Voorhies: Machine vision has been used to guide robots since before LiDAR existed. LiDAR started gaining significant popularity in the early 2000s due to some groundbreaking academic research from Sebastian Thrun, Daphne Koller, Michael Montemerlo, Ben Wegbreit, and others that made processing data from these sensors feasible. That research and experience led to the dominance of the LiDAR-based Stanley autonomous vehicle in the DARPA Grand Challenge (led by Thrun), as well as to the founding of Velodyne (by David Hall, another Grand Challenge participant), which produces what many now consider to be the de facto autonomous car sensor. The Challenge showed that LiDAR was finally a viable technology for robots navigating through unknown, cluttered environments at high speeds. Since then, there has been a huge increase in academic interest in improving algorithms for processing LiDAR sensor data, and there have been hundreds of papers published and PhDs minted on the topic. As a result, graduates have been pouring into the commercial space with heaps of academic LiDAR experience under their belts, ready to put theory into practice.

    In many cases, LiDAR has proven to be very much the right tool for the job. A dense 3D point cloud has long been the dream of roboticists and can make obstacle avoidance and pathfinding significantly easier, particularly in unknown, dynamic environments. However, in some contexts, LiDAR is simply not the right tool for the job and can add unneeded complexity and expense to an otherwise simple solution. Determining when LiDAR is right and when it's not is key to building robotic solutions that don't just work — they also provide positive ROI to the customer.

    At the same time, machine vision has advanced as well. One of the early hurdles in machine vision can be understood with a simple question: "Am I looking at a large object that's far away, or a tiny object that's up close?" With traditional 2D vision, there was simply no way to differentiate. Even our brains can be fooled, as seen in funhouse perspective illusions.
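    The ambiguity Voorhies describes falls straight out of the pinhole camera model: apparent size in the image depends only on the ratio of an object's real size to its distance, so very different objects can look identical. The numbers below are invented purely to illustrate the point.

        # Pinhole projection: apparent size in pixels = focal_length_px * real_size / distance.
        focal_length_px = 800.0  # assumed focal length, in pixels

        def apparent_size_px(real_size_m: float, distance_m: float) -> float:
            return focal_length_px * real_size_m / distance_m

        print(apparent_size_px(1.8, 10.0))  # a 1.8 m person 10 m away   -> 144.0 pixels
        print(apparent_size_px(0.18, 1.0))  # an 18 cm figurine 1 m away -> 144.0 pixels

    A single 2D image cannot tell these two scenes apart; the extra information has to come from somewhere.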
    Modern machine vision uses a range of techniques to overcome this ambiguity, including:

    • Estimating the distance of an object by understanding the larger context of the scene, e.g., I know my camera is 2m off the ground, and I understand that car's tires are 1000 pixels along the street, so it must be 25m away.

    • Building a 3D understanding of the scene by using two or more overlapping cameras (i.e., stereo vision).

    • Building a 3D understanding of the scene by "feeling" how the camera has moved, e.g., with an IMU (inertial measurement unit – sort of like a robot's inner ear) and correlating those movements with the changing images from the camera.

    Our own brains use all three of these techniques in concert to give us a rich understanding of the world around us that goes beyond simply building a 3D model.

    GN: Why is there a better technological case for machine vision over LiDAR for many robotics applications?

    Rand Voorhies: LiDAR is well suited for outdoor applications where there are a lot of unknowns and inconsistencies in terrain. That's why it's the best technology for self-driving cars. In indoor environments, machine vision makes the better technological case. With laser pulses bouncing off objects all over a warehouse, robots can easily get confused under the direction of LiDAR. They have a difficult time differentiating, for example, a box of inventory from a rack of inventory — both are just objects to them. When the robots are deep in the aisles of large warehouses, they often get lost because they can't differentiate their landmarks. Then they have to be re-mapped.

    By using machine vision combined with fiducial markers, our inVia Picker robots know exactly where they are at any point in time. They can "see" and differentiate their landmarks. Nearly all LiDAR-based warehouse/industrial robots require some fiducial markers to operate; machine vision-based robots require more of them. The latter requires additional time and cost to deploy long rolls of stickers versus fewer individual stickers, but when you factor in the time and cost of performing regular LiDAR mapping, the balance swings far in favor of pure vision. At the end of the day, 2D machine vision in warehouse settings is cheaper, easier, and more reliable than LiDAR.

    If your use of robots does not require very high precision and reliability, then LiDAR may be sufficient. However, for systems that cannot afford any loss in accuracy or uptime, machine vision systems can really show their strengths. Fiducial-based machine vision systems allow operators to put markers exactly where precision is required. With inVia's system, which picks and places totes off of racking, placing those markers on the totes and the racking provides millimeter-level accuracy to ensure that every tote is placed exactly where it's supposed to go without fail. Trying to achieve this with a pure LiDAR system would be cost and time prohibitive for commercial use.
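    To give a sense of how little machinery fiducial-based localization needs, here is a minimal marker-detection sketch using OpenCV's ArUco markers, a common choice for this kind of sticker. The file name, dictionary choice, and OpenCV version requirement (4.7 or newer for this API) are assumptions, and inVia's own markers and software may well differ.

        # Detect ArUco fiducial markers in an image (requires opencv-contrib-python >= 4.7).
        import cv2

        dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
        detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

        gray = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)  # "frame.jpg" is a placeholder
        corners, ids, _rejected = detector.detectMarkers(gray)

        if ids is not None:
            for marker_id, quad in zip(ids.flatten(), corners):
                print(f"marker {marker_id}: pixel corners {quad.reshape(4, 2).tolist()}")
        # Given the marker's physical size and the camera intrinsics, cv2.solvePnP on these
        # corners yields the marker's 3D position and orientation relative to the camera,
        # which is how a cheap 2D camera turns stickers into precise localization landmarks.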
    GN: Why is there a better business case?

    Rand Voorhies: On the business side, the case is simple as well. Machine vision saves money and time. While LiDAR technology has decreased in cost over the years, it's still expensive. We're committed to finding the most cost-effective technologies and components for our robots in order to make automation accessible to businesses of any size. At inVia we're driven by an ethos of making complex technology simple. The difference in the time it takes to fulfill orders with machine vision versus with LiDAR and all of its re-mapping requirements is critical. It can mean the difference between getting an order to a customer on time or a day late. Every robot that gets lost due to LiDAR re-mapping reduces that system's ROI. The hardware itself is also cheaper when using machine vision. Cameras are cheaper than LiDAR, and most LiDAR systems need cameras with fiducials anyway. With machine vision, there's an additional one-time labor cost to apply fiducials. However, applying fiducials one time to totes and racking is extremely cheap labor-wise and results in a more robust system with less downtime and fewer errors.

    GN: How will machine vision change the landscape with regard to robotics adoption in sectors such as logistics and fulfillment?

    Rand Voorhies: Machine vision is already making an impact in logistics and fulfillment centers by automating rote tasks to increase the productivity of labor. Warehouses that use robots to fulfill orders can supplement a scarce workforce and let their people manage the higher-order tasks that involve decision-making and problem-solving. Machine vision enables fleets of mobile robots to navigate the warehouse, performing key tasks like picking, replenishing, inventory moves, and inventory management. They do this without disruption and with machine-precision accuracy. Robotics systems driven by machine vision are also removing barriers to adoption because of their affordability. Small and medium-sized businesses that used to be priced out of the market for traditional automation are able to reap the same benefits of automating repetitive tasks and, therefore, grow their businesses.

    GN: How should warehouses go about surveying the landscape of robotics technologies as they look to adopt new systems?

    Rand Voorhies: There are a lot of robotic solutions on the market now, and each of them uses very advanced technology to solve a specific problem warehouse operators are facing. So, the most important step is to identify your biggest challenge and find the solution that solves it. For example, at inVia we have created a solution that specifically tackles a problem that is unique to e-commerce fulfillment. Fulfilling e-commerce orders requires random access to a high number of different SKUs in individual counts. That's very different from retail fulfillment, where you're retrieving bulk quantities of SKUs and shipping them out in cases and/or pallets. The two operations require very different storage and retrieval setups and plans. We've created proprietary algorithms that specifically create faster paths and processes to retrieve randomly accessed SKUs.

    E-commerce is also much more labor-dependent and time-consuming, and, therefore, costly. So, those warehouses want to adopt robotics technologies that can help them reduce the cost of their labor, as well as the time it takes to get orders out the door to customers. They have SLAs (service level agreements) that dictate when orders need to be picked, packed, and shipped. They need to ask vendors how their technology can help them eliminate blocks to meeting those SLAs.

  • FlexBooker apologizes for breach of 3.7 million user records, partial credit card information

    Scheduling platform FlexBooker apologized this week for a data breach that involved the sensitive information of 3.7 million users. In a statement, the company told ZDNet a portion of its customer database had been breached after its AWS servers were compromised on December 23. FlexBooker said its "system data storage was also accessed and downloaded" as part of the attack. The company added that it worked with Amazon to restore a backup and was able to bring operations back up in about 12 hours. "We sent a notification to all affected parties and have worked with Amazon Web Services, our hosting provider, to ensure that our accounts are re-secured," a spokesperson said. "We deeply apologize for the inconvenience caused by this issue."

    The spokesperson said the data was "limited to names, email addresses, and phone numbers," and a website notifying customers of the breach says the same thing. But Australian security expert Troy Hunt, who runs the Have I Been Pwned site that tracks breached information, said the trove of stolen data included password hashes and partial credit card information for some accounts. Hunt added that the data "was found being actively traded on a popular hacking forum."

    A FlexBooker spokesperson confirmed Hunt's report, telling ZDNet that the last three digits of card numbers were included in the breach, but not the full card information, expiration date, or CVV.

    Reporters from Bleeping Computer said the group behind the attack, Uawrongteam, leaked information from FlexBooker and two other companies on a hacking forum. They tied the breach to a DDoS attack that FlexBooker reported on December 23. In its log of the incident, FlexBooker said the attack caused widespread outages of its core application functionality and required help from AWS to resolve. "We have been informed that this should not have been possible, but before they were able to assist technically, they had to ensure that all our security practices were correct. They have completed this step, and this has now gone to their leadership team who have approved dedicating technical resources to this immediately," FlexBooker said of the assistance from AWS on December 24. "We truly apologize again for the impact here. We have been on the phone with AWS support for 7 hours now, trying to push them through. A brute force attack such as this should not have been possible, so we are pushing them hard to put a network-level solution in place to ensure this is both resolved quickly and also permanently so this never happens again in the future."

    The issue was resolved about eight hours later. Shared Assessments' Nasser Fattah said he has seen instances where DDoS attacks are launched as a distraction to disrupt vital business services while the adversary's primary goal is to gain access and exfiltrate sensitive information. "We know that there are financial losses associated with system outages, hence, why security teams have all eyes on glass, so to speak, when there is a DDoS attack," Fattah said. "And when this happens, it is important to be prepared for the possibility of a multifaceted attack and be very diligent with monitoring other anomalies happening on the network."

  • Ransomware attack on FinalSite still disrupting email services at thousands of schools

    Education technology company FinalSite is still in the process of recovering from a devastating ransomware attack this week that crippled many of the services it provides to thousands of schools across the world.

    In an update on Friday morning, the company said the "vast majority" of its sites are back up and running on the front end, but many systems are still facing a variety of issues. It urged its customers — which include thousands of schools across 115 different countries — to limit "software usage to critical information updates for your front-end" until they have confirmed that all functionality is working fully. "Examples of usage to avoid include sending email/notifications, workflows, relying on calendar and athletic alerts, uploading data etc.," the company said. While some front-end systems are back, FinalSite said some styling may be missing, and users may not be able to access the admin side of their site. Many users will continue to see 503 errors, according to FinalSite. The company first informed customers of issues on January 4 and said its engineers have been working around the clock to resolve the issue. By Thursday, the company admitted that it was suffering from a ransomware attack.

    "We are incredibly sorry for this prolonged outage and fully realize the stress it is causing your organizations. While we have made progress overnight to get all websites up and running, full restoration has taken us longer than anticipated," the company wrote in a message to customers.

    "In the ensuing time since the incident, our security, infrastructure, and engineering teams have been working around the clock to restore backup systems and bring our network back to full performance, in a safe and secure manner. Third-party forensic specialists are assisting us in bringing things back slowly and carefully to ensure the environment is safe and stable."

    One Reddit user said about 2,200 school websites hosted by FinalSite began to go down on January 4. "Many districts are complaining that they are unable to use their emergency notification system to warn their communities about closures due to weather or COVID-19 protocol," the user wrote. "The impact of this outage is far greater than the attention it has received."

    A FinalSite spokesperson later told TechCrunch that about 5,000 of its 8,000 customers were affected by the ransomware incident. Local news outlets across the US reported school districts having issues with their websites. One school administrator contacted Bleeping Computer to report that their website was down, forcing them to contact parents about the outage. The administrator was told that there is no timetable for services to return to normal. Some schools took to Twitter to inform students and parents about website outages, noting that their websites were down because of the ransomware attack on FinalSite.

    Former FBI analyst Crane Hassold likened the attack to the ransomware incident that affected Kaseya and said it illustrated the domino effect ransomware can have on other companies. "When a company that provides solutions for other companies gets hit with ransomware, similar to what we saw with Kaseya last summer, the resulting impact can be exponentially devastating," said Hassold, who now serves as director of threat intelligence at Abnormal Security. "In the current environment, when COVID is peaking again, and many schools are switching to temporary remote learning, this attack couldn't have come at a worse time."

  • Log4j flaw: Attackers are targeting Log4Shell vulnerabilities in VMware Horizon servers, says NHS

    The UK's National Health Service (NHS) has issued a warning that hackers are actively targeting Log4j vulnerabilities and is recommending that organisations within the health service apply the necessary updates in order to protect themselves. An advisory by NHS Digital says that an 'unknown threat group' is attempting to exploit a Log4j vulnerability (CVE-2021-44228) in VMware Horizon servers to establish web shells that could be used to distribute malware and ransomware, steal sensitive information, and carry out other malicious attacks.

    It's unclear if the warning has been issued because attacks targeting NHS systems have been detected, or if the advisory has been released as a general precaution because of the ongoing problem of the critical security vulnerability in the Java logging library Apache Log4j, which was disclosed in December. "We are aware of an exploit and are actively monitoring the situation. We will support our partners with the system response to this critical vulnerability and will continue to provide guidance to NHS organisations," an NHS spokesperson told ZDNet.

    The attacks being warned against exploit the Log4Shell vulnerability in the Apache Tomcat service embedded within VMware Horizon. Once a vulnerable server has been identified, the attack uses the Lightweight Directory Access Protocol (LDAP) to execute a malicious Java file that injects a web shell into the VM Blast Secure Gateway service. If successfully exploited, attackers can establish persistence on the affected networks and use this to carry out a number of malicious activities. NHS Digital recommends that organisations known to be running Horizon servers take the appropriate action and apply the necessary patches in order to ensure networks can resist attempted attacks.

    "Affected organisations should review the VMware Horizon section of the VMware security advisory VMSA-2021-0028 and apply the relevant updates or mitigations immediately," said the alert.

    Log4j is used in many forms of enterprise and open-source software, including cloud platforms, web applications, and email services, meaning there's a wide range of software in organisations around the world that could be at risk from attempts to exploit the vulnerability. Cyber criminals were quick to scan for vulnerable systems after the vulnerability was disclosed, and many took the opportunity to launch attacks, including malware and ransomware campaigns. Attackers are still actively exploiting the vulnerability, Microsoft has warned.

    It's feared that the widespread use of Log4j in open-source software – to the extent that organisations may not know it's even part of their ecosystem – could result in the vulnerability being a problem for years to come. The UK's National Cyber Security Centre (NCSC) is among those that have issued advice to organisations on how to manage Log4j vulnerabilities in the long run.
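    For defenders doing a first pass at triage, a common step is simply to search application logs for the JNDI lookup strings that Log4Shell exploitation attempts leave behind. The sketch below is illustrative only: the log path and pattern are assumptions, and attackers routinely obfuscate the lookup string, so a naive match like this catches only the crudest probes.

        # Naive scan of a log file for common Log4Shell (CVE-2021-44228) probe strings.
        # "app.log" is a placeholder path; obfuscated payloads will not match this pattern.
        import re

        SUSPICIOUS = re.compile(r"\$\{jndi:(ldap|ldaps|rmi|dns)://", re.IGNORECASE)

        with open("app.log", encoding="utf-8", errors="replace") as log:
            for lineno, line in enumerate(log, start=1):
                if SUSPICIOUS.search(line):
                    print(f"possible Log4Shell probe at line {lineno}: {line.strip()}")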

  • Google acquisition of Siemplify is a knockout punch for standalone SOAR

    Google announced the acquisition of Siemplify, a security orchestration, automation, and response (SOAR) tool, this past Monday. Google Cloud's acquisition of a SOAR tool in and of itself is not surprising — this has been a missing piece of its Chronicle offering, one that other security analytics platforms have had built in for the past several years.

    What is interesting, however, is the timing of this acquisition, which comes years after the spate of SOAR acquisitions in 2018-2019. Siemplify was one of the few remaining holdouts as a standalone SOAR, as most other independent SOAR vendors were acquired or diversified their portfolios with other products such as threat intelligence platforms (TIPs). In some ways, that makes this a heady acquisition, as it signals the true end of the standalone SOAR. Forrester predicted early on that the SOAR market could not stand on its own, and given that that was five years ago, it's starting to feel like we are belaboring the point. The bottom line is this: The SIEM has irrevocably evolved into the more holistic security analytics platform, incorporating SIEM, SOAR, and SUBA (security user behavior analytics) in a single offering. Just offering a piece of the puzzle — a SOAR, a SIEM, or SUBA — is not enough. Security teams want a unified security analytics platform that they can use through the entire incident response lifecycle, from detection to investigation to the orchestration of response… and beyond.

    SOAR is part of a larger set of SecOps capabilities

    Security teams now have one less standalone SOAR offering to choose from. This is detrimental in some ways, since some practitioners prefer to use a separate, independent SOAR offering. They find the depth of available integrations to be more powerful and prefer a tool, and the vendor behind it, to be entirely focused on improving automation in the SOC. While standalone SOAR is becoming a rarity, SOAR still exists in many forms. There are benefits to having a security analytics platform that tightly integrates SIEM and SOAR. A combined tool can help you implement more seamless automation and streamline the entirety of the incident response lifecycle in one place. It also gives you one less vendor to manage, and data from the latest Forrester Analytics Business Technographics® Security Survey shows that security pros are looking to consolidate security tooling.
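    To make "automation in the SOC" concrete: a SOAR playbook is essentially event-driven glue code that enriches an alert with context and then drives a containment action through other tools' APIs. The sketch below is purely illustrative; every function name and threshold is hypothetical, standing in for the vendor connectors a real SOAR platform would provide.

        # Hypothetical SOAR-style playbook: enrich a phishing alert, then contain it.
        # lookup_reputation, quarantine_host, and open_ticket are stand-ins for the
        # threat-intel, EDR, and ticketing integrations a real SOAR tool ships with.
        from dataclasses import dataclass

        @dataclass
        class Alert:
            host: str
            sender_domain: str
            url: str

        def lookup_reputation(url: str) -> int:
            return 87  # pretend the threat-intel platform scored this URL 87/100

        def quarantine_host(host: str) -> None:
            print(f"[edr] isolating {host}")

        def open_ticket(summary: str) -> None:
            print(f"[itsm] ticket opened: {summary}")

        def phishing_playbook(alert: Alert, risk_threshold: int = 80) -> None:
            score = lookup_reputation(alert.url)
            if score >= risk_threshold:
                quarantine_host(alert.host)  # containment first
                open_ticket(f"Phishing from {alert.sender_domain} (risk {score}) on {alert.host}")
            else:
                open_ticket(f"Low-risk phishing report from {alert.sender_domain} (risk {score})")

        phishing_playbook(Alert(host="laptop-042", sender_domain="example.test", url="http://example.test/login"))

    Whether glue code like this lives in a standalone SOAR or inside a broader analytics platform is exactly the trade-off discussed below.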

    Buying SOAR as a standalone versus as part of a broader platform is the classic best-of-breed versus best-of-suite debate. The tricky part, though, is that SOAR is the supporting act, not the headliner. This means things get a little more complicated — as you will find in the flavors of SOAR below.

    Flavors of SOAR

    Consider the different flavors of SOAR and the risks of each:

    • Integrated security analytics platforms can provide tight integration and a simpler user experience. The main challenge with these vendors is ensuring that they stay cutting-edge — big suites of products tend to lead to complacency on innovation and bloat.

    • Security analytics portfolios try to balance the best of what standalone SOAR offers while providing that integration (but this makes them more likely to fail at both as a jack of all trades). If these vendors struggle with one element of their SOAR offering, it's more likely to be the integrations with other vendors than their own tools.

    • SOAR + TIP + etc. vendors, or those with other additional areas of focus, bank on the fusion between SOAR and their other adjacent offerings. This can be unique and provides them a way of staying independent while still gaining ground in different markets. Combining SOAR and TIP capabilities also helps to operationalize threat intelligence in the SOC.

    • Standalone SOAR can have a great depth of integrations because of its independence and its singular focus on building better automation for the SOC. Even if you choose a standalone SOAR, however, it may not be standalone for much longer.

    This post was written by Analyst Allie Mellen and it originally appeared here.

  • Pandemic cravings: What robots delivered in 2021 by region

    Look, it was a weird year. We were supposed to be emerging from a socio-political cataclysm, supposed to be getting back on track, but in a lot of ways, the drudgery just kept on keeping on. This is why it makes a lot of sense that comfort food ranked high on the list of items folks ordered from shops and restaurants that were then delivered by an emergent class of autonomous delivery robots.

    If you live in a city or dense suburb and don't have delivery robots in your area yet, brace yourself: They're coming. Delivery bots are designed to reduce car traffic and increase efficiency for last-mile urban delivery. They're also pretty amazing data collection devices, which advocates say will help streamline operations and reduce waste, but which will also lead to profound privacy worries in the near future. One of the leading providers in the field, Starship Technologies, recently released its 2021 Robot Wrap Up, "highlighting the most popular and quirky requests and orders" that its fleet of more than 1,000 robots worldwide received in the past year.

    And yeah … comfort food. In the U.S., that meant things like boneless chicken wings, which ranked most popular in the western states, and curly fries, which topped orders in the midwest. Chicken fingers were popular in the east and the south. Interestingly, pizza didn't make the cut on Starship's most-ordered list, which almost certainly says more about the distribution of the technology than market trends. In fact, pizza is having its own automation makeover, and owing to the fact that pizza preparation and handling is distinct from that of many other fast foods, which are fried, autonomous pizza-making and delivery technology seems endemic to that sector rather than generalized (see: 2022 will be the year of the pizza-making robot).

    When you look internationally, the picture starts to change (and the U.S., perhaps not surprisingly, doesn't come out looking very healthy). Among British consumers, the most popular items delivered by Starship's fleet were breakfast staples, including bread, eggs, and bananas. Bananas!

    One of the primary markets for Starship's robots has been college campuses. That's because local regulations are still a patchwork of inconsistent or non-existent guidelines governing the use of robots on public streets. Colleges, however, are contained ecosystems, often with their own governing authorities. It was, of course, a tough year for college kids, who once again saw much of campus life cancelled amid quarantine orders. Perhaps that's why the record for the most orders by a single individual goes to an unidentified person at Arizona State University, who placed 230 orders with Starship during 2021. Go Sun Devils…

    Oregon State students claim the dubious record of having the most late-night orders. (Study sessions, maybe?) And parents of Northern Arizona University students will be proud to know that their students placed the most early-morning orders.

    What does all this data tell us about robot delivery now and in the future? Not much, honestly. Though Starship has the most widely distributed delivery fleet, the footprint is still fairly small. But it is growing, and the volume is now becoming hard to ignore. Starship robots travelled, in aggregate, more than three million miles making deliveries in 2021, which the company proudly boasts is the equivalent of 13 trips to the Moon. The fleet makes 100,000 road crossings every day and, over the lifetime of the company, which was founded in 2014, has completed more than two million commercial deliveries globally.

    Now that 2022 is upon us, with a fresh wave of news about a new variant and unseen dimensions of unrest and chaos, it's a safe bet we'll see another important growth milestone for autonomous delivery. Comfort food, anyone?

  • NoReboot attack fakes iOS phone shutdown to spy on you

    A new technique that fakes iPhone shutdowns to perform surveillance has been published by researchers. 


    Dubbed "NoReboot," ZecOps' proof-of-concept (PoC) attack is described as a persistence method that can circumvent the normal practice of restarting a device to clear malicious activity from memory. Making its debut with an analysis and a public GitHub repository this week, ZecOps said that the NoReboot Trojan simulates a true shutdown while providing cover for the malware to operate — which could include the covert hijacking of microphone and camera capabilities to spy on a handset owner.

    "The user cannot feel a difference between a real shutdown and a 'fake shutdown,'" the researchers say. "There is no user interface or any button feedback until the user turns the phone back 'on'."

    The technique takes over the expected shutdown event by injecting code into three daemons: InCallService, SpringBoard, and backboardd. When an iPhone is turned off, there are physical indicators that this has completed successfully, such as a ring or sound, vibration, and the Apple logo appearing onscreen — but by disabling this "physical feedback," the malware can create the appearance of a shutdown while a live connection to an operator is maintained.
    “When you slide to power off, it is actually a system application /Applications/InCallService.app sending a shutdown signal to SpringBoard, which is a daemon that is responsible for the majority of the UI interaction,” the researchers explained. “We managed to hijack the signal by hooking the Objective-C method -[FBSSystemService shutdownWithOptions:]. Now instead of sending a shutdown signal to SpringBoard, it will notify both SpringBoard and backboardd to trigger the code we injected into them.”
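    Conceptually, this is method hooking: the call that would normally reach the shutdown routine is intercepted and rerouted to attacker-controlled code, which never forwards it. The Python sketch below is only an analogy of that interception pattern; the class and method names are invented, and the real attack requires injecting code into the iOS daemons rather than simple in-process patching.

        # Analogy only: intercept a "shutdown" method so the original routine never runs.
        # SystemService and its method are invented names, not part of iOS.
        class SystemService:
            def shutdown_with_options(self, options: dict) -> None:
                print("powering off with", options)

        def install_hook(service: SystemService) -> None:
            original = service.shutdown_with_options  # kept so a real hook could still forward the call

            def hooked(options: dict) -> None:
                # Instead of powering off, fake the visual cues and stay resident.
                print("pretending to power off; payload keeps running")
                # The genuine shutdown (original) is deliberately never called.

            service.shutdown_with_options = hooked  # reroute future calls to the hook

        svc = SystemService()
        install_hook(svc)
        svc.shutdown_with_options({"animate": True})  # prints the fake message; never powers off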

    The spinning wheel that indicates a shutdown in progress can then be hijacked via backboardd, and SpringBoard can be forced to exit and blocked from restarting. ZecOps said that by taking over SpringBoard, a target iPhone can "look and feel" like it is not turned on, which is the "perfect disguise for the purpose of mimicking a fake power off."

    Users, however, still have the option of a forced restart. This is where further tampering with backboardd comes in — by monitoring user input, including how long buttons are held, a reboot can be simulated just before a true restart takes place, such as by displaying the Apple logo early.

    "Stopping users from manually restarting an infected device by making them believe they have successfully done so is a notable malware persistence technique," Malwarebytes commented. "On top of that, human deception is involved: Just when you thought it's gone, it still pretty much there."

    As the technique focuses on tricking users rather than exploiting vulnerabilities or bugs in the iOS platform, this is not something that can be fixed with a patch. ZecOps says that the NoReboot method impacts all versions of iOS, and only hardware-based indicators could help in detecting this form of attack. ZecOps has published a video demonstration of the technique.


  • Chinese scientist pleads guilty to stealing US agricultural tech

    A Chinese national has pleaded guilty to the theft of agricultural secrets from the US, intended to reach the hands of scientists in China.

    Xiang Haitao, formerly of Chesterfield, Missouri, held posts at Monsanto and its subsidiary, The Climate Corporation, between 2008 and 2017, the US Department of Justice (DoJ) said on Thursday. Monsanto and The Climate Corporation developed an online platform for farmers to manage field and yield information in a bid to improve land productivity. One aspect of this technology was an algorithm called the Nutrient Optimizer, which US prosecutors say was considered "a valuable trade secret and their intellectual property."

    According to the DoJ, the former employee stole this information "for the purpose of benefitting a foreign government, namely the People's Republic of China." In June 2017, Xiang left these companies and boarded a flight back to China a day later. The 44-year-old drew the attention of airport officials, who conducted a search – but it was not until later that investigators found copies of the Nutrient Optimizer stored on his electronic devices. Xiang was therefore still able to leave the United States, and he began working for the Chinese Academy of Sciences' Institute of Soil Science.

    However, during a return trip to the US, Xiang was arrested and charged. The Chinese national pleaded guilty to the charge of conspiracy to commit economic espionage and faces up to 15 years behind bars, a maximum of three years of supervised release, and a fine of up to $5 million.

    Sentencing is due to take place on April 7. "Mr. Xiang used his insider status at a major international company to steal valuable trade secrets for use in his native China," commented US Attorney Sayler Fleming for the Eastern District of Missouri. "We cannot allow US citizens or foreign nationals to hand sensitive business information over to competitors in other countries, and we will continue our vigorous criminal enforcement of economic espionage and trade secret laws."

    Monsanto, meanwhile, pleaded guilty in December to 30 'environmental crimes,' including the illegal use of a banned pesticide in Hawaii. The plea agreement includes a fine of $12 million. Bayer closed its acquisition of Monsanto in 2018 and is now facing a potential class-action lawsuit from investors, with a demand of $2.5 billion, over claims of failed due diligence.