More stories

  • Robot usage is soaring during pandemic

    As Q2 data begins to emerge, we’re seeing a notable uptick in robotics usage. The trend seems widespread, but it is nowhere more apparent than with cleaning robots, which have become vital tools in a number of sectors during the pandemic.
    The pandemic has created ideal conditions for automation adoption. The robotics sector has been maturing, with a number of industries like logistics, retail, delivery, and inspection already partially automated. Long-term plans to move toward fuller automation stacks have been easy to fast-track thanks to a new focus on sanitation and consumers’ wariness of unnecessary contact and handling in the supply chain. Temporary operations slowdowns and shutdowns have also provided a window for companies to retool their operations.
    “As companies reconfigure their workplaces and factories,” the Financial Times recently reported, “those with the necessary financial resources are likely to go ahead with long-planned investment into new machinery and more automated ways of working.”
    A company called Brain Corp, which creates core technology for autonomous robots, such as floor scrubbers, has released internal data tracking the rise in fleet utilization. According to the company, median robotic usage among retailers in US locations rose by 13.8% during Q1 of 2020, compared to the same period last year, and jumped by 24% during Q2 of 2020.
    “We have seen a sharp increase in usage and adoption over the last quarter, as grocers and retailers try to adjust to a constant state of clean due to the health crisis,” explains Dr. Eugene Izhikevich, CEO at Brain Corp. “We are grateful to be working with our manufacturing partners to provide value to retailers during these challenging times. We are also proud to have broken through the 2 million autonomous hours mark, which reflects the unmatched performance and safety of our technology.”

    The pandemic may also be increasing the move toward AI-related technologies like robotic process automation and software automation.
    “When things go back to normal, companies will see the overall benefit of implementing robotics very quickly, and they will take more of a serious measure to expand their RPA implementation,” Rinat Malik, former RPA implementation specialist at BMW Financial Services, told the PEX Network in a recent Q&A. “Within a month or two of lockdown ending there will be a big boost of RPA.”

  • AutoX gets coveted California autonomous driving permit

    It would be tough to name another self-driving car company developing technology or moving into new markets at the speed of AutoX. The company’s latest achievement is a coveted Driverless Permit granted by the California Department of Motor Vehicles (DMV), making AutoX just the third company to receive one in the state, according to the DMV.

    Essentially, the permit allows AutoX to test its cars without a safety driver on public roads within a confined testing zone. The permit also allows for testing with passengers, and AutoX will focus its operations on surface roads in San Jose with speed limits of up to 45 mph.
    I’ve written that AutoX is the unlikely little company that could. Since coming out of stealth in 2018 with an announcement that it would begin delivering groceries via its L4 autonomous vehicles, then a major milestone, the company has battled head-to-head with much larger rival Waymo for a series of L4 autonomy firsts.
    Earlier this year, AutoX became the first fully autonomous robotaxi provider for the Asian market.
    Bootstrapped through its early years, and founded by a Princeton professor named Jianxiong Xiao, AutoX has a founding mission to democratize autonomy, a lofty goal that the company nonetheless seems to be on track to fulfill. Jianxiong’s value proposition in a crowded market has been an evolving hardware stack that in the early days consisted of nothing more than visible-spectrum cameras paired with AI, which he argued would be enough for safe L4 autonomous driving. His company operated in stealth for most of its tenure, although a California DMV filing to test self-driving vehicles put him on the industry’s radar.

    The latest AutoX deployments use lidar. Robosense and DJI supply the hardware for AutoX’s laser configuration, which centers on Robosense’s automotive-grade, MEMS-based Lidar M1. DJI, a world-leading manufacturer of commercial unmanned aerial vehicles, has expanded into the lidar market.
    This is far from the company’s only foray onto California roads, where it has been driving since early 2017, when it received permission to test its cars with safety drivers. Working with the California Public Utilities Commission, AutoX obtained the state’s second Autonomous Vehicle Pilot Permit and launched its xTaxi pilot programs to the public. AutoX currently operates robotaxi fleets in two Chinese megacities, Shenzhen and Shanghai. The company says the extremely diverse data and experience gained operating in challenging Chinese urban centers have given it a valuable edge in refining its technology faster as it expands globally.

  • Shiny objects foil robots, but RGB-D holds the key

    Who doesn’t love shiny things? Well … robots for one. Same goes for transparent objects.
    At least, that’s long been the case. Machine vision has stumbled when it comes to shiny or reflective surfaces, and that’s limited use cases for automation even as advances in the field push robots into more and more new spaces.
    Now, researchers at robotics powerhouse Carnegie Mellon report success with a new technique to identify and grasp objects with troublesome surfaces. Rather than rely on expensive new sensor technology or intensive modeling and training via AI, the system instead goes back to basics, relying on a simple color camera.
    To understand why, it’s necessary to understand how robots currently sense objects prior to grasping. Cutting-edge computer vision systems for pick-and-place applications often rely on infrared cameras, which are great for sensing and precisely measuring the depth of an object — useful data for a robot devising a grasping strategy — but fall short when it comes to visual quirks like transparency. Infrared light passes right through clear objects and scatters unpredictably off reflective surfaces.
    Color cameras, however, can detect both. Just look at any color photo and you’ll clearly discern a glass on a table or a shiny metal railing, each with lots of rich detail. That was the vital clue. The CMU researchers built on this observation and developed a color camera system capable of recognizing shapes using color and, crucially, sensing transparent or reflective surfaces. 
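    To make that concrete, here is a minimal sketch of the general approach, as an illustration rather than CMU’s actual pipeline: keep infrared depth wherever the sensor returns valid readings, and fall back on depth estimated from the color image where it does not. The rgb_depth_estimate array below stands in for the output of a hypothetical learned color-to-depth model.

```python
import numpy as np

def fuse_depth(ir_depth, rgb_depth_estimate, invalid_value=0.0):
    """Fill holes in an infrared depth map with depth predicted from the
    color image. Transparent and reflective surfaces typically show up
    as invalid (zero) readings in IR depth."""
    fused = ir_depth.copy()
    holes = ir_depth == invalid_value          # pixels the IR sensor missed
    fused[holes] = rgb_depth_estimate[holes]   # use the color-based estimate there
    return fused

# Toy example: a 3x3 scene where the center pixel is a glass surface.
ir_depth = np.array([[1.0, 1.0, 1.0],
                     [1.0, 0.0, 1.0],   # 0.0 = no IR return (transparent object)
                     [1.0, 1.0, 1.0]])
rgb_depth_estimate = np.full((3, 3), 0.9)  # stand-in for a learned RGB-to-depth model
print(fuse_depth(ir_depth, rgb_depth_estimate))
```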

    “We do sometimes miss,” David Held, an assistant professor in CMU’s Robotics Institute, acknowledged, “but for the most part it did a pretty good job, much better than any previous system for grasping transparent or reflective objects.”
    That the solution is low-cost and the sensors are battle-tested gives it a tremendous leg up when it comes to potential for adoption. The researchers point out that other attempts at robotic grasping of transparent objects have relied on training systems through trial and error or on expensive human labeling of objects.
    In the end, it’s not new sensors but new strategies for using them that may give robots the powers they need to function in everyday life.

  • Robots with xenon ray blast viruses, bacteria

    The pool in our building in Los Angeles is open, but only by reservation and under strictly enforced social distancing guidelines. Between visitors, an attendant armed with a bottle of disinfecting spray retraces guests’ steps, spritzing any surface they might have touched.
    Clean is the new norm. Hospital-level sanitation is now expected in spaces like retail, restaurants, and, yup, pools, so it’s no surprise that the automation sector is smelling opportunity. I thought of the poolside attendant — a very nice guy who seems to do a wonderful job, for what it’s worth — when I got briefed on a new disinfecting robot hitting the market, this one from Fetch Robotics, which offers flexible autonomous mobile robots traditionally for logistics and inventory applications. 
    Fetch’s new disinfecting robot joins a growing field that includes companies like UVD Robots. It’s a collaboration with automated packaging solutions supplier Piedmont National, and it targets a niche both companies are familiar with: high-traffic warehouse facilities, as well as retail stores, office spaces, and hospital rooms. The robot, named the SmartGuardUV, uses pulsed xenon UV lamp technology and adds advanced disinfecting reporting courtesy of Piedmont’s 4Site cloud analytics platform. The result is a completely autonomous, broad-spectrum UV disinfection robot that purportedly eliminates up to 99.9% of viruses and bacteria with UV-C, UV-B, and UV-A, and reports on the results of the disinfection.
    “The facilities best prepared to protect workers and customers from COVID-19 are taking extreme precautions when it comes to sanitization, and are placing their trust in automated solutions that can be deployed at any time,” said Fetch Robotics Chief Product Officer Stefan Nusser. “Companies of every size recognize the need for reliable sanitization procedures, and SmartGuardUV provides reliable protection at every hour of the day, without taking employees away from their already existing job responsibilities.” 
    This is another example of the surge in automation we’re seeing as the pandemic grinds on. Robots are one of the clear winners of this losing situation, both in how they’re received by the public and in continuing strong interest from investors and technology officers. 

    Using Fetch’s cloud-based robotics architecture, SmartGuardUV can autonomously map and navigate throughout a facility, enter a desired space, activate the pulsed UV light for targeted, comprehensive disinfection, reposition in the space for maximum coverage, disinfect, and then move to the next space without any human intervention. According to the company, the robot’s built-in motion sensor for automatic shut off prevents unnecessary UV exposure, and facility managers can customize the AMR’s schedules and disinfecting paths, even remotely, as facility needs change.
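    As a rough illustration of that routine, here is a hypothetical control loop; this is a sketch, not Fetch’s software, and every function and class name in it is invented:

```python
from dataclasses import dataclass

@dataclass
class Space:
    name: str
    waypoints: list  # positions chosen for maximum UV coverage

# Stubs standing in for the robot's real navigation and lamp interfaces.
def navigate_to(space): print(f"navigating to {space.name}")
def move_to(waypoint): print(f"repositioning to {waypoint}")
def lamp_pulse(): print("pulsing UV lamp")
def lamp_off(): print("lamp off")
def motion_detected() -> bool: return False  # built-in motion sensor

def disinfect_facility(spaces):
    """Visit each space, reposition for coverage, and pulse the UV lamp,
    shutting the lamp off immediately if a person is detected."""
    for space in spaces:
        navigate_to(space)
        for waypoint in space.waypoints:
            if motion_detected():
                lamp_off()    # automatic shutoff prevents unnecessary UV exposure
                return
            move_to(waypoint)
            lamp_pulse()      # targeted disinfection at this position

disinfect_facility([Space("stockroom", ["door", "shelving", "counter"])])
```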
    “Legacy UV fluorescent-based solutions cannot target disinfection and can only operate for 2 to 2.5 hours on a single charge as the UV lamps have to stay illuminated for a significant amount of time draining battery life,” said John Garlock, CSO of Piedmont National. “The PURO light engines on the SmartGuardUV AMR precisely target UV rays at high touch surfaces so are able to operate for 8 to 10 hours on a single charge. This combination of targeted disinfection within a space and longer operational time results in an autonomous disinfection robot that is much more effective than autonomous robots that use legacy mercury-based UV lamps.” 
    Fetch previously released the Breezy One, a chemical disinfection AMR designed for large spaces over 100,000 square feet, signaling a strong push into the disinfection space.

  • Big backing to pair doctors with AI-assist technology

    Back in May, I asked whether AI should assist in surgical decision-making. Following a recent venture round, investors seem to be voting yes as a company called Activ Surgical announces a $15 million raise. 
    Activ rolled out an AI/ML platform called ActivEdge, which is designed to provide real-time intelligence and visualization to surgeons. The platform and its associated products will be initially available in the U.S. market, with plans to commercialize globally in 2021. 
    The goal of integrating AI decision-making with human surgical capabilities is to reduce medical errors. Preventable surgical errors cost U.S. health care more than $36 billion and cause more than 400,000 deaths every year. Developers like Activ are trying to improve on that record by coupling the best of human decision-making with AI/ML tools designed to assist surgeons.
    Activ’s platform is hardware-agnostic and integrates computer vision, artificial intelligence, and robotics to enable existing surgical systems like scopes or robots to visualize and track tissue better than humans could. 
    “We believe Activ Surgical’s platform and technology are poised to transform the surgical space by enabling existing surgical systems and robots to visualize what surgeons can’t and guide them, thus making surgeries safer, reducing errors and improving patient outcomes,” says Ameena El-Bibany, principal, ARTIS Ventures, which led the recent round. “They have achieved major milestones in just a few years, advancing their technology from early proof of concept to their first market-ready product to be launched with early access partners in 2021. We look forward to partnering with the company to bring impact to both surgeons and patients.”

    To date, Activ Surgical has raised a total of $32 million. It will use proceeds from the latest financing round to accelerate U.S. commercialization and European expansion of ActivEdge.

  • A robot walks into White Castle …

    Did you know White Castle was the country’s first fast-food hamburger chain, getting its start in 1921? The quick-serve restaurant has stayed pretty true to its roots, which makes it an unlikely partner for a company that makes robots.
    Nevertheless, Miso Robotics, which makes Flippy the burger-flipping robot, has announced a planned pilot with White Castle to test the adoption of AI and robotics in legacy fast food.
    It’s a propitious time to be selling automation in the restaurant industry. Coronavirus shutdowns and in-restaurant restrictions seem to be making the public more receptive to automation in general. The restaurant industry has been particularly hard-hit during the pandemic, and that’s forcing a hard look at the ROI arguments put forth by automation firms.
    But Flippy’s big selling point right now is that it can reduce human contact with food during the cooking process. It also promises quality-control benefits, since the system relies on sensors and intelligent monitoring, and anticipates kitchen needs to keep food temperatures consistent. White Castle will be bringing the newest version of Flippy, Robot-on-a-Rail (ROAR), into some of its kitchens for testing and future integration. Among the metrics to be measured are production speeds, labor allocation, and health and safety benefits.
    “White Castle is an industry innovator, and we take a great amount of pride in our history – never forgetting about the future ahead,” says Lisa Ingram, fourth-generation family leader and CEO of White Castle. “With 100 years of quick service success, the time has never been more perfect to envision what the next century of White Castle and the restaurant industry looks like. Miso Robotics understood where we could improve and stay true to White Castle’s brand of taste, innovation and best-in-class dining. A great customer and employee experience is in our DNA, and we are thrilled to bring the future into our kitchen with solutions that will transform the industry and make the White Castle experience all that it can be for generations to come.”

    The Flippy ROAR will deploy later this fall.

  • How to choose a robot for your company

    Moxi by Diligent Robotics.
    Robots are spreading to a variety of industries and sales in the sector are booming. The International Federation of Robotics cites worldwide robotics installations surpassing 400,000 in 2018, the latest year for which data has been released. That figure represents an average of 23% growth each year from 2013. 
    Robots are now making meaningful contributions in wildly disparate industries like fashion, food, small components manufacturing, energy, and construction, adding to increasing penetration of automation technologies into already robotics-friendly sectors like logistics and durable goods manufacturing.
    This proliferation in use cases has come with an explosion in vendors and automation technology categories, and it can be tricky for CTOs and other stakeholders to know where to start. For a general overview of what kinds of robots are becoming commonplace in the enterprise, a great first step is our executive guide on the subject: Robotics in Business: Everything Humans Need To Know.
    But there’s another step in the process: identifying a suitable vendor. Lux Research recently released a framework for choosing robotics vendors that I found insightful. According to Josh Kern, a lead analyst at the firm, “Companies should follow a structured framework to identify a robotics vendor. If an off-the-shelf solution doesn’t exist for a certain use case, companies should evaluate vendors on their technical capabilities and customization options.”
    Those criteria are diverse, but in general they come down to issues defined by the task, such as how precise a robot needs to be (think biochemistry compared to welding) and how accessible the interface is to someone who doesn’t necessarily have an engineering degree. But another big determinant is customization: will an off-the-shelf platform work, or does your job or industry require a purpose-built machine?

    Why consider a robotic solution?
    There are lots of reasons a company might entertain automating processes with robots. According to Kern, the main reason is a labor shortage. Prior to COVID-19-related slowdowns, a competitive labor landscape and rising costs of living in many countries around the globe made hiring tough for skilled and unskilled positions alike. Automation, which often promises ROI efficiencies over time, particularly when it comes to repeatable tasks, is an attractive solution.  
    “Robots can save money over time, not just by directly eliminating human labor, but by cutting out worker training and turnover,” according to the Lux report for which Kern served as lead. “Most companies turn to automation and robotic solutions to deal with labor shortages, which is common in industries with repetitive tasks that have a high employee turnover rate. Companies also frequently use robots to automate dangerous tasks, keeping their employees out of harm’s way.”
    Post-COVID-19, there are also considerations like sanitation and worker volatility. As I’ve written, the perception of automation is changing almost overnight. Where robots were once, very recently, associated primarily with lost jobs, there’s been a new spin in the industry to tout automation solutions as commonsense in a world where workers are risking infection when they show up at physical locations. 
    A framework for choosing a vendor
    What I like about the Lux Research framework is that it splits the task into three primary considerations: what the robot does, how sophisticated it needs to be, and how much customization is required.
    The first branch on the decision tree deals with what the robot does. In Lux’s example, the functions are split into moving, picking, assembling, and inspecting. Those are pretty good general buckets, although of course it’s possible to get more granular. Using Lux’s chart, a CTO or operations lead would start by identifying the task they’re thinking of automating and choosing the correct function category.
    Next comes the autonomy maturity level, which you might think of as the sophistication of the installation. For picking and placing applications, Lux envisions Level 3 autonomy maturity as follows: “Collaborative robot arms are used to pick an object of a given size/shape/material with a fully predictable pickup location.” Move up to Level 4 and you jump in sophistication: “Robot arms can pick various objects never seen before and/or in an unexpected pickup location or orientation.” And finally, at Level 5, “Robot arms can pick objects in extremely unstructured environments; some may be able to intentionally manipulate the orientation of objects.”

    Lux’s framework, conceived literally as a chart, funnels prospective adopters toward a pre-populated list of firms, which is a good start but certainly not comprehensive. Even so, conceptualizing the decision-making process this way is extremely useful when conducting a larger search for a prospective automation vendor.
    The final variable is the level of customization involved, which generally correlates with the size of the investment in both time and money. Robotically mature industries like logistics will have a significant number of off-the-shelf platforms generally available. By contrast, industries like chemicals and energy will need to look to specialty contractors and maybe even research organizations, which may be willing to partner on automation challenges that have groundbreaking commercial potential.
    “Research organizations and robotics contractors offer the highest precision and most autonomous robots,” says Kern, whereas “off-the-shelf platforms can be deployed quickly, but are generally only used for a single use case.”
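    To show how such a decision flow might be encoded, here is a toy helper. The function buckets and autonomy levels come from the framework as described above; the vendor-category outputs are illustrative stand-ins, not Lux’s actual chart:

```python
# Function buckets and autonomy levels per the framework described above;
# the returned categories are illustrative stand-ins, not Lux's chart.
FUNCTIONS = {"moving", "picking", "assembling", "inspecting"}

def vendor_category(function: str, autonomy_level: int, off_the_shelf_ok: bool) -> str:
    """Map the three considerations (task, sophistication, customization)
    to a broad vendor category."""
    if function not in FUNCTIONS:
        raise ValueError(f"unknown function: {function}")
    if not off_the_shelf_ok or autonomy_level >= 5:
        # Highly custom or extremely unstructured tasks point to specialists.
        return "specialty contractor or research organization"
    if autonomy_level == 4:
        return "off-the-shelf platform with vendor customization"
    return "off-the-shelf platform"

print(vendor_category("picking", 4, off_the_shelf_ok=True))
```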

  • 3D capture with normal smartphone cameras

    Fans of professional sports will be familiar with so-called 4D visualizations, which offer multiple vantages of a single scene. A receiver catches a pass, but was he inbounds? The live producer and announcers will run a replay, shifting the POV so viewers have the sensation of zooming around to the front of the action to have a look.
    There’s a whole lot of technology behind that seemingly simple maneuver, including a careful orchestration of fixed-point cameras that are positioned to enable near real-time video stitching. 
    What if video from smartphone cameras aimed at a scene could be employed the same way?
    That’s the question driving researchers at Carnegie Mellon University’s Robotics Institute, who have now developed a way to combine iPhone videos so that viewers can watch an event from various angles. The virtualization technology has clear applications in mixed reality, including editing out objects that obscure line of sight or adding or deleting people from a scene.
    “We are only limited by the number of cameras,” explains Aayush Bansal, a Ph.D. student in CMU’s Robotics Institute. The CMU technique has been demonstrated with as many as 15 camera feeds at a time.

    The demonstration points to a democratization of virtualized reality, which is currently the purview of expensive studios and live events that employ dozens of cameras coordinated to capture every angle. But just as the venerable disposable camera made everyone a wedding photographer, smartphones may soon be employed to crowdsource so-called 4D visualizations of gatherings. Given that pulling out your phone and taking video at events like weddings and parties is already commonplace, the technology has the benefit of piggybacking off conditioned behavior.
    “The point of using iPhones was to show that anyone can use this system,” Bansal said. “The world is our studio.”
    The challenge for the CMU researchers was to use unpredictably aimed videos to stitch together 3D scenes, which had never been done. The team used convolutional neural nets, deep-learning models well suited to robust visual data analysis. The networks identify common visual data across the multiple feeds and work backwards to stitch the video together.
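    To give a flavor of that first step, here is a minimal sketch of scoring common content between two feeds with an off-the-shelf CNN feature extractor; this illustrates the matching idea only and is not CMU’s actual network or stitching pipeline:

```python
import torch
import torchvision.models as models
import torchvision.transforms as T

# Off-the-shelf pretrained CNN used purely as a feature extractor.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # drop the classifier head; keep 512-d features
backbone.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(frame):
    """Map one video frame (a PIL image) to a 512-d feature vector."""
    return backbone(preprocess(frame).unsqueeze(0)).squeeze(0)

def match_score(frame_a, frame_b):
    """Cosine similarity between two frames' features; higher scores
    suggest the frames capture common visual content across feeds."""
    return torch.nn.functional.cosine_similarity(
        embed(frame_a), embed(frame_b), dim=0).item()

# Usage (assuming two frames loaded with PIL, e.g. Image.open("feed1.jpg")):
# print(match_score(frame_from_feed_1, frame_from_feed_2))
```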
    The National Science Foundation, the Office of Naval Research, and Qualcomm supported the research, which was conducted by Bansal and faculty members of the CMU Robotics Institute and presented at the Computer Vision and Pattern Recognition conference last month.
    Fittingly, that conference was held virtually.