More stories


    Hey drone industry: Quit griping, it's time to work with the FAA

    American Robotics Scout drone ready for deployment.
    American Robotics

    The commercial drone industry is expected to grow at a compound annual growth rate of 57% from 2021 to 2028, driven by the need for better data and analytics that only drones provide. For drones to reach their full potential, drone developers must work with the Federal Aviation Administration (FAA) to manufacture devices that can safely and successfully operate under Aviation Rulemaking Committee (ARC) and FAA guidelines. That’s the clarion call of American Robotics, the first company approved by the FAA to operate automated drones without humans on-site, which was recently selected to participate on the FAA’s Unmanned Aircraft Systems (UAS) Beyond-Visual-Line-of-Sight (BVLOS) Aviation Rulemaking Committee to advance BVLOS drone operations. In the eyes of co-founder and CEO Reese Mozer, the FAA’s approach to BVLOS flight for commercial drones will dictate the state of the drone industry for years to come, and it’s up to the industry to do all it can to work in lockstep with the regulator. I sat down with Mozer to discuss why the drone sector needs to work with the FAA and what that means for the future of drone delivery and other BVLOS applications.

    GN: What’s the FAA’s current policy on BVLOS, and what are the FAA’s primary concerns when it comes to BVLOS?

    Reese Mozer: The FAA’s mission and responsibility is the safety of the National Airspace System (NAS), including people and property both in the skies and on the ground. Prior to American Robotics’ 2021 waiver and exemption, no company had demonstrated to the FAA safe operation without human visual observers (VOs) on-site. The reasons for this are numerous and complex, both technological and cultural. The short explanation is that humans have been a constant presence during flight for the past hundred years and, ultimately, the primary failsafe if anything goes wrong.
    Shifting more of this responsibility to software and hardware required a series of technology innovations to be developed, tested, and adequately communicated to regulators at the FAA. For the past five years, American Robotics has been developing a suite of proprietary technologies explicitly designed to produce the industry-leading solution for safe automated flight. We designed these technologies in concert with a low-risk Concept of Operations (CONOPS) and conducted extensive testing and evaluation as part of a long-term regulatory strategy to prove our system’s safety. For example, the Scout System incorporates multiple novel risk mitigations, including proprietary detect-and-avoid (DAA) sensors and algorithms, advanced automated system diagnostics and failsafes, automated pre-flight checks, and automated flight path management. If anything were to deviate from the expected, safe operation plan, our drone systems take immediate corrective action, such as altering flight course and returning to the base station. By developing a layered, redundant system of safety that includes these proprietary technical and operational risk mitigations, we have proven that our drone-based aerial intelligence platform operates safely in the NAS, even when it conducts flights beyond visual line of sight (BVLOS) of both the operator and any humans on-site.
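The layered failsafe logic Mozer describes (deviation detection triggering corrective action such as returning to base) can be sketched in miniature. This is purely illustrative: the thresholds, field names, and actions below are hypothetical assumptions, not American Robotics' actual system.

```python
from dataclasses import dataclass

@dataclass
class Telemetry:
    cross_track_error_m: float   # deviation from the planned flight path, meters
    battery_pct: float           # remaining battery, percent
    intruder_detected: bool      # flag from a detect-and-avoid (DAA) sensor

def failsafe_action(t: Telemetry) -> str:
    """Return the corrective action for one telemetry sample.

    Checks are layered: DAA conflicts first, then energy, then
    flight-path deviation; otherwise the mission continues.
    """
    if t.intruder_detected:
        return "alter_course"       # move away from conflicting traffic
    if t.battery_pct < 20.0:
        return "return_to_base"     # energy failsafe
    if t.cross_track_error_m > 5.0:
        return "return_to_base"     # flight-path deviation failsafe
    return "continue_mission"
```

For example, a nominal sample (`Telemetry(0.5, 80.0, False)`) continues the mission, while a large path deviation or a low battery triggers a return to the base station.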

    GN: How do you hope the rulemaking will change through the BVLOS ARC?

    Reese Mozer: Our hope is that the recommendations from the BVLOS ARC will encourage the FAA to more expeditiously authorize expanded BVLOS operations on a national scale, allowing industry to meet the significant demand for automated drone-based inspection. American Robotics and others in the industry have successfully demonstrated that drones can be operated to a very high threshold of safety in the national airspace and can perform missions that are vital to society without endangering other users of the airspace or the general public. Existing regulatory pathways such as waivers and exemptions typically lack the efficiency and speed desired by industry and are often cost-prohibitive for many smaller companies to obtain. Similarly, existing Type Certification (TC) processes were designed to ensure the safety of manned aircraft operations, and applying them to drones is generally not effective given the many differences in size, technology, and risk between drones and manned aircraft. Within the BVLOS ARC, the drone industry has proposed streamlined means of certifying drone technology and assessing the real-world risks that BVLOS operations of drones pose. New rulemaking based on these proposals would enable expanded BVLOS operations in a safe and scalable manner while ensuring the safety of all operators within the NAS. It should be noted, however, that the FAA’s stated timeline for implementing such rulemaking is 3-5 years. Thus, the existing path of waivers and exemptions taken by American Robotics is likely to persist until then.

    GN: Why will BVLOS take drones to a new dimension? Why is this such a critical milestone?

    Reese Mozer: “True” BVLOS, i.e., operations where neither pilots nor visual observers (VOs) are required, is critical to unlocking the full potential of the commercial drone market.
    The economics of paying for a VO or pilot on the ground to continuously monitor a drone flight simply do not make sense and have significantly hampered commercial users’ ability to justify building out a drone program. It’s important to remember that flying a drone once or twice a year has little to no value for the vast majority of commercial use cases. Typically, to see the benefits of drone-based data collection, flights need to be conducted multiple times per day, every day, indefinitely. This frequency allows drones to cover enough area, survey at the proper resolution, and detect problems when they occur. Today, the average hourly rate for hiring a drone pilot in the U.S. is about $150 and can reach $500/hour. Thus, overcoming the human costs associated with commercial drone use has been one of the biggest hindrances to the market and has impacted the viability and implementation of this technology on a mass scale. American Robotics’ leadership in expanding automated BVLOS operations represents a critical inflection point in the aviation, drone, and data worlds. As the first company approved by the FAA to operate in this manner, we have set the stage for the next generation of commercial drones. Autonomous operations enable the real-time digitization of physical assets and allow users in industrial markets to transform their monitoring, inspection, and maintenance operations. This technology represents the key to a new generation of industrial data that will bring about increased cost-efficiency, operational safety, and environmental sustainability.

    GN: What sectors is automated BVLOS particularly important to? Can you give some examples of how those sectors plan to use BVLOS?

    Reese Mozer: Automated operations, which are enabled by “true” BVLOS authorization, are required for 90% of the commercial drone market.
    An easy way to think about it is that any use case requiring frequent inspection of the same area likely requires automated BVLOS to be practical. Example sectors include Oil & Gas, Bulk Materials & Mining, Rail, and Agriculture. Each has significant demands in terms of image resolution and frequency that can only be met by automated BVLOS flight.

    Oil & Gas

    There are over 900,000 well pads and 500,000 miles of pipeline in the United States. Every inch of those assets needs to be continually monitored for defects and leaks to properly assure safety and reduce GHG emissions. Automated BVLOS operation is critical to enabling drones to perform these tasks properly on a regular basis.

    Stockpiles & Mining

    Current stockpile and mining inspections involve teams that manually estimate volumetrics, either with hand-held cameras or the naked eye, typically resulting in low-accuracy data. These incorrect measurements put a strain on operations and drastically reduce visibility and control over the bulk materials supply chain. With automated BVLOS, we can generate hyper-accurate volumetric analysis of stockpiles and mines every day, reducing the likelihood of global supply chain disruptions across a variety of industries.

    Rail

    Over 140,000 miles of rail track in the United States require regular monitoring and inspection to assure safety. Common track defects include tie skew, tie clearance, and track impediments. Automated BVLOS allows for the scalable implementation of drones across the nation’s rail infrastructure, helping to reduce the odds of a train derailment and increasing the uptime of train systems.

    Agriculture

    To sustain the growing population, the world needs to produce 70% more food by 2050. At the same time, the average age of a farmer in the United States is 59 and rising, with fewer new entrants to the agricultural labor force each year. The result of these socio-economic factors is a need for increased technology and automation on the farm.
    There are over 900 million acres of farmland in the United States, and automated BVLOS operation is the only scalable way to routinely monitor these acres by drone.

    GN: Have developers been eager or reticent to work with the FAA? What should manufacturers be doing to help pave the way?

    Reese Mozer: The relationship between industry and the FAA has been evolving for the past 10 years. Early on, each party was very foreign to the other, with the drone industry born from Silicon Valley-esque hacker roots and the FAA acting as the 100-year arbiter of manned flight. As a result, many developers either weren’t eager or didn’t understand how to work with the FAA in the early years of the drone industry. Recently, there have been significant and promising changes, but some still persist in that hesitant or unfamiliar mindset. I think an important fact for manufacturers to remember is that the FAA’s job is not to innovate, and it never will be. Their responsibility is to evaluate aviation technologies to assure the safety of the national airspace. If the industry wants to do something new, it is on our shoulders to develop, test, and prove that technology to our regulators.
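The pilot-cost economics Mozer cites can be made concrete with back-of-the-envelope arithmetic. The $150/hour rate comes from the interview; the flight cadence and flight length below are hypothetical assumptions chosen only to illustrate the scale of the cost.

```python
# Illustrative cost model for keeping a paid pilot/VO on-site.
# Only HOURLY_PILOT_RATE comes from the article; the rest are assumptions.
HOURLY_PILOT_RATE = 150   # USD, article's average U.S. drone pilot rate
FLIGHTS_PER_DAY = 3       # assumed cadence needed for useful data collection
HOURS_PER_FLIGHT = 1      # assumed duration billed per flight
DAYS_PER_YEAR = 365       # "multiple times per day, every day, indefinitely"

def annual_pilot_cost(sites: int) -> int:
    """Yearly cost of staffing a pilot/VO at each monitored site."""
    return (HOURLY_PILOT_RATE * HOURS_PER_FLIGHT
            * FLIGHTS_PER_DAY * DAYS_PER_YEAR * sites)
```

Under these assumptions a single site costs $164,250 a year in pilot time alone, and a ten-site program over $1.6 million, which is the hindrance to mass-scale adoption the interview describes.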


    Are practical personal exoskeletons finally in reach?

    Wandercraft
    A company that develops self-balancing personal and therapeutic exoskeletons has just closed $45 million in equity financing. The series C breathes fresh life into a technology class that’s been much hyped but has struggled to break out of niche markets.

    Founded in 2012, Wandercraft, based in Paris, has been around for a while, but it’s less known in the U.S. than rival Ekso Bionics, long the marquee player in the space. Like Ekso, Wandercraft was founded to create a mobility device to supplant wheelchairs for people with mobility issues. Also like Ekso, Wandercraft narrowed its focus with its first commercial product and is targeting the therapeutic healthcare market (Ekso has since branched into industrial markets, including auto manufacturing). “With the support of patients, medical professionals and the DeepTech community, Wandercraft’s team has created a unique technology that improves rehabilitation care and will soon enable people in wheelchairs to regain autonomy and improve their everyday health,” says Matthieu Masselin, CEO of Wandercraft. Wandercraft’s autonomous walking exoskeleton, the first version of which is called Atalante, was commercialized in 2019 and is used by rehabilitation and neurological hospitals in Europe and North America. Atalante provides innovative care for many patients based on realistic, hands-free, crutch-free locomotion. However, the company has a more ambitious personal exoskeleton in the works.

    The focus on the rehab market out of the gate reflects a hard reality for exoskeleton tech. While the vision of providing a robust mobility device to those living with mobility issues is inspiring, the fact remains that wheelchairs are an effective, inexpensive, and widely distributed solution to a broad array of mobility issues. By contrast, robotic suits are comparatively expensive and can’t yet match the functionality wheelchairs offer, particularly in accessibility-conscious regions.
    That makes the market for a mobility-first device elusive, which explains the pivot to therapeutics, where exoskeletons can get wheelchair-bound patients up and walking around, with tremendous physiological and recovery benefits. None of which is to say, of course, that the therapeutic market can’t serve as an important preliminary toehold as prices for exoskeleton suits fall and the technology matures. That’s precisely the path Wandercraft has taken, and it believes the time for a personal device has come. To that end, most of this new round of financing will be used by Wandercraft to fulfill the company’s mission of “mobility for all” through the continued development, then launch, of the new Personal Exoskeleton for outdoor and home use. The funding will also allow Wandercraft to accelerate the deployment of Atalante, its pioneering CE-marked rehabilitation exoskeleton, in the USA.

    “We are thrilled to lead this round of financing and to bring together these responsible investors in order to make the world better,” says Alan Quasha, Chairman and CEO of Quadrant Management, which participated in the round. “Wandercraft has developed the world’s most advanced technology in walk robotics and markets the first self-stabilized exoskeletons. We share Wandercraft’s ambition to provide a new solution for mobility, and to improve the health of millions of people using wheelchairs. We believe that they will transform mobility and become the leading player in the market.” Should Wandercraft succeed in bringing a personal exoskeleton to market, the next few years will be an interesting bellwether for a technology that hasn’t yet lived up to its founding promise.


    Robotaxis get new learning strategies to face “the edge”

    Motional
    Taxis are excellent candidates to become the go-to early use case for self-driving cars in the wild. But to get there, autonomous vehicle developers face a daunting challenge: equipping their cars to meet an array of scenarios that can’t be fully anticipated.

    AI and deep learning tools have been the secret sauce for self-driving car programs, endowing vehicles with the adaptability to face new challenges and learn from them. The most difficult of these challenges are what developer Motional calls “edge situations,” and when the goal is to build a safe robotaxi, identifying and solving for these outliers is a technological imperative. To do this, Motional has developed its own Continuous Learning Framework, or CLF, that helps its vehicles get smarter with each mile they drive. While Motional’s new IONIQ 5 robotaxi runs on electricity, a spokesperson recently quipped that the CLF is powered by data: terabytes are collected every day by Motional’s vehicles mapping cities throughout the U.S. The CLF works like a closed-loop flywheel: each step in the process is important, and completing one step advances the next. As the inflow of data coming from the company’s vehicles grows, the flywheel is expected to turn faster, making it easier to accelerate the pace of learning, solve for edge cases, map new operational design domains (ODDs), and expand into new markets. This machine learning-based system allows Motional to automatically improve performance as it collects more data, and it does this by specifically targeting the rare cases.

    For a deeper dive into how the CLF process and data help Motional improve performance, I recently connected with Sammy Omari, VP of Engineering & Head of Autonomy.

    GN: Tell us more about the Continuous Learning Framework (CLF) and why Motional developed it.

    Sammy Omari: At Motional, we’re developing level 4 robotaxis – autonomous vehicles that do not require a driver at the steering wheel.
    We will be deploying our robotaxis in major markets through our partnerships with ride-hailing networks. In order to achieve wide-scale level 4 deployments, our vehicles need to be able to recognize and safely navigate the many unpredictable and unusual road scenarios that human drivers also face. To reach this level of sophistication, we’ve developed a Continuous Learning Framework (CLF), which uses machine learning principles to make our AVs more experienced and safer with every mile they drive. Motional’s CLF is a revolutionary machine learning-based system that allows the team to automatically improve performance as we collect more data – and it does this by specifically mining for the rare situations that our vehicles might encounter. The CLF works like a closed-loop flywheel: each step in the process is important, and completing one step advances the next. The entire system is powered by real-world data collected by our vehicles.

    GN: What type of rare cases or outliers does Motional target through the CLF?

    Sammy Omari: The vast majority of the time, driving from one point to another is uneventful and relatively mundane. Occasionally, however, something unusual or “exciting” happens, spanning a broad range of rare and unique driving experiences – called edge cases. The edge cases that Motional targets through the CLF can include vehicles running red lights or violating the right of way, pedestrians darting into traffic, cyclists carrying surfboards on their backs, racing trikes, and other types of road users or behavior that we don’t encounter every day.

    GN: How does Motional utilize the data it gathers to help improve vehicle performance?

    Sammy Omari: Through the CLF, we’re able to find those rare edge cases in large volumes of data, create training data through automatic and manual data annotation, retrain our machine learning models using that data, and then evaluate the updated models. Motional’s Scenario Search Engine allows developers to quickly search Motional’s vast drivelog database so they can introspect and visualize the results in seconds. This scenario query can run every time our autonomous vehicles are on the road collecting data. When we collect a sufficient number of samples and expand our training data, we can then retrain the machine learning models. We’ve built this machine learning-based flywheel that allows us to automatically improve performance as we collect more data – and it does this by specifically targeting the rare edge cases. As the inflow of data coming from our vehicles grows, the flywheel will turn faster, making it easier to accelerate the pace of learning, solve for edge cases, map new ODDs, and expand into new markets.

    GN: What does this mean for Motional’s future growth?

    Sammy Omari: Our innovative approach to machine learning helps us create smarter, safer autonomous vehicles that can navigate a wide range of complex environments.
    This allows us to deploy our vehicles in new markets faster, which ultimately will improve road safety more broadly.
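The flywheel Omari describes (search drivelogs for edge cases, grow the training set, retrain once enough samples accumulate) can be sketched as a toy loop. Everything here is hypothetical: the tags, the threshold, and the functions stand in for Motional's Scenario Search Engine and retraining pipeline, which are not public.

```python
# Hypothetical edge-case tags, standing in for scenario-search queries.
EDGE_TAGS = {"red_light_runner", "jaywalker", "surfboard_cyclist"}
RETRAIN_THRESHOLD = 3  # assumed minimum new samples before retraining

def is_edge_case(event: dict) -> bool:
    """Stand-in for a scenario-search query over one drivelog event."""
    return event.get("tag") in EDGE_TAGS

def flywheel_step(drivelog, training_set, model_version):
    """One turn of the flywheel: search -> annotate -> maybe retrain.

    Returns the grown training set and the (possibly bumped) model version.
    """
    new_samples = [e for e in drivelog if is_edge_case(e)]
    training_set = training_set + new_samples      # "annotation" step
    if len(new_samples) >= RETRAIN_THRESHOLD:
        model_version += 1                         # stand-in for retrain + evaluate
    return training_set, model_version
```

The closed-loop nature is the point: each pass over fresh drivelogs enlarges the training set, and only sufficiently novel batches trigger a retrain, so the loop speeds up as more vehicles feed it data.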


    Can AI save amateur soccer from referee shortage?

    Closeup of a Soccer Player Legs in Action
    Getty Images/iStockphoto
    My wife plays in a women’s soccer league, and the games tend to get pretty competitive. Good thing, then, that the league provides two referees for each game (and, um, maybe bad thing for my wife that those refs come packing yellow cards).

    But not every would-be Mia Hamm is so lucky. Amateur soccer, and particularly youth soccer, is undergoing a major referee shortage owing in part to the pandemic and in part to the awful treatment refs tend to endure from cranky and over-agitated parents. The position tends to be low-paying, and lots of former refs have simply had enough. Without some kind of ref presence, competitive soccer, which requires judgment calls best left to a neutral arbiter, is all but impossible. Can AI provide a kind of last-ditch stopgap?

    A software startup called CoCoPIE, which brings AI capability to off-the-shelf mobile devices, believes it has the technology for just such an application. CoCoPIE recently announced a partnership with Cognizant to develop a set of super-resolution solutions to enhance end-users’ multimedia viewing experience, creating high-resolution images and videos. As part of the new partnership, Cognizant and CoCoPIE will work in tandem to achieve real-time processing by creating advanced deep neural network-based (DNN) solutions, which, because they can run on consumer devices, gives the technology interesting real-world reach.

    One possible application of CoCoPIE’s AI is to alleviate the current amateur soccer referee shortage. While the application may not be the best bet for determining whether a slide tackle is fair play, it could be a great way to call out-of-bounds or offside via mobile phone. That could provide a crucial extra set of eyes, allowing a single human ref to focus on harder-to-automate fouls.
    With an edge-AI referee, games for which there are few human refs could conceivably continue even when there is limited connectivity to the cloud, and rules can be enforced simultaneously at multiple games. “This partnership with CoCoPIE will allow us to further enhance our customers’ mobile and edge device experience,” says James Rowley, Associate Director of Communications, Media and Technology at Cognizant Worldwide. “We look forward to leveraging CoCoPIE’s advanced AI software technology to provide real-time video stream processing while still maintaining high accuracy, ultimately providing our customers with notable performance gains and higher resolution images and videos.” The key here is that CoCoPIE’s tech gives smartphones the real-time and live AI capabilities previously possible only on servers or dedicated AI accelerators.
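The "easy to automate" calls mentioned above, such as out-of-bounds, reduce to simple geometry once a tracker has the ball's position in field coordinates. The sketch below is a hypothetical toy, not CoCoPIE's system: a real pipeline would first use the DNN to detect the ball in each frame and map pixel coordinates onto the field.

```python
# Hypothetical field dimensions (a full-size pitch, in meters); an amateur
# field would be configured to its actual measurements.
FIELD_LENGTH_M = 100.0
FIELD_WIDTH_M = 64.0

def ball_out_of_bounds(x: float, y: float) -> bool:
    """True if the tracked ball position lies outside the field rectangle.

    Assumes (0, 0) is one corner flag and axes run along the touchline
    and goal line.
    """
    return not (0.0 <= x <= FIELD_LENGTH_M and 0.0 <= y <= FIELD_WIDTH_M)
```

A ball at midfield stays in play, while one past the goal line or touchline is flagged; offside detection would follow the same pattern but compare the attacker's position against the second-to-last defender's at the moment of the pass.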

    “Through CoCoPIE’s proprietary technology on compression-compilation codesign, AI model optimization and automatic compiler-level code generation are optimized hand-in-hand,” according to the company. Technology is playing an increasingly important role in soccer at all levels. Goal-line and semi-autonomous offside technology are being embraced by FIFA, for example. It may not be long before a version of the same capability is harnessed for amateur play using smartphones.


    New Nuro robot? Think Storm Trooper meets Tayo the Bus

    Nuro
    As the pandemic roils on under a fresh wave of cases, the delivery robot wars continue to heat up. For technology developers, that means a new generation of robotic platforms built to capitalize on the growing recognition among major brands that autonomous delivery can increase efficiency and scale as more people dine at home more often. Nuro, a relative newcomer but already a leader in the space, just unveiled its third-generation zero-occupant autonomous delivery vehicle. Designed with delivery in mind, the new Nuro can carry more goods and enable more deliveries, with twice the cargo volume of the company’s current vehicle. The automotive production-grade vehicle will also feature modular inserts to customize storage, new temperature-controlled compartments to keep goods warm or cool, and enhancements to further improve safety for pedestrians outside the vehicle.

    It’s also pretty cute. If you have young kids, you might be familiar with the cartoon Tayo the Little Bus. Mix in a little Storm Trooper attitude et voilà, the new Nuro!

    Overall, the market for autonomous mobile robots (AMRs) and autonomous ground vehicles (AGVs) is forecast to generate over $10bn by 2023, according to Interact Analysis, and that prediction relies on data from before the COVID-19 pandemic. Delivery robots in particular are quickly coming of age as COVID-19 lingers and touchless fulfillment becomes the norm. Sidestepping municipal red tape, enterprising companies like Nuro and Starship Technologies have launched pilot programs in controlled-access spaces, such as college campuses.

    Nuro’s announcement follows a $600 million Series D funding round closed in Q4 2021 and the AV leader’s long list of strategic partnerships with notable quick service and convenience brands.
    Notably, Nuro also formalized a commitment to leverage the company’s third-generation vehicle with long-standing partner and investor Kroger. “Five years ago, we set out to build an autonomous vehicle and delivery service designed to run errands, giving people back valuable time. Through our strategic partnerships with Domino’s, FedEx, Kroger, 7-Eleven and more, we are doing just that—improving road safety, sustainability and overall access to goods delivery,” said Dave Ferguson, Nuro co-founder and president. “With the introduction of our new flagship model and the ground-breaking of our new production facility—one of the industry’s first end-of-line manufacturing facilities in America—we are excited about the opportunity to fulfill our vision of improving everyday life through autonomous delivery at scale.” The new model will be produced in a supplier partnership with BYD North America and completed at Nuro’s new $40 million end-of-line manufacturing facility and world-class closed-course test track in southern Nevada. The facilities have the capacity to manufacture and test tens of thousands of delivery vehicles per year to ensure they are ready for deployment. BYD North America—part of one of the largest OEM networks of electric vehicles in the world—will assemble globally sourced hardware components for the vehicle platforms; Nuro will complete the final steps of manufacturing and make the autonomous vehicles ready for deployment.

    “BYD attaches great importance to this collaboration with Nuro. As one of the world’s leading electric vehicle manufacturers and a turnkey solution provider, BYD will leverage the manufacturing capacity of its Lancaster facility to support Nuro and bring more jobs to California,” said Stella Li, Executive Vice President of BYD Co. Ltd. and President of BYD Motors Inc. “We are confident the development of this transformative autonomous delivery vehicle will create a better environment for us all.”

    There’s an interesting onshoring story happening around the automation sector — which is somewhat ironic given the association of robots with human layoffs. Nuro’s southern Nevada facilities are expected to be fully operational this year and will allow the company to manufacture its autonomous vehicles in the USA. The facilities will create an initial 250 highly skilled career opportunities with long-term growth potential in the autonomous vehicle industry. Construction on the manufacturing facility officially kicked off in November 2021. As American robotics developers look to make a case to local governments to let robots loose on city streets, where they will end up right at customers’ doorsteps, there’s every incentive to keep the technology, from development to manufacturing, onshore. Lingering supply chain issues have only emphasized this point.


    Golden opportunity: Savvy business alliances propel the robotics sector

    6 River Systems
    The fulfillment economy has exploded during the pandemic, as has competition among automation technology providers, whose robotic technology is becoming critical amid widespread labor shortages and ballooning demand. That’s the good news. The bad news, if you’re a robotics firm with a great product and opportunity as far as the horizon, is that scaling hardware distribution, whether via direct sales or as-a-service, is extremely complex, typically takes massive capital outlays, and is fraught with the perils of miscalculation. What’s an emerging robotics firm to do?

    One model that’s becoming increasingly important for savvy businesses is to partner with an existing brand with broad reach and pre-existing infrastructure. Examples include Kinova teaming up with Northrop Grumman to help distribute a small manipulator to existing customers and Robotiq partnering with Universal Robots on off-the-shelf robotic tooling. In the latest example, 6 River Systems, LLC, a leading fulfillment solutions provider, just announced a new initiative to support warehouse efficiencies by teaming up with Ricoh USA. Under the arrangement, Ricoh’s service solutions business unit will augment 6 River Systems’ existing service team for its collaborative robots – called “Chucks” – solving for a crucial weakness in any young enterprise technology company’s bid to scale: giving customers an ample support network.

    “The demand for our automated retail solution is significant, especially with retailers continually looking for ways to get their products into consumers’ hands faster via seamless experiences,” says Eran Frenkel, Vice President of Technical Operations, 6 River Systems. “By partnering with Ricoh, we’re able to focus on making our solutions more widely available, which ultimately helps our customers quickly and efficiently meet their fulfillment goals.”

    Like other fulfillment automation providers, 6RS is on a bit of a tear during the pandemic.
    The company has provided solutions for major fulfillers and brands like Crocs, which implemented 6RS’ wall-to-wall fulfillment solution, including its collaborative mobile robot Chuck. As I wrote last year, Crocs has seen a 182% pick rate improvement with the 6RS system, illustrating a key reason fulfillers are turning to automation in such numbers. This increase in throughput was especially critical during the holiday peak season. In general, robots have become essential to scaling, and the solutions can now be brought online with unprecedented speed and minimal downtime. Not surprisingly, according to Statista, the global warehouse automation market is predicted to grow from $15 billion in 2019 to $30 billion by 2026.

    But the warehouse automation sector, while maturing rapidly in the Amazon Prime era, is still nascent, with many of the players less than a decade old. That’s a short time to build a massive global, or even national, distribution and support infrastructure. Collaborating seems like a key to doing just that efficiently.

    “Our collaboration with 6 River Systems is a prime example of how our stable and trusted infrastructure – coupled with a team of more than 10,000 service delivery professionals supporting and maintaining more than one million devices across the U.S. – helps solve our customers’ problems,” says Jim Kirby, Vice President, Service Advantage, Ricoh USA, Inc. “Together, we are addressing some of the biggest challenges and opportunities in retail today, including supply chain operational efficiency such as retail and warehouse automation. By expertly assisting with service and support for companies like 6 River Systems, we are helping them maintain focus on what matters most – innovation that solves supply chain hurdles and moves business forward.”

    It’s a great example of how smart robotics firms are taking advantage of the growth opportunities of 2022 and beyond through effective collaborations designed to scale at speed.


    Is LiDAR on its way out? The business case for saying goodbye

    Pixabay

    Among the deluge of robotics predictions you’re bound to encounter this year, there’s one you should pay particular attention to: the way robots “see” is fundamentally changing, and that’s going to have a huge impact on the utility, cost, and proliferation of robotic systems.

    Of course, it’s a bit of a mischaracterization to talk about robots “seeing,” or at least a reductive shorthand for a complex interplay of software and hardware that’s allowing robots to do much more sophisticated sensing with much less costly equipment. Machine vision incorporates a variety of technologies and increasingly relies on software in the form of machine learning and AI to interpret and process data from 2D sensors in ways that would have been unachievable even a short time ago. With this increasing reliance on software comes an interesting shift away from highly specialized sensors like LiDAR, long a staple for robots operating in semi-structured and unstructured environments. Robotics experts combining conventional cameras with AI software are coming to find that LiDAR isn’t actually necessary. Rather, machine vision is providing higher quality mapping at a more affordable cost, especially when it comes to indoor robotics and automation.

    See also: 2022: A major revolution in robotics.

    To learn more about the transformation underway, I connected with Rand Voorhies, CTO & co-founder at inVia Robotics, about machine vision, the future of automation, and whether LiDAR is still going to be a foundational sensor for robots in the years ahead.

    GN: Where have the advances come in machine vision, the sensors or the software?

    Rand Voorhies: While 2D imaging sensors have indeed seen continuous improvement, their resolution/noise/quality has rarely been a limiting factor in the widespread adoption of machine vision.
While there have been several interesting sensor improvements in the past decade (such as polarization sensor arrays and plenoptic/light-field cameras), none have really gained traction, as the main strengths of machine vision sensors are their cost and ubiquity. The most groundbreaking advancement has really been along the software front through the advent of deep learning. Modern deep learning machine vision models seem like magic compared to the technology from ten years ago. Any teenager with a GPU can now download and run object recognition libraries that would have blown the top research labs out of the water ten years ago. The fact of the matter is that 2D imaging sensors capture significantly more data than a typical LiDAR sensor – you just have to know how to use it.
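The claim that a 2D imager captures far more raw data than a typical LiDAR can be checked with a back-of-envelope calculation. The sensor specs below are illustrative assumptions (a commodity 1080p camera and a representative 16-beam LiDAR), not figures from the interview:

```python
# Back-of-envelope comparison of raw data rates: a commodity 2D camera
# versus a representative multi-beam LiDAR. All specs here are
# illustrative assumptions, not figures from the interview.

def camera_bytes_per_second(width, height, channels, fps):
    """Raw (uncompressed) data rate of a 2D image sensor."""
    return width * height * channels * fps

def lidar_bytes_per_second(points_per_second, bytes_per_point):
    """Raw data rate of a LiDAR returning (x, y, z, intensity) points."""
    return points_per_second * bytes_per_point

# A 1080p color camera at 30 fps.
cam = camera_bytes_per_second(1920, 1080, 3, 30)

# A 16-beam LiDAR at ~300k points/s, 16 bytes per point (4 floats).
lidar = lidar_bytes_per_second(300_000, 16)

print(f"camera: {cam / 1e6:.0f} MB/s, lidar: {lidar / 1e6:.1f} MB/s")
# Even a modest camera streams well over an order of magnitude more raw
# data than the LiDAR; the hard part is extracting meaning from it.
```

Under these assumed specs, the camera produces roughly 187 MB/s of raw pixels against about 4.8 MB/s of point data, which is the gap deep learning software has learned to exploit.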

    While cutting-edge machine vision has been improving in leaps and bounds, other factors have also contributed to the adoption of even simpler machine vision techniques. The continual evolution of battery and motor technology has driven component costs down to the point where robotic systems can be produced that provide a very strong ROI to the end user. Given a good ROI, customers (in our case, warehouse operators) are happy to annotate their environment with “fiducial” stickers. These stickers are almost like a cheat code for robotics, as very inexpensive machine vision solutions can detect the position and orientation of a fiducial sticker with ultra-high precision. By sticking these fiducials all over a warehouse, robots can easily build a map that allows them to localize themselves.

    GN: Can you give a little context on LiDAR adoption? Why has it become such a standardized sensing tool in autonomous mobility applications? What were the early hurdles to machine vision that led developers to LiDAR?

    Rand Voorhies: Machine vision has been used to guide robots since before LiDAR existed. LiDAR started gaining significant popularity in the early 2000s due to groundbreaking academic research from Sebastian Thrun, Daphne Koller, Michael Montemerlo, Ben Wegbreit, and others that made processing data from these sensors feasible. That research and experience led to the dominance of the LiDAR-based Stanley autonomous vehicle in the DARPA Grand Challenge (led by Thrun), as well as to the founding of Velodyne (by David Hall, another Grand Challenge participant), which produces what many now consider the de facto autonomous car sensor. The Challenge showed that LiDAR was finally a viable technology for robots navigating unknown, cluttered environments at high speed.
Since then, there has been a huge increase in academic interest in improving algorithms for processing LiDAR sensor data, with hundreds of papers published and PhDs minted on the topic. As a result, graduates have been pouring into the commercial space with heaps of academic LiDAR experience under their belts, ready to put theory into practice.

In many cases, LiDAR has proven to be very much the right tool for the job. A dense 3D point cloud has long been the dream of roboticists and can make obstacle avoidance and pathfinding significantly easier, particularly in unknown, dynamic environments. However, in some contexts, LiDAR is simply not the right tool for the job and can add unneeded complexity and expense to an otherwise simple solution. Determining when LiDAR is right and when it’s not is key to building robotic solutions that don’t just work but also provide positive ROI to the customer.

At the same time, machine vision has advanced as well. One of the early hurdles in machine vision can be understood with a simple question: “Am I looking at a large object that’s far away, or a tiny object that’s up close?” With traditional 2D vision, there was simply no way to differentiate. Even our brains can be fooled, as seen in funhouse perspective illusions.
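The ambiguity in that question can be made concrete with a toy pinhole-camera model, along with two of the ways vision systems resolve it. This is a hedged sketch: the focal length, object sizes, and disparity values are invented for illustration, not taken from any real camera:

```python
# Toy pinhole-camera sketch of the 2D scale ambiguity, plus two ways
# machine vision resolves it. All numbers are invented for illustration.

FOCAL_PX = 1000.0  # assumed camera focal length, in pixels

def apparent_size_px(real_size_m, distance_m, focal_px=FOCAL_PX):
    """Projected size of an object under the pinhole model."""
    return focal_px * real_size_m / distance_m

# The ambiguity: a 0.6 m car tire at 12 m and a 0.05 m toy tire at 1 m
# both project to 50 pixels. A single 2D image cannot tell them apart.
big_far = apparent_size_px(0.6, 12.0)
small_near = apparent_size_px(0.05, 1.0)
assert big_far == small_near == 50.0

# Resolution 1: scene context. If we know real tires are ~0.6 m tall,
# the projection can be inverted to recover distance.
def distance_from_known_size(real_size_m, size_px, focal_px=FOCAL_PX):
    return focal_px * real_size_m / size_px

assert distance_from_known_size(0.6, 50.0) == 12.0  # it's the real tire

# Resolution 2: stereo vision. With two cameras a known baseline apart,
# the pixel disparity between the two views gives depth directly.
def depth_from_stereo(baseline_m, disparity_px, focal_px=FOCAL_PX):
    return focal_px * baseline_m / disparity_px

assert depth_from_stereo(0.1, 8.0) == 12.5  # 0.1 m baseline, 8 px shift
```

The third workaround mentioned in the interview, fusing camera motion from an IMU with the changing images, rests on the same projection geometry but is substantially more involved, so it is omitted here.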
    Modern machine vision uses a range of techniques to overcome this, including:

    • Estimating the distance of an object by understanding the larger context of the scene, e.g., I know my camera is 2m off the ground, and that car’s tires span 1000 pixels along the street, so it must be 25m away.
    • Building a 3D understanding of the scene by using two or more overlapping cameras (i.e., stereo vision).
    • Building a 3D understanding of the scene by “feeling” how the camera has moved, e.g., with an IMU (inertial measurement unit, sort of like a robot’s inner ear) and correlating those movements with the changing images from the camera.

    Our own brains use all three of these techniques in concert to give us a rich understanding of the world around us that goes beyond simply building a 3D model.

    GN: Why is there a better technological case for machine vision over LiDAR for many robotics applications?

    Rand Voorhies: LiDAR is well suited for outdoor applications where there are a lot of unknowns and inconsistencies in terrain. That’s why it’s the best technology for self-driving cars. In indoor environments, machine vision makes the better technological case. In a warehouse, where light bounces off many similar objects, LiDAR-guided robots can easily get confused. They have a difficult time differentiating, for example, a box of inventory from a rack of inventory; both are just objects to them. When the robots are deep in the aisles of large warehouses, they often get lost because they can’t differentiate their landmarks. Then they have to be re-mapped.

    By using machine vision combined with fiducial markers, our inVia Picker robots know exactly where they are at any point in time. They can “see” and differentiate their landmarks. Nearly all LiDAR-based warehouse/industrial robots require some fiducial markers to operate; machine vision-based robots require more markers.
The latter requires additional time and cost to deploy long rolls of stickers versus fewer individual stickers, but when you factor in the time and cost of performing regular LiDAR mapping, the balance swings far in favor of pure vision. At the end of the day, 2D machine vision in warehouse settings is cheaper, easier, and more reliable than LiDAR.

If your use of robots does not require very high precision and reliability, then LiDAR may be sufficient. However, for systems that cannot afford any loss in accuracy or uptime, machine vision systems can really show their strengths. Fiducial-based machine vision systems allow operators to put markers exactly where precision is required. With inVia’s system, which picks and places totes off of racking, placing those markers on the totes and the racking provides millimeter-level accuracy to ensure that every tote is placed exactly where it’s supposed to go, without fail. Trying to achieve this with a pure LiDAR system would be cost- and time-prohibitive for commercial use.

GN: Why is there a better business case?

Rand Voorhies: On the business side, the case is simple as well: Machine vision saves money and time. While LiDAR technology has decreased in cost over the years, it’s still expensive. We’re committed to finding the most cost-effective technologies and components for our robots in order to make automation accessible to businesses of any size. At inVia, we’re driven by an ethos of making complex technology simple. The difference in the time it takes to fulfill orders with machine vision versus with LiDAR and all of its re-mapping requirements is critical. It can mean the difference between getting an order to a customer on time and a day late. Every robot that gets lost due to LiDAR re-mapping reduces the system’s ROI. The hardware itself is also cheaper when using machine vision. Cameras are cheaper than LiDAR, and most LiDAR systems need cameras with fiducials anyway.
With machine vision, there’s an additional one-time labor cost to apply fiducials. However, applying fiducials once to totes and racking is extremely cheap labor-wise and results in a more robust system with less downtime and fewer errors.

GN: How will machine vision change the landscape with regard to robotics adoption in sectors such as logistics and fulfillment?

Rand Voorhies: Machine vision is already making an impact in logistics and fulfillment centers by automating rote tasks to increase the productivity of labor. Warehouses that use robots to fulfill orders can supplement a scarce workforce and let their people manage the higher-order tasks that involve decision-making and problem-solving. Machine vision enables fleets of mobile robots to navigate the warehouse, performing key tasks like picking, replenishing, inventory moves, and inventory management. They do this without disruption and with machine-precision accuracy. Robotics systems driven by machine vision are also removing barriers to adoption because of their affordability. Small and medium-sized businesses that used to be priced out of the market for traditional automation are able to reap the same benefits of automating repetitive tasks and, therefore, grow their businesses.

GN: How should warehouses go about surveying the landscape of robotics technologies as they look to adopt new systems?

Rand Voorhies: There are a lot of robotic solutions on the market now, and each of them uses very advanced technology to solve a specific problem warehouse operators are facing. So the most important step is to identify your biggest challenge and find the solution that solves it. For example, at inVia we have created a solution that specifically tackles a problem unique to e-commerce fulfillment: fulfilling e-commerce orders requires random access to a high number of different SKUs in individual counts.
That’s very different from retail fulfillment, where you’re retrieving bulk quantities of SKUs and shipping them out in cases and/or pallets. The two operations require very different storage and retrieval setups and plans. We’ve created proprietary algorithms that specifically create faster paths and processes to retrieve randomly accessed SKUs.

E-commerce is also much more labor-dependent and time-consuming, and, therefore, costly. So those warehouses want to adopt robotics technologies that can help them reduce the cost of their labor, as well as the time it takes to get orders out the door to customers. They have SLAs (service level agreements) that dictate when orders need to be picked, packed, and shipped. They need to ask vendors how their technology can help them eliminate obstacles to meeting those SLAs.
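The fiducial-based localization Voorhies describes, stickers at surveyed positions acting as landmarks for a cheap camera, can be sketched in a few lines. This is a hedged toy sketch, not inVia’s implementation: the marker layout, focal length, and detection values are invented, and a real system would use a detection library such as ArUco or AprilTag for the hard perception step.

```python
import math

# Toy sketch of fiducial-based localization: markers with known world
# positions let a robot fix its own position from one camera detection.
# All constants and values here are invented for illustration.

FOCAL_PX = 800.0      # assumed camera focal length, in pixels
MARKER_SIZE_M = 0.15  # physical side length of each square fiducial

# Map from marker ID to its surveyed (x, y) position in the warehouse.
MARKER_MAP = {7: (10.0, 4.0), 8: (12.0, 4.0)}

def locate(marker_id, observed_px, bearing_rad, heading_rad):
    """Estimate robot (x, y) from one detected marker.

    observed_px : apparent side length of the marker in the image
    bearing_rad : angle to the marker in the camera frame
    heading_rad : robot heading in the world frame (e.g., from odometry)
    """
    # Pinhole model: range to the marker from its apparent size.
    rng = FOCAL_PX * MARKER_SIZE_M / observed_px
    mx, my = MARKER_MAP[marker_id]
    angle = heading_rad + bearing_rad  # world-frame direction to marker
    # The robot sits `rng` meters behind the marker along that direction.
    return mx - rng * math.cos(angle), my - rng * math.sin(angle)

# Marker 7 appears 40 px wide, dead ahead, with the robot facing +x:
# range = 800 * 0.15 / 40 = 3.0 m, so the robot is at (7.0, 4.0).
x, y = locate(7, 40.0, 0.0, 0.0)
print(f"robot at ({x:.1f}, {y:.1f})")
```

Because each marker’s world position is known exactly, every detection yields an absolute fix, which is why fiducial systems avoid the drift and re-mapping issues described for LiDAR above.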

  • in

    Pandemic cravings: What robots delivered in 2021 by region

    Starship Robotics
    Look, it was a weird year. We were supposed to be emerging from a socio-political cataclysm, supposed to be getting back on track, but in a lot of ways, the drudgery just kept on keeping on. This is why it makes a lot of sense that comfort food ranked high on the list of items folks ordered from shops and restaurants that were then delivered by an emergent class of autonomous delivery robots.


    If you live in a city or dense suburb and don’t have delivery robots in your area yet, brace yourself: They’re coming. Delivery bots are designed to reduce car traffic and increase efficiency for last-mile urban delivery. They’re also pretty amazing data collection devices, which advocates say will help streamline operations and reduce waste, but which will also raise profound privacy worries in the near future. One of the leading providers in the field, Starship Technologies, recently released its 2021 Robot Wrap Up, “highlighting the most popular and quirky requests and orders” that its fleet of more than 1,000 robots worldwide received in the past year.

    And yeah … comfort food. In the U.S., that meant things like boneless chicken wings, which ranked most popular in the western states, and curly fries, which topped orders in the midwest. Chicken fingers were popular in the east and the south. Interestingly, pizza didn’t make the cut on Starship’s most-ordered list, which almost certainly says more about the distribution of the technology than about market trends. In fact, pizza is having its own automation makeover, and because pizza preparation and handling are distinct from those of many other fast foods, which are fried, autonomous pizza-making and delivery technology seems endemic to the sector rather than generalized (see: 2022 will be the year of the pizza-making robot).

    When you look internationally, the picture starts to change (and the U.S., perhaps not surprisingly, doesn’t come out looking very healthy). Among British consumers, the most popular items delivered by Starship’s fleet were breakfast staples, including bread, eggs, and bananas. Bananas!

    One of the primary markets for Starship’s robots has been college campuses. That’s because local regulations are still a patchwork of inconsistent or non-existent guidelines governing the use of robots on public streets. Colleges, however, are contained ecosystems, often with their own governing authorities.
It was, of course, a tough year for college kids, who once again saw much of campus life canceled amid quarantine orders. Perhaps that’s why the record for the most orders by a single individual goes to an unidentified student at Arizona State University, who placed 230 orders with Starship during 2021. Go Sun Devils…

    Oregon State students claim the dubious record of having the most late-night orders. (Study sessions, maybe?) And parents of Northern Arizona University students will be proud to know that their students placed the most early-morning orders.

    What does all this data tell us about robot delivery now and in the future? Not much, honestly. Though Starship has the most widely distributed delivery fleet, its footprint is still fairly small. But it is growing, and the volume is becoming hard to ignore. Starship robots traveled, in aggregate, more than three million miles making deliveries in 2021, which the company proudly notes is the equivalent of 13 trips to the Moon. The fleet now makes 100,000 road crossings every day and, over the lifetime of the company, which was founded in 2014, has completed more than two million commercial deliveries globally.

    Now that 2022 is upon us, with a fresh wave of news about a new variant and unseen dimensions of unrest and chaos, it’s a safe bet we’ll see another important growth milestone for autonomous delivery. Comfort food, anyone?