More stories


    Famed “snakebot” can now swim

    A robot that’s developed something of a mythology over the years now has a new trick. Snakebot, named ground rescue robot of the year in 2017 and helping its creator win the “Oscars of automation” in 2019, can now swim. The robot consists of several actuated joints that work together to produce a range of motions. Snakebot can slither, roll, stand up to pull itself over obstacles, and climb a variety of objects and surfaces. CMU robotics professor Howie Choset and systems scientist Matt Travers are the brains behind Snakebot. Their creation can propel itself into confined spaces that dogs and people cannot reach.

    “We can go places that other robots cannot,” Howie Choset, the Kavčić-Moura Professor of Computer Science, told CMU’s Aaron Aupperlee. “It can snake around and squeeze into hard-to-reach underwater spaces.”

    Snakebot helped search through the rubble for survivors after the 2017 Mexico City earthquake, when Travers led a team there on a search-and-rescue mission. Swimming is a new trick, and it adds impressive utility to a simple yet surprisingly capable modular design. The team from the Biorobotics Lab in the School of Computer Science’s Robotics Institute recently tested the new Hardened Underwater Modular Robot Snake (HUMRS) in a university pool, directing the robot through an underwater obstacle course of hoops.

    The robot may have defense applications. It was developed under a grant from the Advanced Robotics for Manufacturing (ARM) Institute to help the Navy inspect ships, submarines, and underwater infrastructure for damage or as part of routine maintenance. Currently that work is done by divers or delayed until ships can reach a dry dock, both of which involve substantial infrastructure, time, and expense. But a robotic snake that can squeeze into tight spaces can do much of the inspection remotely.
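The article describes Snakebot's motion as coming from many actuated joints working together. A common way to coordinate such joints in the snake-robot literature (not necessarily Snakebot's actual controller) is a travelling sinusoidal wave, where each joint follows a sine offset in phase from its neighbor. A minimal sketch:

```python
import math

def serpenoid_angles(n_joints, t, amplitude=0.6, spatial_freq=1.2,
                     temporal_freq=2.0, offset=0.0):
    """Joint angles (radians) for a travelling-wave 'slither' gait.

    Each joint lags its neighbor by a fixed phase, so a sinusoidal
    wave propagates down the body and pushes the robot forward.
    All parameter values here are illustrative, not Snakebot's.
    """
    return [offset + amplitude * math.sin(temporal_freq * t + spatial_freq * i)
            for i in range(n_joints)]

# One control tick for a 16-joint snake at t = 0.5 s.
angles = serpenoid_angles(16, 0.5)
assert len(angles) == 16
assert all(abs(a) <= 0.6 for a in angles)  # bounded by the amplitude
```

Varying the amplitude, phase offset, and bias term is one way different gaits (slithering, rolling, standing up) can be produced from the same set of modular joints.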

    “If they can get that information before the ship comes into a home port or a dry dock, that saves weeks or months of time in a maintenance schedule,” said Matt Fischer, the program manager at the ARM Institute working on the project, who served in the Navy for three years. “And in turn, that saves money.”

    Interestingly, Fischer was once tasked with crawling into the ballast tanks of a submarine during his service, lending a personal dimension to helping equip a robot for the task. Of course, military technology often transfers to commercial applications. Infrastructure inspection has been a major area of development and deployment for robots, which have found ready adopters among customers like the oil and gas industry and utilities. Outside the military, the robots could inspect underwater pipes for damage or blockages, assess offshore oil rigs, or check the integrity of a tank while it is filled with liquid. The robot could be used to inspect and maintain any fluid-filled system, said Nate Shoemaker-Trejo, a mechanical and mechatronics engineer in the Biorobotics Lab working on the submersible snakebot.

    “I’m surprised that we made this robot work as fast as we did,” Choset said. “The secret is the modularity and the people working on this technology at CMU.”

    Choset won the 2019 Engelberger Robotics Award, the world’s most prestigious robotics honor. At CMU, he’s led teams developing modular segmented robots, including a snake robot used in disaster relief.


    First gate-to-gate autonomous airplane flight

    A San Francisco-based company is claiming an aviation first with a gate-to-gate fully autonomous flight.

    The company, Xwing, is setting out to introduce autonomous technology for regional air cargo, an overlooked space in the global race for autonomy but, with its sub-500-mile predictable routes and significant commercial importance, an intriguing entry point for autonomous air travel. Xwing is betting it can gain ground amid growing unmet logistics demand with a human-supervised software stack that integrates with existing aircraft to enable regional pilotless flight.

    “Over the past year, our team has made significant advancements in extending and refining our AutoFlight system to seamlessly integrate ground taxiing, take-offs, landings and flight operations, all supervised from our mission control center via redundant data links,” says Marc Piette, CEO and founder of Xwing. “Additionally, our piloted commercial cargo operations have been delivering critical supplies, including COVID-19 vaccines, to remote communities since December 2020.”

    The recent news-making flight saw a Cessna Grand Caravan 208B leave the gate, taxi, take off, land, and return to the gate entirely on its own. The flight was remotely monitored, and all air traffic control interactions were handled from the ground.

    Recently, several companies have debuted air taxis, which promise to whisk passengers above traffic en route to their destination. Unmanned drones have long been a part of the aerial landscape, but drones aren’t the only kind of self-driving aerial vehicle regulators have been dealing with. It may seem a foregone conclusion that self-driving cars are on the way, but we’ve heard less about autonomous aircraft, as I’ve written. That’s changing. Following recent crashes related to failures in automated systems on board Boeing’s 737 MAX, you might expect consumer confidence to have eroded significantly. However, a recent ANSYS study found that wasn’t the case. In fact, 70% of consumers say they are ready to fly in autonomous aircraft in their lifetime.

    Xwing’s entry into the market seems well-timed. New reports project a global gap of 34,000 open pilot positions by 2025, and logistics are strained by growing demand for fast delivery. With e-commerce sales set to top $4.2 trillion, the success of e-commerce is inherently dependent on the efficiency of the air cargo industry. Xwing plans to leverage its technology for e-commerce logistics, enabling greater accessibility to small airports and more efficient cargo transportation.

    “The future of air transportation is autonomous,” Piette told us last year. “We believe the path to full autonomy begins with the air cargo market, and involves remote operators supervising fleets of unmanned aircraft.”


    Pandemic is pushing robots into retail at unprecedented pace

    One of the striking trends during the pandemic has been the acceptance of automation technologies by a previously tepid public. Retail in particular has accelerated development of automation, including robotics, which will result in a quick rollout over the next few years.

    A new survey by RetailWire and Brain Corp, an artificial intelligence (AI) company creating core technology in robotics, including cleaning robots, supports the conclusion that COVID-19 has hastened automation development and adoption. Robots used for tasks such as floor cleaning and shelf scanning, both in stores and in warehouses, are selling briskly, and sentiment among retailers broadly supports adoption. The survey results are included in an executive summary available online, “Robots in Retail: Examining the Autonomous Opportunity.”

    Among the top-line results, 64% of retailers believe it is important to have a clear, executable, and budgeted robotics automation strategy in place in 2021, including 77% of large retailers. That’s striking considering robots in retail weren’t really a thing as recently as the 2010s. Now, nearly half of the respondents to the survey say they will be involved with an in-store robotics project within the next 18 months.

    “The global pandemic brought the value of robotic automation sharply into focus for many retailers, and we now see them accelerating their deployment timelines to reap the advantages now and into the future,” said Josh Baylin, Senior Director of Strategy at Brain Corp. “Autonomous robots are versatile productivity partners that help keep stores clean, generate additional hours for employees, and help improve in-store customer experiences.”

    One of the big drivers of adoption during the pandemic has been the hyper-focus on cleanliness. Stores have ramped up cleaning protocols, and the end of the pandemic will leave lasting expectations. According to the survey, a large majority (72%) of respondents say they do not anticipate much change in consumer expectations toward in-store cleanliness even after vaccines are broadly distributed. About the same share of large retailers, 73%, say the importance of using robotics in warehouses or distribution centers has increased due to factors that emerged during the pandemic.

    The need to maintain safe and clean conditions has coincided with a growing awareness of the benefits of retail automation for data-driven insights. That’s precisely the sales pitch of companies like Simbe and Bossa Nova, which make robots that scan merchandise autonomously to find out-of-stocks and misplaced SKUs, but also to derive insights on buying habits and micro-trends, the kind of data that’s driven Amazon to the forefront of the retail pack.

    Robotic technology in retail has been gaining momentum, but RetailWire said it found the accelerated adoption trends expressed in the survey “stunning” and “surprisingly large.”

    “These are not the kinds of numbers indicative of an emerging technology in an early phase of deployment in retail, but of a technology just a few short years from widespread adoption,” according to the report. “In fact, as robotic technology gains a foothold in in-store operations, broader benefits are likely to fuel future growth, such as the ability to capture granular, real-time data about products on shelves and customer buying patterns, monitor pricing and planogram compliance, and keep tabs on out-of-stocks. Armed with this kind of data, retailers will be able to discover actionable insights, make smarter decisions, and increase store productivity.”


    Off road: Autonomous driving's new frontier requires a new kind of sensor

    While automakers and investors alike are placing big bets on autonomous vehicles, a remaining requirement for full integration of AVs is that automated technologies be able to operate on poorly marked road surfaces, on off-road terrain, and in inclement weather. By overcoming these last common obstacles encountered by traditional lidar- and camera-based sensors, the industry will take a critical step in the further development of ADAS and AVs that are both safe and reliable. A technology called Ground Positioning Radar (GPR) has shown incredible promise when it comes to reaching that final step. A company called WaveSense is currently the world’s only provider of GPR for precise localization of autonomous and highly automated vehicles, and it recently announced a funding round of $15 million. WaveSense has become an integral company within the ADAS industry: it was awarded the top autonomous driving project and best-in-show at the 2019 North American International Auto Show, and last year it appointed former Ford President Joe Hinrichs to its Board of Directors.

    I wanted to know what the future holds for GPR, as well as the role the technology will play in the crucial adoption phase of autonomous driving, so I reached out to WaveSense CEO Tarik Bolat.

    GN: What’s the biggest sensing capability hurdle currently to broad adoption of autonomous vehicles?

    Tarik Bolat: Current ADAS features enabled by lidar- and camera-based systems for autonomous vehicles are accurate to a certain degree but lack the reliability to deliver a consistently safe journey. Common driving conditions such as inclement weather, debris in the road, or a lack of clear lane markings or strong GPS signals can render the typical sensors useless and force drivers to take over, sometimes with little notice, or in the case of an AV, disengage. Considering these difficulties, there is a lack of consumer confidence in advanced ADAS capabilities, with a 2021 AAA survey reporting that 80% of drivers wanted “current vehicle safety systems, like automatic emergency braking and lane keeping assistance, to work better.” This indicates that while there is market demand for these technologies, current offerings are not meeting customer expectations. To meet the needs of today’s drivers, automated and autonomous vehicles need WaveSense’s ground positioning technology to help mitigate common issues, deliver automotive-grade reliability, and increase consumer confidence in ADAS programs.

    GN: How is GPR different from what’s out there in terms of capability?

    Tarik Bolat: The issue with today’s ADAS technologies such as lidars and cameras is that they rely solely on visible, static surface features like signs, buildings, or lane markings amid dynamic environments that are not always predictable, resulting in features that are hamstrung by their unreliability. Ground Positioning Radar (GPR) technology differentiates itself by peering directly into the Earth, which is very rich in features and stable over long periods of time, and provides centimeter-level precise positioning anywhere, no matter what the conditions are on the surface. By adding WaveSense’s GPR technology, automakers are enhancing their vehicles with more reliable and accurate ADAS features, including autonomous parking and active lane keeping, safeguarding the automated driving experience.

    GN: Why hasn’t GPR been more broadly adopted in autonomous vehicle applications?

    Tarik Bolat: While existing sensors like lidar and cameras seek to replicate human cognition, GPR is driving a shift in perspective on how to solve the thorniest problems in autonomy by leveraging data that isn’t available to the human eye. That’s a significant leap forward in how to conceive of solving the problem, and one that calls for additional customer education. WaveSense has been busy educating the market about this shift, and as a result is now working with some of the largest automotive companies in the world targeting high-volume deployment of WaveSense’s GPR for ADAS and autonomous features.

    GN: One of the interesting things about GPR is its application to off-road conditions. There are all sorts of military, commercial, agricultural, and even recreational implications there. What do you think will be the first applications of autonomous vehicle technology?

    Tarik Bolat: Industrial applications with limited operating domains are a good bet for first implementations of fully autonomous vehicles: yards, ports, airside operations, etc. And we believe GPR is an essential part of the equation there, given those environments are typically less challenging from a perception perspective (relative to public roads) but arguably more challenging from a localization perspective, since they’re often in settings without surface features, or in highly dynamic settings. That said, our focus is on delivering the highest impact in the largest market. Today, that means taking the ADAS capabilities on high-volume vehicles that are currently considered performance features, on-again, off-again depending on the road conditions, and converting them into features that work all of the time.

    GN: Realistically, how do you think GPR technology will be integrated into existing sensor stacks? How does it complement more broadly deployed sensor technologies?

    Tarik Bolat: Integration of WaveSense is straightforward in that it delivers a robust position that the vehicle uses, akin to a GPS with centimeter-level accuracy nearly all of the time, making vehicle localization a reality even in the most challenging road conditions. What this means for automakers is that it can be the primary positioning sensor going forward and will be complemented by the other, more standard sensors in the stack. And unlike cameras and lidar, which fail under similar circumstances, WaveSense is uncorrelated from any other inputs, driving new levels of robustness, since the likelihood of a common point of failure becomes vanishingly small.
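The interview describes GPR localization as matching what the radar sees underground against a stable, previously mapped signature. Under that description, the core step can be sketched as normalized cross-correlation of a live scan against a stored track; the function and data here are hypothetical simplifications, not WaveSense's actual pipeline:

```python
import numpy as np

def localize(scan, map_track):
    """Find the offset along a pre-mapped track where a live GPR scan
    best matches the stored subsurface signature, via normalized
    cross-correlation. A toy 1-D stand-in for real map matching."""
    scan = (scan - scan.mean()) / scan.std()
    best_offset, best_score = 0, -np.inf
    for off in range(len(map_track) - len(scan) + 1):
        window = map_track[off:off + len(scan)]
        w = (window - window.mean()) / (window.std() + 1e-12)
        score = float(np.dot(scan, w))
        if score > best_score:
            best_offset, best_score = off, score
    return best_offset

rng = np.random.default_rng(0)
track = rng.normal(size=500)                        # stored subsurface signature
live = track[120:180] + 0.05 * rng.normal(size=60)  # noisy rescan of one stretch
assert localize(live, track) == 120
```

Because the subsurface signature is rich and stable, the correlation peak stays sharp even when surface conditions (snow, faded lane markings) would defeat a camera, which is the reliability argument Bolat makes above.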


    Why these doctors are embracing AI to make triage decisions

    There’s been a tremendous amount of development around automation in healthcare, thrusting physicians and medical personnel toward a collision with automation that’s long been familiar in industrial settings. Radiologists in particular are being confronted with a massive transformation as AI becomes more capable of interpreting scans.

    Is this a threat to radiologists? Discussions around the use of artificial intelligence to review CT scans are often framed as robots replacing humans. But instead of running scared of AI, Radiology Partners, the largest physician-owned practice in the US, serving 3,000+ hospitals, is running to embrace it. Specifically, Radiology Partners is adopting Aidoc’s radiology-AI platform to help triage patients so that physicians see and tend to the most urgent cases first.

    So what’s the deal? Are they throwing in the towel? To figure out why Radiology Partners, which, conspicuously, is physician-owned and was valued in 2019 at $4 billion, is placing a big bet on AI, I reached out to the company’s leadership. It turns out the path to embracing the technology wasn’t always clear.

    “Like most new things, the initial reaction to AI in radiology was mixed — a combination of fear, confusion, and excitement,” says Dr. Nina Kottler, RP’s Associate Chief Medical Officer for Clinical Artificial Intelligence. “In part because of excessive hype around the image-recognition capabilities of AI as represented by AI experts and in the popular press, there was a great deal of fear that AI would replace radiologists.”

    That’s a typical fear surrounding automation. So how did the organization overcome it?

    “That fear has mostly resolved now that people have a better understanding of the realistic capabilities of AI,” says Dr. Kottler. “Other emotions included an anxiety about the unknown — the ‘black box of AI’ — and a resistance to an unfamiliar new technology. Some of that resistance was well placed — there are hurdles to creating and deploying a radiology AI algorithm successfully. So, a degree of skepticism for any new technology is healthy and important to ensure there is sufficient oversight in creating and deploying clinical algorithms. The skeptics who missed the point are those that haven’t kept up with the latest literature or with the thoughts of experts in our field. Those who continue to be resistant don’t recognize the value in AI, or think AI will replace physicians. As Curt Langlotz said, AI won’t replace radiologists, but rads who use AI will replace those that don’t.”

    That’s the thrust of the company’s argument for running toward the AI revolution as opposed to away from it.

    “We embrace AI because we believe it adds value to the healthcare system overall, even beyond the specialty of radiology,” Rich Whitney, CEO and Chairman of RP, tells me. “With our experience deploying AI algorithms, we found that AI improves patient outcomes, decreases variability in follow-up and treatment recommendations, provides a back-up to ensure patient follow-up, decreases cost to the hospital, payor and patient, improves diagnostic accuracy, and decreases physician burnout… a pretty impressive list of benefits! And we are only scratching the surface of what is possible. In the future, physicians + AI will be providing precision care with guidance specific to the individual patient, and predictive care to help prevent rather than simply treat disease. We don’t see AI as a threat to radiologists, but rather an opportunity for the profession to add more value and elevate their role in the healthcare system.”

    What’s clear is that this is just the beginning for AI in radiology and beyond.

    “AI has the ability to affect every component of the imaging lifecycle,” says Whitney. “In the future, AI algorithms will redefine the patient experience in scheduling radiology exams, decrease radiation dose and imaging duration, ensure exams don’t need to be repeated by automatically protocolling based on pathology identified on the fly, provide the patient with a patient-friendly, interpretable version of their report incorporating relevant data from various other sources, and help patients schedule relevant follow-up imaging. Physicians will also notice a difference, as a greater breadth of quantified information will be available beyond what is visible to the human eye. Segmentation and quantification of normal anatomy with population comparisons will provide imaging data that is akin to obtaining a blood laboratory panel, pathology will be measured volumetrically and tracked over time, making it easier to determine the effect of treatments on disease, and risk stratification based on imaging findings will help clinicians prevent disease.”

    Adds Dr. Kottler: “With our new partnership with Aidoc and our leading market position, interpreting approximately 1 in 10 imaging exams in the US, we are on the cusp of being able to offer this kind of game-changing value proposition to our partners. In the meantime, we have attacked the issue of resistance and lack of engagement through education. It is essential to not only ensure physicians have accurate information about the capabilities and limitations of AI, but also to provide them a vision for their role in an AI-enabled future. Beyond investing in AI technology and the infrastructure to support it, we are investing in our radiologists to help shape that technology, as they will be front and center in our AI-enabled future. A future that holds great promise for patients and the healthcare system overall.”
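At its core, the triage idea in this story, an AI urgency score reordering the reading worklist so the most critical scans surface first, reduces to a priority queue. A minimal sketch (the exam IDs and scores are hypothetical, and Aidoc's actual platform is of course far richer):

```python
import heapq

def triage(exams):
    """exams: list of (exam_id, ai_urgency_score), higher = more urgent.
    Returns exam IDs ordered most-urgent-first using a max-priority queue
    (heapq is a min-heap, so scores are negated)."""
    heap = [(-score, exam_id) for exam_id, score in exams]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]

# Three hypothetical CT exams with model-assigned urgency scores.
order = triage([("CT-001", 0.12), ("CT-002", 0.97), ("CT-003", 0.55)])
assert order == ["CT-002", "CT-003", "CT-001"]
```

The radiologist still reads every exam; the model only changes the order in which they appear, which is why this kind of deployment is framed as assisting rather than replacing physicians.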


    Why you may be counted by a 3D camera

    The vaccine rollout is being met with lifted COVID-19 restrictions inside buildings and restaurants, but this change presents a new challenge to business owners: managing increased occupancy while still abiding by safety restrictions. Businesses that exceed occupancy limits could face fines, citations, and license suspensions.

    One increasingly prominent solution employs 3D counting and tracking cameras that monitor occupancy, foot traffic, and flow inside brick-and-mortar locations. Regular 2D cameras and traditional counting techniques are not accurate enough. However, depth-sensing 3D cameras can provide real-time updates that increase counting accuracy by an estimated 5% to 8%, according to a spokesperson for one 3D camera company I spoke with, Orbbec. That difference in accuracy is crucial when limiting traffic is a matter of health and safety.

    Of course, privacy considerations and technology adoption issues will tell the tale when it comes to 3D camera technology. I know Orbbec for its technology’s use in robotics, but the people-counting application left me intrigued. To find out more, I reached out to David Chen, Co-Founder and Director of Engineering at Orbbec.
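The occupancy-management problem described above, keeping a running head count against a legal limit, reduces to a simple counter fed by entry and exit events from the door cameras. A minimal sketch (the class and method names are mine, not Orbbec's API; real systems add debouncing and multi-door fusion):

```python
class OccupancyMonitor:
    """Track door-counter events against a capacity limit."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.count = 0

    def entry(self):
        """Record one person entering; False means the limit is exceeded
        and staff should be alerted."""
        self.count += 1
        return self.count <= self.capacity

    def exit(self):
        self.count = max(0, self.count - 1)  # never go negative on missed events

m = OccupancyMonitor(capacity=2)
assert m.entry() and m.entry()   # two people in, still within the limit
assert not m.entry()             # a third entry trips the alert
m.exit()
assert m.count == 2
```

The hard part, as the interview below makes clear, is not this bookkeeping but getting the entry/exit events right, which is where depth sensing earns its accuracy advantage.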

    GN: What are the key differences between standard visual-spectrum 2D cameras and 3D camera technology?

    David Chen: The major difference between standard visual-spectrum 2D cameras and 3D cameras is that 3D cameras provide direct and true distance information, while a 2D camera cannot. In addition, standard visual-spectrum 2D cameras see through visible light across a very large wavelength range, meaning the quality of the image is highly dependent on environmental lighting and illumination. Orbbec’s technology in particular uses an NIR (near-infrared) light source and a narrow-band optical filter. The narrow-band filter helps to minimize the influence of environmental illumination (commonly known as noise), while the active light source, passing through a DoE (diffractive optical element), can form different patterns that can be used in the 3D reconstruction.

    These features allow the 3D cameras to accurately perceive depth, so they can be used as a navigation solution for self-propelled robots in healthcare, food service, warehouses, schools, offices, and other places populated with people or other objects on the move. 3D cameras and sensors are also better suited than 2D for facial recognition at high-security locations like payment kiosks and ATMs.

    GN: Why are depth-sensing cameras better for counting people in real time than 2D cameras? How do accuracy rates compare?

    David Chen: 2D cameras can be easily fooled, leading to inaccurate results. This typically happens in the form of tailgating or piggybacking, when two or more people are grouped and trick the camera into thinking they are one. 3D imaging systems typically include multiple RGB or IR (infrared) cameras that add distance and/or depth information to each pixel in the image. This allows the distance between objects in a scene to be accurately measured, all in real time. With the depth information taken from a 3D camera, programmers can then accurately track people with a simple algorithm, which is especially useful in complex scenarios or locations, like a crowded entrance.

    As brick-and-mortar locations need to keep a precise count of the number of people entering, the difference in accuracy can be critical. 3D cameras for people counting increase accuracy by 5% to 8% compared to 2D technology, and according to our partners and clients, some 3D-based counting systems can reach up to 99% accuracy. In restaurants, 3D cameras can be programmed to count, track, and log not only the number of people in a room but their position, movement, and grouping as well. 3D cameras can also be programmed to scan and recognize objects, like the number of similar items at a grocery checkout or on a cafeteria tray. This can be an invaluable tool in shipping, packaging, manufacturing, and warehousing operations.

    GN: Privacy is obviously a concern in any commercial application. How can deployments meet the needs of the pandemic while maintaining privacy?

    David Chen: Compared with 2D methods, 3D increases privacy because it only shows depth information if needed. Around the world, 3D facial recognition is in daily use at millions of locations. Customers actually prefer 3D facial recognition in commercial and safety applications for its speed and convenience. For people-counting and tracking solutions, 3D technologies (and their accompanying algorithms) don’t rely on traditional 2D images or videos. 3D cameras only record sparse 3D point clouds (points within a 3D image) that can be recognized only by computers, making 3D cameras an even more effective way to protect privacy. All 3D cameras from Orbbec have traditional RGB cameras within, but we provide customers the option to use them or not. We are also able to remove the RGB camera at the customer’s special request for enhanced privacy requirements.

    GN: Where is Orbbec’s technology currently being implemented related to the pandemic? Can you describe the kind of setup involved for end users?

    David Chen: 3D cameras are helping to ensure that safety protocols are being followed in quick-serve establishments, stores, bars, and restaurants. When stationed high in a ceiling, for example, a 3D camera can track occupancy, flow, and clustering, and it can automatically warn managers when social distancing measures are not being followed. 3D cameras also make contactless ordering possible via eye-gaze tracking or “air pointing”: stores are upgrading ordering kiosks with 3D cameras so customers can order and pay without touching anything. Elsewhere, sanitizing robots use SLAM (Simultaneous Localization and Mapping) technology to avoid obstacles; such robots can sanitize entire hospitals without exposing cleaning staff to infection.

    The top five applications for our 3D cameras include people counting; biometric payment, commonly in the form of facial payment technologies; air pointing; maintaining social distancing (i.e., keeping safe working distances on job sites or ensuring proper distancing for people in queues); and robotics, such as delivery robots or sterilization robots found in hospitals.

    GN: How competitive is the depth-sensing space for COVID-19 applications? Is this a sector that’s seeing a lot of competition currently?

    David Chen: Depth-sensing technology is gradually replacing and combining with 2D technologies, and we see many companies seeking to integrate people counting and depth sensing every day. Not only does this technology help manage social distancing, but it helps increase sales and shopper convenience in the retail world. A May 2020 Capgemini Research Institute study on contactless customer experience found that 84% of U.S. consumers expect to increase their use of touchless technologies during the COVID-19 crisis to avoid interactions that require physical contact; of those, 55% expect to use touchless technologies even after the crisis ends. Fifty-two percent of respondents said they prefer facial recognition for authentication at retail stores, banks, airports, and offices during the COVID-19 crisis.

    Orbbec’s partner Moptar has found that people tracking can visualize the movement of all customers in a store and verify the effectiveness of various measures. For example, 3D camera technology allowed an electronics store to verify the effectiveness of in-store advertising and increase the number of visits to the target sales floor by 1.4 times. So, when companies and consumers really begin to understand the advantages of 3D cameras, we expect competition to only grow.
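The people-counting approach Chen describes, using per-pixel depth to separate individuals a 2D camera would merge, can be illustrated with a toy overhead-camera algorithm: threshold the depth map at head height, then count connected blobs. This is a deliberate simplification, not Orbbec's production pipeline, and all thresholds are illustrative:

```python
def count_people(depth, ceiling_h=3.0, min_head_h=1.2, min_blob=2):
    """Count head-height blobs in an overhead depth grid (meters from camera).

    Pixels closer to the camera than (ceiling_h - min_head_h) are treated as
    heads; 4-connected components of at least min_blob pixels each count as
    one person. Two adjacent people at different heights stay separable
    because their depth values differ, which is the 3D advantage over 2D.
    """
    rows, cols = len(depth), len(depth[0])
    mask = [[depth[r][c] < ceiling_h - min_head_h for c in range(cols)]
            for r in range(rows)]
    seen, people = set(), 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and (r, c) not in seen:
                stack, size = [(r, c)], 0   # flood-fill one blob
                seen.add((r, c))
                while stack:
                    y, x = stack.pop()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                           and mask[ny][nx] and (ny, nx) not in seen:
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                if size >= min_blob:
                    people += 1
    return people

# Two separate "heads" (depth 1.3 m) against a 3.0 m floor: counts 2.
grid = [[3.0, 1.3, 1.3, 3.0, 3.0],
        [3.0, 3.0, 3.0, 3.0, 1.3],
        [3.0, 3.0, 3.0, 3.0, 1.3]]
assert count_people(grid) == 2
```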



    Best surveillance drone in 2021

    Security and surveillance is one of the biggest growth areas in the ever-expanding UAV sector. While it’s a relatively recent addition to enterprise toolkits in many industries, the use of drones to provide aerial assessments of activities on the ground is actually a return to form for the technology, which has seen some of its most ambitious development in defense applications.

    The reasons are clear. Aerial vehicles can cover vastly more terrain than slower, clumsier ground-based surveillance systems — which is why they’ve been a key component of military and law enforcement applications for decades. But drones, which are smaller, cheaper, and more efficient than manned aircraft like helicopters, have very quickly democratized access to aerial security and surveillance and opened up the skies to companies of all sizes across sectors.

    The current lineup of security drones reflects the variety of use cases out there, from fixed-wing models that can cover large areas quickly to nimble quadcopters that scan confined perimeters and elaborate structures with a variety of sensing and monitoring equipment. Our picks were selected to provide a practical, best-in-class list, based on feedback from users in relevant industries as well as a representative cross-section of specializations in a fast-growing sector.

    Here are our picks for the best security and surveillance drone systems available.

    Best UAV for remote operation

    Want a surveillance and security drone that was used at the Super Bowl to monitor crowds and has been selected and deployed by the Miami PD? Easy Aerial’s Smart Aerial Monitoring Systems (SAMS & SAMS-T) are mobile, durable, and fully autonomous drone-in-a-box solutions. The system can be operated completely remotely, helping the tech stand out in a crowded field.

    Designed specifically for perimeter security, the system consists of a lightweight Falcon quadcopter with a flight time of about 50 minutes, a durable self-sustaining ground station that charges the Falcon while keeping it safe from the elements, and a proprietary fleet manager and communication system. Systems have been successfully sold in the US, Europe, Israel, Thailand, Japan, and Central Africa.

    View Now at Easy Aerial

    Best “no training required” UAV


    Skeyetech by Azur Drones makes an alluring promise: it requires no pilot training for security guards and no preexisting flying skills. The drone is deployable in less than 30 seconds, and everything from takeoff and landing to path planning is handled automatically. Security teams follow automatic missions or order live missions via Azur’s integrated Video Management System, which provides real-time HD video to security HQ.

    One advantage of completely autonomous operation? It can be utilized for 24/7 deployments. Like many enterprise security drones, a base recharges and protects the drone between missions. The drone is also equipped with a pyrotechnic recovery system and a high-performance geo-caging system for aerial safety.

    View Now at Azur Drones

    Best UAV for thermal sensing

    Percepto’s all-in-one system consists of an ultra-tough drone named Sparrow, which carries 4K RGB and thermal cameras. The drone is housed and recharged in a base station designed to handle the elements. The platform runs on the PerceptoCore software suite for flight management, data storage, report management, permissioned team access, and computer vision and AI-powered applications.The company is targeting industries like oil and gas and mining for applications like site safety, security, and compliance. Many of the subtler differences between drone systems come down to the use cases they’re designed to meet. Percepto’s system, for example, can monitor temperature variations at temperature-sensitive locations, which is a feature that more security-focused systems don’t have.

    View Now at Percepto

    Best multi-modal drone

A winged drone that can take off and land vertically? Inspired by sci-fi Thunderbird aircraft, the Avy Aera VTOL drone isn't explicitly a surveillance tool, but situational awareness certainly falls into its brief. The drone has been used for everything from wildlife protection to medical deliveries. Unlike quadcopters, winged drones have a substantially longer range and are more energy-efficient, but quadcopters are much more nimble. Avy combines the best of both worlds.

The Aera VTOL is self-flying and designed to hold a payload. That could mean deliveries of medical supplies, but it could also mean a sensor suite, making it useful for a number of security and recon applications. Powered by Auterion, a drone management infrastructure, it's capable of third-party integrations, making it a fantastic base to build on.

    View Now at Avy

    Best heavy payload UAV

Have a job of a more delicate nature? Intelligence or law enforcement work, perhaps? Impossible Aerospace is a good place to start. The company makes the US-1, a drone engineered with features explicitly designed for law enforcement as well as operations such as firefighting, disaster response, and critical infrastructure inspection.

Not surprisingly given that mission brief, the drone is capable of heavy-lift applications, with a payload capacity exceeding six pounds. It has an impressive 75-minute flight time to boot, longer than most security quadcopters. The US-1 is currently certified with FLIR and Workswell camera systems but can be configured for customized payloads as well. It's also powered by Auterion, a state-of-the-art drone management infrastructure.

    View Now at Impossible Aero

    Best lidar-equipped UAV

Microdrones has a unique payload proposition: lidar. Three years in the making, the system integrates the company's heavy-lifting MD4-3000 drone with a Riegl miniVUX-1DL lidar sensor and a Sony RX1R II camera for rapidly producing colorized point clouds. Surveying, land development, infrastructure inspection, environmental monitoring, precision agriculture, and public safety are all done more efficiently and effectively with the help of Microdrones solutions. If your aim is to inspect large infrastructure projects, Microdrones offers one of the most powerful toolkits on the market.

View Now at Microdrones

    Best inspection and inventory management UAV

    Kespry’s value proposition is a drop-dead simple drone technology stack that makes inspection and inventory management a snap. The S2’s rated 30 minute flight time makes security applications limited, but the technology stack incorporates complete site modeling via lidar and thermal sensors and offers powerful analytics for industries like insurance. Add to that tablet-based path planning and features like automatic takeoff and landing and this is truly a push-button solution for a variety of aerial inspection applications.

    View Now at Kespry

    Best adapted consumer UAV

    DJI is the world’s leader in civilian drones and aerial imaging, so it makes sense that they’ve extended their dominance into the security and surveillance arena. The DJI M2E Dual is a new comprehensive drone solution created specifically for use in security and monitoring situations. There are a number of mission-specific modular tools and sensors available that fit seamlessly onto DJI’s popular Mavic 2 platform. DJI offers a customized flight control app and restricted zone management. Use cases include wildlife management, wildfire mitigation, and security applications.

    View Now at DJI

Growing drone security market

Drones are being used in a wide variety of security applications, from surveilling perimeters and conducting 24/7 surveillance of huge sectors to responding to alarms and conducting initial assessments. But there are a few things prospective enterprise drone customers need to know.

First among these is that commercial drones, including both piloted and autonomous drones and those used in security capacities, are subject to strict FAA regulations. In the US, drone services companies and enterprise customers have been frustrated by a stringent regulatory environment that restricts many kinds of non-recreational drone use. The FAA has made overtures toward loosening drone use in enterprise settings with its Unmanned Aircraft System (UAS) Integration Pilot Program, but it could be years before there's a practical framework for commercial drone activity over populated areas.

The deployment of small unmanned aircraft is restricted under the FAA's Part 107 rule. Currently, small drones must stay within line of sight of an operator and cannot fly over people without their express consent, factors that make it all but impossible to scale operations anywhere other than niche environments like golf courses. Waivers are available in some cases, but the regulatory environment has severely restricted commercial drone use in the US, including in security.

"The FAA is under a lot of pressure to solve these problems and get ahead of the curve with drones," Tyler Collins, VP of Airspace Services for PrecisionHawk Unmanned Systems Innovation, a commercial UAV company, told ZDNet. "There's a need to expand the current regulatory environment of what's allowed with UAVs, and the question the FAA is working to answer now is how we ensure that we're going to do that safely as hundreds of thousands of these things start to take flight."

Nevertheless, there are many environments and use cases where security drones can safely and legally be deployed. These include infrastructure and pipeline projects in access-controlled sites, mining operations, golf courses and other venues where consent forms can be collected, and a variety of other industries like insurance, disaster response, wildlife management, and fire mitigation, for which the FAA has been inclined to grant waivers or current regulation leaves enough room for pilots to operate.

What to look for when evaluating drones

In making the drone selections above, we considered a number of these use cases, as well as several factors related to who will be deploying the drones and how they will be used. One of the most popular developments in security drones is the so-called drone-in-a-box concept, in which a portable, weather-resistant base and recharging station also operates as a drone garage of sorts. Systems like this tend to be rapidly deployable and offer automated takeoff and landing functionality, making them particularly suitable for sites where on-call 24/7 security is required.

On the other end of the spectrum, fixed-wing drones are capable of covering vast distances and typically have significantly longer flight times per charge than multi-rotor vertical takeoff and landing (VTOL) drones.
These drones sacrifice the agility necessary for many inspections and close-quarters applications, but for wildlife management and fire mitigation, a fixed-wing drone can come in handy. We included the Avy Aera, which adds VTOL capabilities to a fixed-wing drone, because it does a great job of offering the best of both worlds.

Perhaps the most important consideration for prospective enterprise drone customers is the sensor package. Will the UAV operate during daylight hours and cover relatively open terrain? If so, a simple visual-spectrum camera may be sufficient for surveillance needs, and something as simple as a DJI Mavic 2 is recommended. But specialized use cases require more sophisticated equipment. Thermal sensing helps identify body heat from humans and animals even when there's dense ground cover or vegetation, making it essential in wilderness applications or if drones fly after dark. Lidar, with its high-fidelity point clouds, is great for monitoring infrastructure for project progress or structural integrity. Ultrasonic sensors have also been deployed successfully on UAVs.

Another consideration is whether your company plans to treat an investment in security drones as a capital expense or whether a drone-as-a-service model offers more flexibility. In a bid to spur market interest and penetration, most enterprise drone companies offer some DaaS options.

A final consideration is the level of autonomy desired. Many drones come with some form of obstacle avoidance and automated features such as pilot-free takeoff and landing. Some drone systems, such as Kespry's, do away with joysticks altogether and allow pilots to manage flight plans and choose routes entirely via tablet-based interfaces, which extends the reach of drones beyond highly skilled pilots.
One word of warning related to data security: Because unmanned aircraft often conduct video surveillance of sensitive locations, it’s important to protect data via encryption and to ensure that valuable company assets aren’t unwittingly being shared with suppliers or left vulnerable. Any reputable company offering enterprise drones will take security seriously, and the drones on our list are no exception.



    Best telepresence robot 2021

How can remote workers make their presence known in their organization? How can enterprises overcome the limitations of video conferencing and enable a level of communication and collaboration that approaches on-site interaction?

Telepresence robots have been on the scene for the better part of a decade, though as global upheavals reshape work and reorient attitudes toward remote participation, the technology may finally be primed to break out of its niche user base and go mainstream. The timing is fortuitous: The market is now mature enough that consumers have choices when it comes to feature set and price point. As companies downsize physical locations and revamp their policies toward distributed workforces, telepresence offers both technological and collaboration advantages that will appeal to employers and workers alike.

The current telepresence lineup reflects the range of use cases and intended end users out there, including a handful of models designed for specific fields and workflows, as well as others that fit organizations of any size. They were chosen based on a wide survey of this growing product category and by speaking with company representatives and end users about their experiences.

These are our picks for the best telepresence robots out there right now.

    Best budget telepresence

In the battle for low-cost, truly robotic telepresence, OhmniLabs has been giving rival Double a major run for its money. At $2,699, the Ohmni robot weighs just 20 pounds and folds up, meaning you can take it anywhere, yet it still manages all the functionality you need in a telepresence robot. It features wide-angle, low-latency streaming at HD+ resolution and real-time zoom to read whiteboards or see fine details at full UHD 4K resolution.

A secondary dedicated wide-angle navigation camera lets you see around the base of Ohmni while you're driving, which you can do remotely from just about any standard device. The unit features a bright 10.1-inch screen and an integrated Jabra speakerphone for great audio. It doesn't have the automatic rising and lowering of the Double, but the robot can move its head side to side for natural interactions.

OhmniLabs is also thoughtful about who might use the device, which has a dual-band Wi-Fi radio with full 2.4GHz and 5GHz support and optimized background scanning and roaming for large spaces. Full 802.1x support means it should be simple to run on business or school networks.

$2,699 at OhmniLabs

    Best bang for your buck

Where the Double 2 used a tablet for its display, the Double 3 replaces the iPad with a fully integrated solution built around an Nvidia Jetson TX2 GPU, two Intel RealSense depth sensors, two high-resolution cameras, and a beamforming microphone array. In place of the iPad is an integrated screen and a new feature set, including AR overlays, that significantly steps up the Double's functionality.

Some of those features include a new click-to-drive interface, obstacle avoidance, and pan/tilt/zoom video, all of which contribute to a fully immersive remote experience that's still intuitive to use. Perhaps the biggest functionality upgrade is the addition of mixed-reality overlays. In Double's version of mixed reality, virtual 3D objects are added into the video stream to appear as if they're in the real world. Virtual objects include helpful waypoints that make the video feed more informative during navigation.

The Double 3 with charging dock runs $3,999. If you already have a Double 2, you can upgrade your current device with a Double 3 head for $1,999.

    $3,999 at B&H

    Best telepresence for high-end corporate settings and hospitality

With the Ava telepresence robot, remote users easily and safely navigate large workspaces, event spaces, and retail spaces with an enterprise-grade video conferencing system designed to make interacting with people on-site feel natural.

Unlike lower-priced models, the robot features intelligent, autonomous navigation. Remote users simply specify a destination, and Ava automatically moves to the desired location while avoiding obstacles. The technology is slick: The robot uses advanced mapping to learn the local environment and create a realistic map of the area, which enables it to navigate at the push of a button. The obstacle avoidance we're used to seeing on autonomous mobile robots in fields like logistics and fulfillment enables Ava to navigate around people and avoid tumbles down the stairs.

Perhaps Ava's biggest selling point is its form factor. This is one sleek unit, making it ideal for client-facing offices and sectors like hospitality. It's also secure: Embedded enterprise-grade security, including encryption, secure HTTPS management, and password protection, means Ava is well suited to a corporate IT infrastructure.

    View Now at Ava Robotics

    Best desktop video conferencing

Meeting Owl is a 360-degree video and audio conferencing system that automatically focuses on the people speaking in the room. It doesn't move, so it's not a robot by most definitions, but its autonomous functionality makes it an excellent and highly affordable tabletop system for individuals and teams that routinely conference and collaborate remotely.

Eleven inches tall, Meeting Owl uses an eight-microphone array to pick up sound and lock in on the person speaking. Remote viewers on the other end get a panoramic view of all the meeting attendees and a close-up view of the current speaker.

The system comes in original and Pro versions. The Pro version improves on the Meeting Owl's 720p picture and increases audio pickup range from 12 feet to 18 feet, which is especially useful for larger teams or any collaboration utilizing a whiteboard. The system integrates with all the major video conferencing services, so usability is a snap. The Pro version goes for $999.

    $999 at B&H

    Best telepresence for education

Kubi is an inexpensive ($600) robotic docking cradle for tablets that augments the teleconferencing experience you're used to with the addition of movement. During video conferencing, the remote participant can steer the cradle to look around a room. "Kubi" means "neck" in Japanese.

That makes it a particularly useful device for team environments where one participant is remote. The remote worker sits at a laptop or desktop but is able to look around the room to engage with speakers, which the device's developers say enhances the interactive experience. An enhanced audio kit and a secure docking retrofit that keeps tablets fastened to the base make Kubi a good option for educational environments where learners have to beam into larger classroom settings and engage in conversations but won't necessarily have to move around the classroom.

    $600 at Kubi

    Best telepresence for conferences and large events

Anyone in tech or a tech-adjacent industry will be familiar with the sight of telepresence robots roving around conference floors as virtual attendees beam in remotely.

Beam is comfortable in offices and is used by some of the biggest companies in the world, but this robot from Suitable Technologies really shines in conference settings, where it's nimble enough to bounce from keynotes to breakouts to hallway banter.

Beam has four wheels (the Pro version has five for increased stability and maneuverability) and wide-angle navigation cameras. The entire ecosystem was built in-house, which means participants must use Beam's app. The advantage is security, which is best in class: Using industry-standard technology such as TLS/SSL, AES-256, and HMAC-SHA1, Beam encrypts all communication that travels through its system to ensure calls remain private and secure.
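Those acronyms refer to standard cryptographic building blocks rather than anything proprietary: TLS/SSL secures the transport channel, AES-256 encrypts the data, and HMAC-SHA1 provides a tag that lets the receiver detect tampering. As a rough illustration only (Beam's actual implementation is proprietary, and the key and payload below are made up for the example), here is how an HMAC-SHA1 tag can be computed and verified with Python's standard library:

```python
import hmac
import hashlib

# Hypothetical shared secret and media payload, for illustration only.
secret_key = b"example-shared-secret"
payload = b"frame-0001:telepresence-video-chunk"

# Sender computes an HMAC-SHA1 tag over the payload and transmits both.
tag = hmac.new(secret_key, payload, hashlib.sha1).hexdigest()

# Receiver recomputes the tag from the same key and payload, then
# compares in constant time so timing differences leak no information.
expected = hmac.new(secret_key, payload, hashlib.sha1).hexdigest()
assert hmac.compare_digest(tag, expected)
print("payload authenticated")
```

A flipped bit anywhere in the payload would produce a different tag, so the receiver's comparison fails and the tampered data can be discarded.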

    View Now at Beam

    Best telemedicine device for healthcare

    VGo’s parent, Vecna, knows the healthcare sector, so it makes sense that the company has developed a telepresence robot that enables healthcare providers to deliver lower-cost services and improved quality of care virtually. Telemedicine is certainly having a moment as providers figure out ways of reducing in-person visits, but the robot has also been used to enable homebound students to go to school virtually. Using the VGo application on a PC or Mac, an internet-connected person located anywhere connects to a VGo in a distant facility. VGo can be shared by a set of people or dedicated to a single person using standard web accounts and permission settings maintained by the admin.VGo is lightweight, contributing to its excellent battery life, which is best in class at 12 hours. That makes it ideal for clinical environments and hospitals.

    View Now at VGo

Advocating for telepresence

Offices are coming around to telepresence solutions for remote workers, and the recent health crisis has put the transition to distributed workforces into hyperdrive. Teachers and school administrators are now also embracing remote learning, which, in the short term, can quell infection rates but, in the long term, may be a way to maximize limited resources while bringing needed services to students.

Markets and Markets estimated the overall telepresence market will exceed $300 million by 2023. However, that market research doesn't take into account the rapid adoption of remote work due to COVID-19 or the expected long-term effects of the global stay-at-home experiment on attitudes toward remote working. Pivoting out of the pandemic, many companies may embrace a partially distributed workforce, which is a huge opportunity for developers of telepresence and video conferencing systems.

For workers, employers, and IT pros who wish to advocate for telepresence systems, the most important strategy is to tout the collaborative benefits of the technology and to have a plan for implementation. Robots in the workforce carry a longstanding stigma. Coupled with lingering resistance to remote work, existing biases on the part of employers or employees could stop a proposed adoption of telepresence dead in its tracks.

But advocating for telepresence as a way of maximizing collaboration and approximating the productive magic that happens in unstructured interactions in hallways and face-to-face chats can help mitigate concerns. So can explaining that most telepresence systems are ready to go out of the box with intuitive user interfaces. The technology is carefully designed not to need extensive training to use. After all, most humans don't need training to have natural interactions in person.

What to look for in evaluating telepresence robots

The biggest questions to ask are who might use a telepresence solution and in what settings. If you're just looking to enhance video conferencing without spending big bucks or implementing new processes and protocols, solutions like Meeting Owl or Kubi would be the best places to start.

However, for those willing to embrace the dynamic features offered by a mobile robot, consider whether your environment is client-facing. A slick robot like Ava makes a great impression, although it comes at a price. For most SMBs, models from Double or Ohmni are likely to be smart bets. They're relatively inexpensive and provide a seamless user interface. A company can get by with one shared robot to start and easily scale up to meet needs. After all, once one remote employee gets a robot doppelgänger, it's likely others will want one as well.

Other options to consider

The goal of telepresence is to seamlessly integrate remote workers into physical locations. But in 2021, with work totally transformed and record numbers of workers staying remote for the foreseeable future, that use case may have less urgency for office workers. (The use case for telepresence designed for medical professionals, however, has never been clearer.) If all of your colleagues are remote as well, there's not much call for a robot that can roam the halls.
If you’re stuck at home and suffering from epic levels of Zoom fatigue, I’ve had excellent luck with Facebook Portal, which integrates video conferencing with all the functionality of an Alexa-powered home assistant. It’s not technically a robot, but it does bridge the gap between the standard webcam and the fancier telepresence robots on this list. For the time being, and at least until more workers migrate back to offices, this is a very solution for seamless video conferences from home.
