More stories

  • LiquidPiston engine now runs on hydrogen gas

    The developer of a line of advanced rotary diesel and multi-fuel internal combustion engines is expanding into the renewable energy game. LiquidPiston’s X-Engine, which we’ve covered previously and which is helping the Air Force develop vertical takeoff and landing concepts, can now run on hydrogen gas. The X-Engine is a rotary engine, but it is distinct from the Wankel engines that have developed something of a poor reputation in commercialized applications. Its successful run on hydrogen demonstrates that renewable fuels are a possible pathway forward for internal combustion power plants, particularly in aviation, where favorable power-to-weight ratios are paramount.

    But for its latest proof of concept the company stayed closer to the ground—much closer. Hydrogen had previously powered the X-Engine only in the lab. To demonstrate its viability in the field, the LiquidPiston team removed a go-kart’s traditional 39-pound engine and replaced it with the 4.5 lb X-Engine, which you can see in the embedded video.

    [embedded content]
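The weight figures above make for a quick back-of-envelope comparison. Both weights come from the article; power output is not given, so this compares mass only, not power-to-weight:

```python
# Back-of-envelope comparison of the go-kart engine swap described above.
STOCK_ENGINE_LB = 39.0   # the go-kart's traditional engine
X_ENGINE_LB = 4.5        # LiquidPiston's X-Engine

weight_saving = STOCK_ENGINE_LB - X_ENGINE_LB
ratio = STOCK_ENGINE_LB / X_ENGINE_LB
print(f"Weight saved: {weight_saving:.1f} lb ({ratio:.1f}x lighter)")
# → Weight saved: 34.5 lb (8.7x lighter)
```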

    LiquidPiston’s pitch for its rotary engine is that gasoline engines are inefficient, diesel engines are big and heavy, and electric batteries weigh a lot compared to the power they produce. LiquidPiston says its engines are 10x smaller and lighter than traditional diesel engines and 30 percent more efficient. Interestingly, that efficiency and power-to-weight ratio makes these engines useful for generating onboard electricity to extend the capabilities of electric vehicles.

    This is particularly useful for concepts like Urban Air Mobility (UAM), sometimes called flying cars. There are ambitious projects to put test vehicles in the sky over major urban centers in the U.S. and Europe within the next few years. UAM combines state-of-the-art propulsion and battery technologies with advances in robotics, machine vision, and AI, and the result could be a fundamental rethinking of how we navigate in and around cities.

    The problem is that electric vehicle technology, while offering advantages like noise reduction, has severe power density limitations compared to combustion engines. That’s where a small internal combustion engine, if it can be made to run cleanly and efficiently, could be a game changer. By generating electricity onboard, such vehicles would need significantly less power storage while extending range and power. Hydrogen is the sixth fuel shown to power the X-Engine, along with gasoline, propane, kerosene, diesel, and Jet A.

  • The undersea robots driving offshore wind generation

    Offshore wind farms are now a reality in the U.S., heralding a new chapter in the country’s sustainable energy ambitions. But new technologies come with new challenges, and for offshore wind generation, inspection is one of the biggest.

    In much the same way that energy companies operate and maintain subsea oil and gas assets, wind farm cables, structural foundations, and all other components of the turbines need continuous monitoring and maintenance. That’s dangerous work for humans, but it’s a job tailor-made for underwater robots and smart AI-powered analytics.

    Given the bright future and growing (albeit still small) footprint of offshore wind in the nation’s power generation infrastructure, I reached out to Harry Turner, a machine learning specialist at Vaarst, a business driving the future of marine robotics, to discuss how robots and machine learning are changing the game for energy production.

    GN: Can you explain some of the challenges of undersea inspection, particularly for offshore wind turbines?

    Harry Turner: To build and maintain wind farm assets, you need a clear understanding of the subsea environment and the condition of your infrastructure. These assets include everything from the structures that turbines sit on to the cabling that carries electricity back to the mainland. At these depths, regular inspections are usually carried out with remotely operated underwater vehicles (ROVs). But the teams that pilot those ROVs and interpret the data they collect work on large vessels, which they live on for anywhere from two weeks to three months. These vessels require large crews to run, use huge quantities of fuel, and are incredibly expensive. Another challenge is capturing and managing the vast quantity of unique data required.
The data volumes involved are huge: think 4K video streamed continuously by more than 10 cameras for one to three months, plus positioning information, multibeam sonar data, and 20-30 other data streams that update up to a hundred times per second. It can also take many hundreds of hours to review and analyse the video collected. Manually interpreting potential risk factors and recognising changes in the seabed has, to date, only been done by placing dozens of people offshore on each vessel.

Finally, accurate underwater measurement is incredibly difficult but also critically important. The original CAD data is often unavailable for subsea assets, and there can be substantial marine growth or damage over time, so to properly maintain and repair them, pinpoint measurement accuracy is key.
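Those video figures can be sized roughly. Camera count and duration come from the interview; the per-camera bitrate is an assumption (a typical figure for compressed 4K video):

```python
# Rough sizing of the survey video volumes described above.
MBIT_PER_SEC = 25        # assumed compressed 4K stream, per camera
CAMERAS = 10
DAYS = 30                # low end of the one-to-three-month range

seconds = DAYS * 24 * 3600
total_bits = MBIT_PER_SEC * 1e6 * CAMERAS * seconds
total_tb = total_bits / 8 / 1e12          # bits -> bytes -> terabytes
print(f"~{total_tb:.0f} TB of video for a {DAYS}-day survey")
# → ~81 TB of video for a 30-day survey
```

And that is before the sonar and telemetry streams are counted.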

    GN: What technologies are currently used in seabed inspection? What are the limits of the current technologies, and how does that impact adoption of green energy solutions?

    Harry Turner: Seabed surveys are carried out from vessels deploying sonars that map the seabed. For closer inspections, the majority of companies use manually operated ROVs collecting video data. Each ROV needs at least two pilots to operate it, and the data collected is then inspected manually by an additional team. The more people you need, the bigger the ships you require. This is not only expensive, but these ships obviously have an environmental impact as well. The marine robotics industry is ripe for innovation, and AI is undoubtedly going to change the landscape by decarbonising marine operations with data-driven automation of marine robotics.

    GN: Please explain how Vaarst uses AI to aid undersea inspection. What’s new and novel about this approach?

    Harry Turner: For some time, AI has been lauded as a game changer for many industries. It has huge potential in a number of applications, but right now, every industry is grappling with how to become more sustainable. It’s in this area that AI may reap the best rewards. The future of marine robotics lies in using 3D computer vision and machine learning to improve efficiency and ease the transition to greener, renewable energy sources and ways of working in offshore environments.

    The use of robotics in the energy industry isn’t new – as far as industries go, they were relatively early adopters – but the use of more advanced technologies, such as simultaneous localisation and mapping (SLAM), machine learning, and increasingly autonomous ROVs, presents an opportunity that too few are seizing. By leveraging such technologies, energy companies can reap significant benefits. There are three key areas in which Vaarst’s technology is making a significant impact.

    Firstly, ROVs are run by pilots who perform all the control tasks.
Vaarst has built a platform that retrofits various layers of autonomy to ROVs, from advanced assistance to autonomous control, supporting the operator in doing the job safely. While an ROV would normally run on a predefined path that the operator would follow, the autonomy technology allows it to take the SLAM information and analyse it on the go, presenting alternative options to the operator for completing its mission while navigating obstacles or course-correcting for currents. The operator can then make informed, one-touch decisions. By enabling autonomy, fewer pilots are needed, and they can be located onshore in a supervisory role, eliminating the need for bigger vessels offshore.

Secondly, Vaarst is innovating in computer vision – that is to say, the way a computer sees. Vision is about giving understanding and context to images. To do this, Vaarst has developed technology that captures 3D point clouds to create accurate images and accompanying measurements in real time, allowing the ROV to orient itself in its environment.

Finally, Vaarst’s machine learning (ML) platform processes video feeds in discrete frames. The platform can recognise key features and anomalies, automatically tag them, and grade them according to confidence levels – enabling human operators to check the work and confirm the findings, which vastly expedites the process. This, again, can be completed onshore, removing people from hazardous environments and reducing vessel sizes for a positive environmental impact.

For example, in the past, pipeline surveys (following the length of a pipeline to check its condition) may have taken hundreds of hours and meant taking additional crew members on survey vessels to carry out this time-consuming, manual work.
Vaarst’s technology reduces not only the time needed to carry out this task but also the need to take these crew members on the vessels at all, enabling the work to be done from onshore.

    GN: Who are Vaarst’s customers (generally or specifically, either is fine)? What’s the pitch to prospective customers in terms of advantages, capability, and cost savings?

    Harry Turner: We work with a number of leading energy suppliers on some of the biggest renewable projects in Europe, from the energy operators themselves through to the many companies operating within the supply chain. All see the huge benefits of future-proofing their data sets for ongoing analysis and of being able to store and maintain their data digitally. The immense cost savings from reduced rework and the large time savings in data collection and analysis are appealing, as are the reduced days at sea, which can afford dramatic cost savings, reduced CO2 emissions, and the removal of humans from hazardous conditions.

    Improved work/life balance is also key. Younger generations are choosing lifestyles that often do not match the demands of a career offshore on vessels, so enabling work to be performed onshore is a key way to attract and retain talent. Equally, the gamification of technology software holds appeal for this generation and takes advantage of their skill sets.

    GN: What lessons are being learned about undersea inspection utilizing your process? What other applications or opportunities might your technology open up?

    Harry Turner: The main lesson being learnt is that there is an effective and practical way to streamline what has until now been a cumbersome and expensive process. The energy sector is ready for innovation, but it needs to permeate the entire maintenance and inspection supply chain. As we continue to build and innovate, there is no doubt that the lessons we learn in marine robotics will drive AI innovation into new and exciting territories.
The vision and autonomy technology we have designed, along with our analysis platforms, can be applied to any robotics, not just undersea ROVs. It can be utilised in any environment, from the deepest sea trenches to hostile environments such as nuclear facilities, in the air using drones, or even in interplanetary exploration!
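The tag-then-confirm flow Turner describes can be sketched roughly as follows. The class names, labels, and threshold are illustrative assumptions, not Vaarst's actual API:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    frame: int          # video frame index
    label: str          # feature or anomaly class from the model
    confidence: float   # model confidence, 0-1

def triage(detections, review_threshold=0.5):
    """Split detections into auto-tagged and needs-closer-review buckets."""
    tagged, review = [], []
    for d in detections:
        (tagged if d.confidence >= review_threshold else review).append(d)
    return tagged, review

# Hypothetical output of one ML pass over an inspection video:
detections = [
    Detection(101, "anode_depletion", 0.92),
    Detection(342, "cable_exposure", 0.47),
    Detection(518, "marine_growth", 0.81),
]
tagged, review = triage(detections)
print([d.label for d in tagged])   # → ['anode_depletion', 'marine_growth']
```

A human reviewer onshore then confirms the high-confidence tags and looks harder at the rest, rather than watching every hour of footage.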

  • Robot soda jerk from the company that brought you Flippy

    Robots are coming for fast food, and Miso Robotics is angling to speed up adoption. The company behind the Flippy fry cook robot is moving into beverages with a robotic beverage dispenser, part of a new partnership to bring another robot to your local burger joint.

    Miso Robotics and Lancer Worldwide, a global beverage dispenser manufacturer, are rolling out what’s described as an intelligence-backed, automated beverage dispenser.

    “Lancer has consistently supplied the market with dependable products for more than 50 years, and there was no question when it came time to decide who to partner with to create an automated beverage dispenser,” said Jake Brewer, chief strategy officer of Miso Robotics.

    Fast food, known more formally as the quick-service restaurant (QSR) industry, has been booming during the pandemic as dine-in options closed or became less popular with diners. Labor shortages, along with rising wages in a strong labor market, have prompted restaurant operators to explore new efficiencies. That need is coinciding with the arrival of automation technologies in fast food. The pizza sector, interestingly, has been a hotbed of automation, with operators like Little Caesars developing automation solutions. In the burger space, Miso has already made strides with its Flippy ROAR, an AI-enhanced robot that can cook several items to perfection.

    Automation, as it turns out, is perfect for the rising trend of drive-thru service. While 60-70% of certain QSR sales came from the drive-thru lane pre-pandemic, as COVID spread, major QSRs saw that figure jump as high as 90%, according to Miso. Delivery and drive-thru orders have increased the need for speed just as demand is booming, and restaurants are having trouble keeping pace. Total average drive-thru times slowed by 29.8 seconds last year, according to Miso. Miso Robotics and Lancer saw the need for beverage automation in the commercial kitchen.

    “Order fulfillment is a major factor in customer satisfaction, and operators can’t afford to have a beverage left behind when a delivery driver or customer visits,” says Brewer. “We are extremely excited to create a product that will not only make the lives of those working in commercial kitchens better, but will be a game changer for the industry as a whole to deliver a world-class customer experience.”

    The new automated beverage dispenser will automatically pour drinks and advance them to be grabbed by restaurant workers for order fulfillment. It integrates with the point-of-sale (POS) system and comes equipped with a guided workflow for employees to ensure accurate order completion synced with driver and customer arrival times. Efficiency, of course, drives the workflow, and the entire process is timed to finish as close as possible to the time the meal order is ready for hand-off to the customer or delivery driver.

    “Lancer has always stood out as a trusted global beverage dispenser manufacturer because we put our customer and partner needs at the forefront of every project we undertake,” said Brad Davis, director of applied technologies for Lancer Worldwide. “The quick-service brands we work with every day are well aware of what their challenges are – they know they need more efficiency, and they know there is new technology out there that could make it possible. Miso Robotics will help us bring all the right pieces together for an innovative design that makes automation, connectivity, and intelligence possible for operators. We are excited to collaborate with them to bring this concept to life, and into the hands of operators around the world.”

    Just more proof that the robots are coming. It all sounds pretty tasty.
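The core of that timing logic is simple to sketch. The function name and figures below are illustrative assumptions, not Miso's or Lancer's actual workflow:

```python
def pour_start_time(meal_ready_at: float, pour_seconds: float) -> float:
    """Return the time (seconds from now) to begin pouring so the drink
    finishes as close as possible to when the meal is ready for hand-off."""
    return max(0.0, meal_ready_at - pour_seconds)

# A drink that takes 12 s to pour, for a meal ready 90 s from now:
print(pour_start_time(meal_ready_at=90.0, pour_seconds=12.0))  # → 78.0
# If the meal is nearly ready, start immediately:
print(pour_start_time(meal_ready_at=5.0, pour_seconds=12.0))   # → 0.0
```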

  • Toyota working on robots for complex situations – like household chores

    Robots have come a long way but still face incredible challenges with tasks and environments that seem run-of-the-mill for humans. That’s what makes the video below from Toyota Research Institute (TRI), which demonstrates robots solving complex tasks in unstructured home environments, so compelling.

    “Our goal is to build robotic capabilities that amplify, not replace, human abilities,” said Max Bajracharya, vice president of robotics at TRI. “Training robots to understand how to operate in home environments poses special challenges because of the diversity and complexity of our homes, where small tasks can add up to big challenges.”

    The new video, whose release coincided with National Selfie Day, is a little silly, but the advances are meaningful. TRI’s roboticists demonstrate that they’ve trained robots to understand and operate in situations that utterly confound most other automation systems, particularly when it comes to recognizing and responding to transparent and reflective surfaces, a major hurdle for machine vision. As a TRI statement explains, since most robots are programmed to react to the objects and geometry in front of them without considering the context of the situation, they are easily fooled by a glass table, a shiny toaster, or a transparent cup.

    [embedded content]

    To date, that’s kept robots largely confined to strict task designations most commonly performed in predictable environments like factories and warehouses. Bringing robots out into the real world — which is happening most dramatically right now in autonomous vehicles — is far more complicated, requiring these complex and potentially dangerous systems to constantly account for and confront the unexpected, which carries massive risks of failure.

    “To overcome this, TRI roboticists developed a novel training method to perceive the 3D geometry of the scene while also detecting objects and surfaces,” continued Bajracharya. “This combination enables researchers to use large amounts of synthetic data to train the system.” Using synthetic data also alleviates the need for time-consuming, expensive, or impractical data collection and labeling.

    The research is part of TRI’s mission to develop active vehicle safety and automated driving technologies, robotics, and other human amplification technology. Veteran roboticist Dr. Gill Pratt leads TRI.
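The appeal of synthetic data is that rendered scenes come with free, perfect labels. TRI's actual pipeline is not described in detail here, so everything in this minimal sketch of the idea is illustrative:

```python
import random

def synthetic_sample(rng):
    """One fake rendered 'scene': ground-truth class and depth are known
    for free because we generated the scene ourselves."""
    cls = rng.choice(["glass_cup", "shiny_toaster", "glass_table"])
    depth_m = round(rng.uniform(0.3, 2.0), 2)   # true distance to the object
    return {"render": f"{cls}.png", "label": cls, "depth_m": depth_m}

rng = random.Random(0)                # seeded for reproducibility
dataset = [synthetic_sample(rng) for _ in range(1000)]
print(len(dataset))                   # 1,000 labeled samples, zero manual labeling
```

Collecting and hand-labeling a thousand real photographs of glassware with accurate depth would be slow and expensive; generating them is a loop.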

  • Merlin teaming up with Dynamic Aviation to bring autonomy to 55-aircraft fleet

    A Boston-based company that brings autonomy to existing fixed-wing aircraft has come out of stealth to announce a new partnership. Merlin Labs is teaming up with Dynamic Aviation, owner of the world’s largest private King Air fleet, to bring autonomy to 55 airplanes. Merlin is also announcing $25 million in funding from GV (formerly Google Ventures). This places Merlin in a small but active pack of companies scrambling to bring autonomy to aviation.

    “We’re proud to partner with Dynamic to begin the process of moving autonomy from the lab to the market,” said Matthew George, Merlin co-founder and CEO. “This deal represents a major commercial milestone as well as Merlin’s commitment to supporting larger and more complex aircraft.”

    Unmanned drones have long been a part of the aerial landscape, but drones aren’t the only kind of self-driving aerial vehicle regulators have been dealing with. It may seem a foregone conclusion that self-driving cars are on the way, but we’ve heard less about autonomous aircraft, as I’ve written. That’s changing. Following recent crashes related to failures in automated systems onboard Boeing’s 737 MAX, you might expect consumer confidence to have eroded significantly. However, a recent ANSYS study found that wasn’t the case: 70% of consumers say they are ready to fly in an autonomous aircraft in their lifetime.

    Merlin’s autonomy platform is aircraft-agnostic, focuses on onboard autonomy rather than remote piloting, and is being integrated into a wide variety of public- and private-sector aircraft. The Dynamic Aviation partnership marks the first public implementation of Merlin’s technology. According to a statement, the performance of King Air aircraft with Merlin’s technology will support a wide range of public- and private-sector missions. The first aircraft from the partnership is currently in flight trials in Mojave.
    “We are honored to partner with Merlin by leveraging this leading-edge technology in an operational platform,” said Michael Stoltzfus, Dynamic Aviation CEO. “We look forward to serving alongside Merlin to create extraordinary value for customers around the world.”

  • Mission critical: How to map a forest

    It’s wildfire season in the West, and protecting the region’s forests is an urgent priority. LiDAR may soon play an important role. That’s because forest mapping is vital for forest management and ecosystem maintenance, yet it has long been difficult to achieve and has mostly been done manually. Drawing on its established LiDAR expertise in other industries, spatial intelligence company Outsight has launched a new solution that uses LiDAR to automatically generate a 360° 3D map of a forest in real time. It also allows on-site operators to collect data on each tree and digitally tag it, day or night.

    “With Outsight, we’re able to complete our surveys of the forest three times faster,” says Philippe Nolet, forestry professor at Université du Québec en Outaouais in Gatineau, Canada, where he conducts forestry monitoring research. “Then, when we’re back in the office, we have a detailed inventory of the plot with all our notes automatically tagged to each tree, saving us a huge amount of time.”

    The LiDAR solution is based on an easy-to-carry box that integrates the dedicated software. Using a tablet, users can collect data on the forest and on each tree, individually and exhaustively – observations like exact position, characteristics, species, and the presence of insects. Back at the office, they have access to a detailed inventory of the plot with all their notes automatically tagged to each tree. GPS geolocation allows operators to overlay maps.

    Equipped with this data, operators can arrive at data-driven forestry management tactics, including in the run-up to wildfire season. Because forestry management budgets are often minuscule compared to the task of managing vast plots of forest land, the ability to make confident decisions helps extend resources. Outsight’s solution is already being used by Hong Kong-based Insight Robotics, a leader in the forestry risk management sector, which uses the LiDAR system to complete aerial surveys to better manage forests and plantations.
The system can also monitor the forest ecosystem’s response to climate change, track biodiversity, deforestation, and dead trees, and potentially help prevent the spread of disease in trees.
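A per-tree inventory record of the kind described might look like the following. The field names and values are illustrative assumptions, not Outsight's actual schema:

```python
# One hypothetical tree record: LiDAR/GPS position plus the operator's
# tablet notes, tagged to the tree automatically back at the office.
tree_record = {
    "tree_id": "plot42-0193",
    "position": {"lat": 45.4765, "lon": -75.7013, "elevation_m": 112.4},
    "species": "Acer saccharum",
    "height_m": 21.3,
    "notes": ["minor bark damage", "no insect presence observed"],
}
print(tree_record["tree_id"], tree_record["species"])
# → plot42-0193 Acer saccharum
```

With every tree carrying a stable ID and position, repeat surveys of the same plot can be diffed to spot new damage or disease.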

  • Robot 'Rosetta stone' will unify the bots

    Robotics, once a fractured field of scrappy tech startups, is starting to come of age. The latest proof is a set of interoperability standards that will allow Autonomous Mobile Robots (AMRs) from leading vendors to integrate and work together in settings like factories, warehouses, and ecommerce fulfillment centers. 

    MassRobotics, an independent non-profit, recently released the MassRobotics Interoperability Standard to allow units from competing automation marques to seamlessly interact. Initial participating vendors include Vecna Robotics, 6 River Systems, Waypoint Robotics, Locus Robotics, Seegrid, MiR, Autoguide Mobile Robots, Third Wave Automation, and Open Robotics Foundation, all leaders in the AMR space.

    “The release of version 1.0 of the MassRobotics Interoperability Standard is a crucial milestone for the industry,” said Daniel Theobald, CEO of Vecna Robotics and co-founder of MassRobotics. “It’s this pre-competitive collaboration and combined thinking from the greatest minds in the field that drive the sector forward exponentially faster than any one vendor could otherwise.”

    In other words, the thinking here is that a rising tide will lift all ships. There’s always been a strain of collaborative collegiality in the industry, which is tight-knit and largely fed on the engineering side by a handful of powerhouse robotics grad programs and storied development labs. Many robotics companies utilize the open source Robot Operating System (ROS), which lives under the stewardship of Open Robotics. But to be sure, a big part of the willingness to collaborate is the surging demand for automation attributed to the unrestrained rise of ecommerce and the corresponding expectation of fast fulfillment. The global AMR and Automated Guided Vehicle (AGV) market is expected to reach $14 billion by 2026, with more than 270 vendors serving the manufacturing and logistics space, according to Logistic IQ. AMR adoption is growing at a CAGR of roughly 45 percent between 2020 and 2026. In that environment, it makes sense for competing vendors to build in interoperability.
With logistics companies expanding and already benefiting from the flexibility afforded by the current spate of AMRs, which can be integrated into existing operations with minimal downtime, a paradigm in which buyers are locked into a specific automation manufacturer limits growth potential across the sector. An interoperable paradigm, by contrast, bolsters the case for automation among potential customers and potentially gives competing automation manufacturers multiple bites at the apple. A warehouse that already uses pick-and-place machines from Brand A can now integrate AMRs from Brand B into the same operation. The integration is also safer, as the systems can share information, something that previously wasn’t possible.

This all came together fairly quickly during the pandemic, corresponding to a major surge in ecommerce demand — the MassRobotics AMR Interoperability Working Group was formed in 2020. The group’s newly issued standard allows robots of different types to share status information and operational conventions, or “rules of the road,” so they can work together more cohesively on a warehouse or factory floor. The standard also enables the creation of operational dashboards so managers can gain insights into fleet productivity and resource utilization.
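A status message under such a standard might look something like this. The field names here are assumptions for illustration, not the published MassRobotics schema:

```python
import json

# One AMR reporting identity, pose, and state in a vendor-neutral envelope,
# so a Brand A dashboard can track a Brand B robot on the same floor.
status = {
    "uuid": "amr-7f3a",
    "manufacturer": "BrandB",
    "timestamp": "2021-06-28T14:03:22Z",
    "location": {"x": 12.4, "y": 3.1, "angle_deg": 90.0, "floor": "dock-1"},
    "operationalState": "navigating",
    "batteryPercent": 76,
}
print(json.dumps(status, indent=2))
```

The point of a shared envelope is that a fleet dashboard only needs to parse one format, regardless of who built each robot.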

    “Functional and practical standards are a critical next step for robotic automation,” said Tom Ryden, executive director of MassRobotics. “Our AMR Interoperability Working Group has diligently focused on the development and testing of these standards, which are needed now, and we fully expect they will evolve as the robotics industry and end-user companies implement them. We encourage buyers to begin looking for the MassRobotics Interoperability Standard compliance badge when making purchasing decisions.”

    In part, the effort was driven by customers operating major shipping and distribution centers, which by necessity have cobbled together automation systems from multiple vendors to cover a range of applications. “Support for this effort has been broad, and we are indebted to numerous companies and individuals for donating so much time and expertise to the development of this standard,” said Theobald. “This important technology lays the groundwork for future innovation and concrete value for customers worldwide.”

    The first use case for the new interoperability standard will be trialed at a FedEx facility where AMRs from Waypoint Robotics, Vecna Robotics, and others will be operating in the same production area.

    “I applaud the Working Group for their efforts and dedication in laying out these first steps toward AMR interoperability. The diversity of the team shows that the industry can work together in finding solutions around this issue,” said Aaron Prather, senior advisor, FedEx. “Our interoperability validation in Memphis later this year will be a great real-world application of Version 1.0’s capabilities and will help provide feedback to the Working Group to demonstrate what future steps may need to be taken to make further improvements.”

  • Surgery digitized: Telesurgery becoming a reality

    There’s been a lot of talk about telesurgery and how far we are from it being a feasible reality. Asensus Surgical CEO Anthony Fernando says this future is possible through 5G, but that infrastructure has to be available everywhere. Moreover, the fundamentals of robotic-assisted surgical practice need to be widespread before we can progress further.

    Companies like Asensus have taken steps to digitize the interface between the surgeon and patient through “performance-guided surgery”—the convergence of surgical technology and augmented intelligence. Augmented intelligence enables a robotic-assisted platform to perceive (computer vision), learn (machine learning), and assist (clinical intelligence) in surgery, providing a true digital surgical assistant for the first time. So what does that mean for telesurgery, which is beginning to emerge as a realistic concept? I connected with Anthony Fernando, CEO and president of Asensus Surgical, to find out.

    GN: What have been the primary hurdles (technological, regulatory, and from a market-readiness standpoint) to practical telesurgery?

    Anthony Fernando: Before we delve into practical telesurgery, let’s first take a look at the current surgical landscape to provide context on the evolution of surgery and how we can achieve telesurgery. Currently, approximately 40% of surgeries are done open (invasive), 50% are done laparoscopically (less invasive, but harder for the surgeon), and 3-5% are done robotically (which yields an unquantified improvement over laparoscopy). So, of the three types of surgery, laparoscopy is the most common, with many trained surgeons and strong patient outcomes. By augmenting laparoscopy with some of the benefits of robotics—effectively, Digital Laparoscopy—surgeons and patients can experience the robotic benefits while continuing to leverage their laparoscopic skills.

    In order to enable telesurgery, the interface between the surgeon and the patient needs to be digitized, and Asensus Surgical’s Senhance system has digitized the interface between the surgeon console and the patient-side robotic manipulators with an ethernet-style communication interface.
In addition, the Senhance system’s Intelligent Surgical Unit™ (ISU™) is the world’s first and only augmented intelligence and machine vision-capable surgical system approved by the FDA for use in robotic-assisted surgery.

    So practical telesurgery can be achieved with current Senhance technology, and 5G will enable it, given its high bandwidth and low latency – but you need true 5G, and it’s not everywhere. In fact, it is only in a fraction of U.S. cities. Once 5G infrastructure is widespread, the conversation about telesurgery will be more realistic, and we will have to overcome regulatory barriers as well. Moreover, the fundamentals of robotic-assisted surgical practice need to be widespread before we can progress further.

    GN: Practically, what will telesurgery look like in its early stages with respect to types of procedures, necessary personnel and infrastructure, etc.? What would the benefits be of widespread telesurgery?

    Anthony Fernando: Surgery today is inconsistent. Surgeons of all skill levels, experience, and training perform similar procedures but have vastly different outcomes. The Journal of Patient Safety estimates that over 400,000 U.S. deaths occur yearly due to avoidable complications arising from medical errors. This accounts for roughly one-sixth of all deaths in the U.S. each year. Technology-assisted surgery vastly reduces avoidable complications by mitigating surgical variability.

    With a broader, more robust 5G network, widespread telesurgery has the potential to unlock advanced surgeon training, enhanced surgical collaboration, increased efficiency, and the ability to provide healthcare to remote and underserved areas. As I see it, telesurgery will initially occur inside a hospital, with one surgeon sitting in one room performing two or three surgeries in different operating rooms in parallel while support staff in each room assist. This could then happen at the hospital-system level and could expand to a city, a state, and finally intercontinentally.
In a similar fashion, a second surgeon or trainee could join remotely and assist as well.

    GN: How will 5G support or enable the rollout of telesurgery technologies?

    Anthony Fernando: True 5G technology is necessary for widespread adoption of telesurgery. It’s the high bandwidth and low latency of 5G, and the attainment of a fast enough internet connection, that will permit telepresence in real time and allow surgeons to effectively work on the patient as if they were in the same room. Large-scale adoption could revolutionize healthcare and surgical treatments around the globe – especially in small hospitals and developing areas that don’t have as much access to top-notch healthcare. Coupled with 5G, robotics provides invaluable assistance, allowing procedures to be performed less invasively, reducing complications and delivery times.

    GN: Robots are being utilized more and more for a growing variety of surgical techniques. Can you explain how current applications, including your company’s technology, are paving the way for practical telesurgery?

    Anthony Fernando: Next-level technology completely changes the idea of what’s possible. As technology enhances and changes the world we live in, we’re able to make inroads into a new era of surgery reimagined. Moving beyond inefficiency, unpredictability, and outdated technology in the operating room is a new surgical standard. The digital interface between the surgeon and patient is the key to unlocking telesurgery. Asensus is successfully digitizing surgery and building machine learning algorithms and AI that can enable the future of surgery.
For instance, the ISU unlocks the power of computer learning to recognize anatomy, leverages image analytics for the first 3D virtual measurement capability in surgery, and harnesses the power of a virtual assistant to facilitate certain procedures in tandem with the surgeon. The ISU also enables computer vision capabilities for the first time in surgery to make for a smarter surgical decision process. This means the technology records an image and applies intelligent algorithms to enhance the surgeon’s ability to meaningfully use information from the surgical field in real time.

    Asensus also offers a telemonitoring platform called Senhance Connect that brings surgical peers together, a feature that became increasingly important during the COVID-19 pandemic. Senhance Connect allows surgical peers from around the world to remotely observe a surgical case being conducted on Asensus’ Senhance Surgical System via cameras and to communicate with an expert surgeon about the most advantageous practices. For example, a surgeon can benefit from the expertise of a colleague who specializes in a certain operation.

    GN: Augmenting human capabilities is an important function of surgical robotics. Do you expect human surgeons to be phased out for some types of procedures in the future? What kind of timescale are we talking about?

    Anthony Fernando: If you think about good surgery, it’s an art. So digital robotics only enhances and elevates a surgeon’s abilities; it by no means replaces the surgeon. But technology should not just be for the elite. Robotics, AI, and machine learning are also bridging lapses in technical skill and creating an “equal playing field” of surgical expertise across hospital facilities. By providing wider access to expert surgeons via telesurgery, hospitals can leverage AI-acquired surgeon data to improve ongoing training, providing greater consistency, safety, and patient satisfaction.
Our goal is to create a digital twin of a surgeon that can always work alongside the surgeon, with the intent of taking the best knowledge and best practices from everywhere and enabling them to be leveraged anywhere.
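Fernando's latency point invites a back-of-envelope check. Assuming a ~200 ms round-trip budget before teleoperation feels laggy (a commonly cited figure, not one from the interview) and rough, assumed values for radio and processing delay:

```python
ROUND_TRIP_BUDGET_MS = 200   # assumed threshold before control feels laggy
RADIO_MS = 10                # assumed 5G air-interface round trip
PROCESSING_MS = 50           # assumed video encode/decode + control loop
FIBRE_MS_PER_100KM = 1       # ~0.5 ms one way per 100 km in fibre, doubled

def max_distance_km():
    """Distance budget left for the fibre haul between surgeon and patient."""
    remaining_ms = ROUND_TRIP_BUDGET_MS - RADIO_MS - PROCESSING_MS
    return remaining_ms / FIBRE_MS_PER_100KM * 100

print(f"~{max_distance_km():.0f} km surgeon-to-patient budget")  # → ~14000 km
```

Under these assumptions the distance budget is intercontinental, which squares with Fernando's expectation that telesurgery could eventually expand that far.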