More stories

  • Nikon partnership nods to big ambitions for 4D sensing company

    Aeva
    A company that makes an innovative type of lidar that modulates frequency (by far the most commonly commercialized method is to modulate amplitude) is taking aim at the industrial automation space with a new partnership. Aeva is entering into a strategic partnership with Nikon that will bring micron-level measurement capabilities to the industrial automation and metrology spaces.

    Nikon is a big player in the metrology and industrial automation markets, serving customers that include major global automotive OEMs and aerospace firms. For Aeva, a far newer player, that market position will help bring FM lidar technology to market far faster than going it alone.

    Frequency modulation is not a common approach among commercialized lidar developers. As I’ve written before, companies using AM lidar modulate the amplitude of pulsed waves from a spinning laser array and then calculate the time it takes for the light to bounce back. The sensor uses that information to get a fix on objects in the sensing field, such as other cars or pedestrians.

    To date, over 95 percent of the $1.1 billion “lidar bubble” has been invested in companies pursuing AM sensing. So it must be pretty rock solid, right?

    Not according to companies like Aeva, which, along with a small handful of other firms, modulates the frequency of the laser wave instead of the amplitude. The lasers don’t pulse, as AM lidar does. Instead, small frequency changes are made to a continuous wave. The sensor then measures the Doppler effect, defined as an increase or decrease in the frequency of waves as the source and observer move toward or away from each other. According to advocates of Doppler lidar, conventional AM lidar is highly vulnerable to interference from sunlight and other sensors. It’s also computationally intense and error-prone in the way it deduces the velocity of objects over multiple frames of data.
    AM lidar uses all kinds of computational tricks to determine the velocity of objects, a task made more complex by the high error rate caused by lighting inconsistencies and sun glare. The technology has obvious applications in autonomous driving systems, but Aeva’s ambitions are much broader, targeting the booming industrial automation sector.
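    To make the distinction concrete, here is a minimal numerical sketch of how a frequency-modulated continuous-wave (FMCW) sensor can recover both range and velocity in a single measurement, rather than inferring velocity across frames as AM time-of-flight systems do. All chirp parameters and the target scenario below are illustrative assumptions, not Aeva specifics.

```python
# Toy sketch of FMCW lidar range/velocity recovery from beat frequencies.
# Parameter values are generic illustrations, not any vendor's design.

C = 3.0e8             # speed of light, m/s
WAVELENGTH = 1.55e-6  # laser wavelength common in FMCW lidar, m
BANDWIDTH = 1.0e9     # chirp bandwidth, Hz
T_CHIRP = 10e-6       # chirp duration, s

def fmcw_range_velocity(f_beat_up, f_beat_down):
    """Solve for range and radial velocity from up- and down-chirp beat tones.

    For a triangular chirp, the round-trip delay shifts the beat frequency
    equally on both ramps, while target motion adds a Doppler shift with
    opposite sign on each ramp:
        f_beat_up   = f_range - f_doppler
        f_beat_down = f_range + f_doppler
    """
    f_range = (f_beat_up + f_beat_down) / 2
    f_doppler = (f_beat_down - f_beat_up) / 2
    distance = C * T_CHIRP * f_range / (2 * BANDWIDTH)
    velocity = WAVELENGTH * f_doppler / 2  # positive = approaching
    return distance, velocity

# A target 75 m away, closing at 10 m/s, produces these beat tones:
f_range_true = 2 * BANDWIDTH * 75.0 / (C * T_CHIRP)  # 50 MHz
f_doppler_true = 2 * 10.0 / WAVELENGTH               # ~12.9 MHz
d, v = fmcw_range_velocity(f_range_true - f_doppler_true,
                           f_range_true + f_doppler_true)
print(round(d, 2), round(v, 2))  # 75.0 10.0
```

    The point of the sketch is that velocity falls directly out of the Doppler term in one chirp, with no multi-frame differencing and its attendant error accumulation.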

    “This marks a milestone for Aeva’s expansion strategy beyond autonomous driving applications. We’re excited to work closely with a leader like Nikon in an established market with massive growth potential as we accelerate our expansion into industrial applications, targeting product release in 2025,” says Soroush Salehian, co-founder and CEO at Aeva. “By leveraging our common core LiDAR chip architecture that we’ve already developed for automotive applications, we can bring industry-leading costs to volume scale, which we believe has the potential to upend the growing industrial automation industry.”

    Aeva was founded in 2017 by former Apple engineers Salehian and Mina Rezk, and their big appetite for a variety of sectors is reflected in the multidisciplinary team of engineers and operators the company has onboarded. Areas of possible application include consumer electronics, consumer health, industrial robotics, and security.

    “Our 4D LiDAR-on-chip technology has the capability to provide unparalleled performance through proprietary software on existing hardware,” says Rezk, co-founder and CTO at Aeva. “This solution will achieve measurements with micron-level accuracy and will unlock entirely new applications beyond autonomous driving. Nikon is a world leader when it comes to delivering high-precision industrial solutions of the highest quality, and we’re thrilled to collaborate to bring our unique technology to industrial applications.”

  • Network effect: Strong robot gets 5G upgrade

    Sarcos
    As 5G rollouts quicken, we’re seeing the first hints of the new capabilities the network will bring to robots. The latest example comes by way of a just-announced collaboration between T-Mobile and Sarcos Robotics, which makes robots that augment humans to enhance productivity and safety. The agreement will integrate T-Mobile 5G into the Sarcos Guardian XT, a highly dexterous mobile industrial robot.

    “We are proud to collaborate with T-Mobile, and we’ve made great progress leveraging their 5G network to enable the remote viewing management system,” said Scott Hopper, executive vice president of corporate and business development at Sarcos Robotics. “This is a significant first step, and we’re eager to continue the development toward full 5G wireless connectivity that will unlock a variety of new capabilities, including remote teleoperation, as we prepare for commercial availability.”

    This is part of an evolving story about Sarcos’ plans for teleoperated systems. Last month I covered the rollout of the company’s SenSuit controller garment, which enables users to control the Guardian XT (which looks like a robot version of a human torso and arms) to accomplish precision tasks and perform work in unstructured environments, spaces that could soon include construction and mining sites. The SenSuit incorporates a headset and utilizes natural human movement as control inputs.

    But when it comes to delicate industrial tasks and operating a massively powerful robot remotely, the network is key. That’s where 5G comes in, and it’s a good illustration of where robotics is headed. The new collaboration begins with the integration of 5G to develop a remote viewing system powered by T-Mobile’s high-bandwidth, low-latency 5G network.
    In the next phase, the companies will pursue full 5G wireless integration to allow for seamless and near-instantaneous control of the XT.

    “The Sarcos Guardian XT robot requires a highly reliable, low-latency 5G network that its human operators can count on,” said John Saw, EVP of advanced and emerging technologies at T-Mobile. “5G was designed from the ground up for industrial applications such as this, and we cannot wait to further collaborate with Sarcos as they develop the next big thing in industrial robotics.”

    Sarcos, which we’ve been tracking closely, is on a bit of a tear lately. The company recently announced that it will become publicly listed through a merger with Rotor Acquisition Corp., a publicly traded special purpose acquisition company.

  • Can AI improve your pickup lines?

    Can AI help people enhance their online dating game? Would you trust a computer with your digital pickup lines?

    A team at Medzino, a digital health and wellness clinic, had some fun prompting OpenAI’s GPT-3 language prediction model to generate dating advice for different situations. For good or ill, we’re seeing a lot of these “research” applications of GPT-3, which are decidedly unscientific but do gesture at some of the novel uses of AI waiting for us just down the road. It’s also a good illustration of the severe limitations of a context-based text predictor that scrapes the internet to come up with the most probable answer.

    The team surveyed over 700 singles to see how well the AI did with pickup lines and general dating tactics and decorum. The results were … interesting. One takeaway: Coffee is still king when asking someone out. A study of 5.5 million dating app users confirmed coffee is the most popular first-date option. In the Medzino research, both women and men found lines generated by GPT-3 asking a prospective romantic interest out for coffee to be effective. Even with the safe bet of a house brew, however, the more presumptuous of the lines scored poorly.

    Things get a little more complex when relying on GPT-3 for advice on post-hookup tactics. One of the AI’s suggestions for your post-hookup text was a message indicating how much you want to see your partner again. That suggestion ranked low with surveyed female respondents, with only 25% of women thinking it was a good move. By contrast, 40% of men thought it was a good tactic. OpenAI has been careful to root out bias in its public-facing app, but it’s entirely possible the data set, which is essentially all the text on the internet, makes bias inevitable, as this disparity seems to indicate.

    The possibility that AI will help humans find new entry points into the philosophical questions that have long vexed us has long intrigued futurists.
    Whether soulmates exist may not rise to the level of Cartesian dualism, but GPT-3’s answer is still interesting: “In my opinion, it is a myth that the right person will come along at the right time. You have to create the right situation for you and your partner. It may not be the same situation as your friends and your family, but it is what works for you. If you are happy, then you can be a better parent or friend to others.”

    Notably, only 8% of women and 10% of men agreed with all of that statement, though the majority of respondents agreed with most or some of it.

    And that, in a nutshell, is why these research projects, however entertaining they may be, probably aren’t all that illuminating. GPT-3 plays it right down the middle, prioritizing the most probable answer to a given question. The responses, therefore, should fall into the sweet spot of the bell curve for any survey. In other words, there’s little chance of a surprise when you ask GPT-3 a question. That consistency may be helpful for some, but as in all things, mastery in love ought to come with a flair for the unexpected.

  • Pizza vending machine looking for some dough

    Basil Street
    People love pizza. People love convenience. Boom, a pizza vending machine!

    Not so long ago that would have sounded like a joke, but kiosk concepts are proliferating amid a wave of investment in touch-free food concepts. Basil Street, which raised $10 million last year, is turning to crowdfunding to increase its distribution of Automated Pizza Kitchens (APKs).

    The company, which has received NSF and UL certification, plans to have about 50 APKs placed across the country by fall 2021 and aims to expand to up to 100 APKs by year’s end. Locations targeted for kiosk placement include universities, airports, and other high-traffic areas, further illustrating the growth potential and customer interest surrounding the technology.

    Of course, it has some competition. Piestro, which last year raised money via the crowdfunding site StartEngine, makes a standalone, fully integrated cooking system and dispenser: an automated pizzeria that combines fresh ingredients and custom recipes to build what the company says are high-quality pizzas. The pizza wars (Domino’s vs. Pizza Hut, Papa John’s vs. Round Table) are about to take an odd turn into automated food delivery, a market that could be quite lucrative. After all, over four billion pizzas are served in the U.S. annually, according to IBISWorld. Depending on where you live, you may already be able to get a fresh-tossed salad from a robot named Sally and a really good pull of espresso from one of Cafe X’s robotic baristas.

    “Automated food kiosks are accepted globally as a viable option for meals on the go,” says Deglin Kenealy, CEO of Basil Street. “As the need for contact-free solutions rises in the U.S., we have successfully combined America’s favorite meal with patented technology to deliver restaurant-quality food at the touch of a button. We are excited to take this next step and engage our supporters to become a part of the pizza robotic community through participation in our crowdfunding efforts.
    With over $47 billion in revenue generated in the U.S. pizza industry, the opportunity to transform this market is here.”

    Following its raise last year, Basil Street completed a pilot program of its automated pizza kitchens and received positive reviews from customers. The pilot consisted of five APKs in California, Texas, North Carolina, and Nevada. Patrons had the opportunity to experience the stand-alone kiosks’ signature pies featuring fresh ingredients, cooked via a proprietary process that in approximately three minutes delivers a brick-oven-style pizza experience similar to that of one’s favorite local pizzeria.

    In case you’re wondering, the price of a 10-inch Italian-style, thin-crust pizza will range between $4.95 and $14.95. Basil Street is committed to using only fresh ingredients that are flash-frozen to preserve nutrients, flavor, and freshness before being cooked to order. It takes about three minutes for the pizzas to cook via a patented three-element non-microwave speed oven.

  • It's time to standardize robotic surgery

    The global surgical robotics market is expanding rapidly and may soon be worth $120B. But is the medical training ecosystem ready for the shift to robot-assisted surgeries?

    As more surgeons use robots in the OR, the approach to training on them and using them needs to be standardized. The truth is that not all surgeons are approaching this innovative tech the same way. Standardized best practices are what set surgeons and patients up for success, and they will help make robotic surgery safer in the future. So how do we improve things?

    There are a handful of new challenges the surgical team faces with robots: how to collaborate, how to coordinate (both the physical setup and the tasks), and how to communicate. What’s needed is a concerted effort to make sure all surgeons are using the robots the way they were intended so surgery is efficient and effective. Two medtech startups leading the charge on this are Explorer Surgical, which makes a digital playbook that walks every team member in surgery through the steps to be successful, and Osso VR, which trains surgeons using high-fidelity VR. I recently connected with Justin Barad, CEO and co-founder of Osso VR, and Dr. Alex Langerman, MD, SM, FACS, co-founder of Explorer Surgical, about the future of robot-assisted surgery and the critical need to standardize training.

    GN: What are some of the most difficult things for surgeons to adapt to when transitioning from traditional to robot-assisted surgery?

    Dr. Alex Langerman: Physicians are faced with multiple challenges when transitioning to robot-assisted surgery. Still, the most significant has to do with learning the complexities of integrating a new device into a surgical workflow and overcoming a learning curve to operate as an experienced team.

    Robotic-assisted technology can be straightforward or very complex; there are many little things a clinical team needs to learn when adapting to a new technique: for example, the placement of a robotic arm, the room setup, adjustment of the bed, and any registration needed for the patient and procedure. Aside from the technical setup, the complexities can also include customizing the physician’s interface and preferences for ‘must haves’ in the OR. This preparation minimizes the potential for intraprocedural delays or disruptions. Secondly, as with any new device, training the surgical team is as important as training the physician. It’s the physician’s responsibility to make sure the procedure goes well for the patient and that every team member in the room knows what their specific tasks are regarding the device and its use. A digital playbook with every step related to the procedure, specific to each role in the OR, can bring significant support to ensuring that nothing is overlooked.

    Justin Barad: Wow, this is such a great question! One of the most difficult things when switching to robotic surgery is that the workflow is significantly different. It depends on the robot, but using orthopedics as an example, a typical joint replacement workflow will go: Patient positioning > Dissection/Approach > Bone resection > Implant trialing > Final implantation.

    Now let’s compare that with a robotic workflow: Robot setup and calibration > Patient positioning > Dissection/Approach > Registration > Planning > Robot-assisted bone resection > Trialing and computer-aided assessment > Final implantation.

    The robot-specific steps are significantly different, and even if you are doing the same surgery, the various robots from different manufacturers have their own unique workflows.
    Further complicating things, these skills and concepts are not commonly taught during formal training (residency and fellowship), so practicing surgeons often are coming in at a relatively novice state. Finally, one of the major advantages of robots is that they are powered by software, which means they can be updated and improved over time, but that poses a significant challenge: from one day to the next, the way you perform a surgery can change significantly following an over-the-cloud update. Without a rapid way to train on demand, you can run into a situation where you potentially don’t know how to advance in a given procedure despite familiarity with the system. I’ve even heard reports of people calling “tech support” mid-surgery for this very reason.

    All that being said, robots are an incredibly valuable and powerful tool that makes surgery more consistent and data-driven, which ultimately will drive significant value for global healthcare.

    GN: How much is the current perception of robot-assisted surgery shaped by misconceptions or improper preparation? Why?

    Justin Barad: I think on the patient side the perception of robotics is quite positive, and there is accelerating demand to receive surgical care in a robotic manner if it is available. On the provider side I think there is more of a mixed opinion. Some providers feel that they can operate much faster with more traditional open techniques and view robots as “slowing them down” and being “too complicated.” However, most surgeons I’ve spoken to who have overcome the significant learning curve recognize the value and repeatability of switching to robotic platforms, including the advantage that the sophistication of the technology is improving at an accelerating rate given software updates and hardware investments. One other challenge to the adoption of robotics has to do with the makeup of the surgical team.
    Robotic surgery requires much more coordination from the team, in contrast with traditional techniques, which are more surgeon-driven. There are surgeons who consistently work with the same surgical teams, so training and coordination don’t pose too much of a problem; however, at many hospitals and surgery centers there is a very high level of team variability. One surgeon I spoke to recently told me he operates with 25 different surgical techs over the course of a month. Without the ability to rapidly onboard additional team members, surgeons may be hesitant to constantly be in a situation where team members don’t have a great sense of how to execute the procedure properly.

    Dr. Alex Langerman: When robotic surgery was first introduced, the learning curve for adopting the technology had a substantial effect on the efficiency of the OR. While there has been a significant effort from the industry to show that robot-assisted surgery can be better for the patient, there are misconceptions that it makes surgery easier for the physician. The training of surgical teams typically happens when the device is delivered to the customer. The time spent learning new technology can impact an OR’s efficiency because every team member has a role in the setup and preparation. Some physicians may be hesitant to adopt new technology because they have heard about cases where new technology was introduced but the initial experience was so bad that it was barely used. Those initial experiences can be shaped by the clinical team and their preparation for getting ready to operate. Unless physicians have access to experienced, dedicated robotic nursing and scrub teams, they might never get past the slow end of the “getting ready to operate” learning curve.

    With a digital playbook like Explorer Live, each team member has their responsibilities mapped out before they ever do their first procedure.
    It provides support throughout the entire surgery, helping them be more efficient in the learning process. In addition, companies can provide real-time support and guidance through remote connectivity to someone dedicated to supporting cases or to a peer considered an expert on the device.

    GN: How do communication and coordination change when a robot is in the mix?

    Justin Barad: As I mentioned above, there are significantly more tasks for the surgical team to complete in robotic surgery, especially for console-operated robots, where the surgeon is physically removed from the surgical site and relies on communication for troubleshooting and some repositioning of the equipment. For a seasoned team that works together frequently this can work quite well; however, in highly variable environments such as the one mentioned above, this can make the surgery extremely difficult to pull off without the surgeon, and sometimes the device representative, running around trying to do everything themselves.

    Dr. Alex Langerman: In traditional surgery, the surgeon, assistant, and scrub are all right next to each other, and communication is limited for the rest of the OR. Access to the surgical field can be impacted in robot-assisted surgeries, where the team needs to make room for the device or the surgeon is physically separated from the patient. Sometimes they are working at a console across the room. This inhibits the natural verbal interactions and non-verbal communication that keep a team working smoothly.

    To support efficient and effective communication and coordination in the OR, teams need to be on the same page so that, as the surgery progresses, everyone is working together without disruption to their workflow.
    With a digital platform, the physician can continue through surgery knowing that the entire surgical team is working in tandem with a guide that is specific to each role.

    GN: How can surgeons become better prepared for the transition to robot-assisted procedures, and whose responsibility should that be?

    Justin Barad: The more training the better! The only issue is that surgeons have very little time, and robots are difficult to transport and access for training purposes. In addition, surgeons need to make sure their team is always maintaining its proficiency so that it can set up and execute the procedures on a consistent basis. Virtual reality provides an incredible opportunity to rapidly work your way up the learning curve anytime and anywhere, given its portability. It also serves as a great on-demand training tool for situations where you have new team members coming into the OR and you need to rapidly get them up to speed. This is backed by evidence showing that training with Osso VR improves surgical proficiency anywhere from 230-306% in level 1 randomized peer-reviewed trials. In addition, intraoperative remote guidance technologies are an intriguing tool to further support smooth execution of these cases.

    Dr. Alex Langerman: Anytime a new technology is introduced, it has to demonstrate significant value for the patient for a physician to adopt a new way of doing a procedure and for a hospital to make the financial commitment. When a physician is transitioning to robot-assisted procedures, a learning curve is often associated with the adoption and integration into a new standard of care. In some cases, it can be significant. The responsibility falls on the manufacturer to support any training and education efforts on the proper use of new technology. Preparing the clinical team should also be a high priority.
    New technologies that offer simulated or virtual training have helped to provide physicians with exposure and practice environments, but they can’t replace having experience in the room. Using a platform like Explorer Live can support and facilitate the connection with experts on the technology and key opinion leaders for training, peer-to-peer engagements, and mentorship. Providing these resources can help create a solid foundation of unlimited access to resources that can support a shift in clinical practice. Companies that support ongoing engagement will be vital to increasing the adoption of a new technique and to generating evidence that changing the way physicians operate is in the patient’s best interest.

    GN: What are some of the most effective methods for training surgeons to properly use robotic technology?

    Justin Barad: Training on the robot is probably one of the best ways to learn, but this is also the hardest to coordinate and has some of its own challenges. Robots used for training see so much wear and tear that they often break or don’t work properly, which can make training difficult. They are usually so large that they are very hard and expensive to ship out for training. In addition, there usually isn’t an easy way to objectively assess proficiency when using the real-world equipment. We are seeing more and more that in-person training is being paired with some type of digital advanced training modality like virtual reality. In this way you are able to rapidly work your way up the learning curve on your own time and then use valuable in-person training time as “last mile training” rather than as an introductory experience.

    Dr. Alex Langerman: Physicians will always want to get hands-on experience when considering new technology. Still, often the practical experience doesn’t come until they are getting ready to do their first case.
    There are new training technologies that have made an impact in recent years by providing a simulated experience with haptic feedback. Like Osso VR, augmented and virtual reality platforms have enabled more physicians to have a realistic experience. Explorer Live complements training simulation by providing a platform to try the best practices that are shared. For an OR team, a physician who has a comfort level with a procedure can bring their experience of the procedural steps and support, creating efficiencies in the OR around setup, room configuration, and the use of supplies. Explorer Live also supports the ongoing effort to keep an entire OR up to speed. As their experience grows, physicians may take on more challenging cases with access to education and training content or the ability to remotely connect with peers to help minimize any downtime.

    GN: How can training technologies improve patient outcomes?

    Justin Barad: There is a groundbreaking study from Birkmeyer et al. published in the New England Journal of Medicine in 2013 titled “Surgical Skill and Complication Rates after Bariatric Surgery.” This study asked the question: How does surgical skill affect patient outcomes? What they found was illuminating and intuitive. The more proficient the provider, the better the patient outcome, to the point where the higher-skilled surgeons had a five times lower mortality rate than their lower-skilled counterparts. We are seeing some of this impact firsthand with Osso VR. Some of our users have been able to reduce their operating time by 50 percent (so from about four hours to two hours), which is incredibly compelling, as we know generally that more efficient operations will have better outcomes. We are just starting to scratch the surface of this technology as it broadens its reach to the millions of HCPs who perform procedures around the world and the billions of patients they treat.

    Dr. Alex Langerman: Training technologies that improve communication and coordination among surgical teams and reduce learning curves can significantly impact patient outcomes. Physicians may be more willing to adopt new technology sooner if their initial experience is positive, leading to broader adoption by physicians and providing more patients access to game-changing innovation.

  • GE made an earthworm robot

    Robots that dig underground are getting lots of development attention thanks to DARPA, the Pentagon’s research funding arm. The latest example? An earthworm from GE.

    The GE robot is part of DARPA’s Underminer program. According to the agency: “DARPA has selected three performers to develop technologies and solutions for the Underminer program that would surpass current commercial drilling capabilities. Underminer aims to demonstrate the feasibility of rapidly constructing tactical tunnel networks to provide secure logistics infrastructure to pre-position supplies or resupply troops as they move through an area.”

    DARPA is also rounding third base on its SubT (Subterranean) Challenge, which “seeks novel approaches to rapidly map, navigate, and search underground environments during time-sensitive combat operations or disaster response scenarios.” The final events for the virtual and systems challenges will take place in late September of this year.


    GE’s earthworm robot is bio-inspired, modeled on the wriggly worm, and like its namesake it’s soft, putting it in a class of robots that don’t have hard exterior bodies. The earthworm robot is powered by fluidic muscles and has undergone successful trials through a year-and-a-half-long demonstration period.

    “Through this project, we have truly broken new ground in advancing autonomous and soft robotic designs,” said Deepak Trivedi, a GE researcher leading the project. “By creating a smaller footprint that can navigate extreme turning radiuses, function autonomously, and reliably operate through rugged, extreme environments, we’re opening up a whole new world of potential applications that go well beyond commercially available technologies.”

    The prototype earthworm, which made a 10 cm diameter tunnel, autonomously dug underground at GE’s Niskayuna, NY, research campus, achieving a distance comparable to available trenchless digging machines.

    “The ability of GE’s robot to operate reliably in rugged, extreme environments is, to our knowledge, a first in soft robotic design,” said Trivedi.

    If a military-funded earthworm sounds terrifying, DARPA outlined the need for digging technologies in the run-up to the SubT Challenge: “As underground settings become increasingly relevant to global security and safety, innovative and enhanced technologies have the potential to disruptively and positively impact subterranean military and civilian operations. To explore these possibilities, DARPA has issued a Request for Information (RFI) to augment its understanding of state-of-the-art technologies that could enable future systems to rapidly map and navigate unknown complex subterranean environments to locate objects of interest, e.g., trapped survivors, without putting humans in harm’s way.”

    The earthworm, for its part, has potentially broad utility, including in inspection and repair tasks.

    “In the future, we want to enable deeper, in-situ inspection and repair capabilities that would enable more on-wing inspections and repairs or enable major power generation equipment like gas and steam turbines to be inspected and repaired without removing them from service for lengthy periods of time,” Trivedi said. “The advancements we have made on this project support key developments needed to make that possible.”

  • Work in these sectors? Here's how drones can help your bottom line

    Kespry
    Industrial drones are nothing new, but the growth curve and pace of adoption are pretty astounding. The adoption of industrial drone programs is expected to increase at a 66.8% compound annual growth rate over the next year.
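    For a sense of what a 66.8% compound annual growth rate implies, here is the quick arithmetic; the baseline figure below is a hypothetical illustration, since the article cites only the rate, not a starting fleet size.

```python
# Quick arithmetic behind the headline growth rate: compounding a
# hypothetical baseline forward at 66.8% per year.

def project(base, cagr, years):
    """Compound a baseline value forward at a fixed annual growth rate."""
    return base * (1 + cagr) ** years

BASE_PROGRAMS = 1000  # hypothetical baseline count of drone programs
CAGR = 0.668

print(round(project(BASE_PROGRAMS, CAGR, 1)))  # 1668 after one year
print(round(project(BASE_PROGRAMS, CAGR, 3)))  # 4641 if sustained three years
```

    The point is that a rate like this, if sustained for even a few years, multiplies the installed base several times over, which is why vendors are racing to position themselves now.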

    Industrial drones are being used in major industries like insurance, mining, and aggregates, employing cutting-edge technologies (AI, machine learning, and deep data analytics, to name a few) to drastically reduce the time workers spend gathering and analyzing data while increasing accuracy and positively impacting the bottom line. All of these working together result in a growing field that is impacting industrial work and forever changing how these industries operate on a daily basis globally: smart inspections.

    Krishnan Hariharan, CEO of aerial intelligence company Kespry and a drone industry veteran, believes that there’s still much room for improvement in the drone industry. Kespry, the company he leads, is pioneering smart inspections by leveraging the power of AI, machine learning, and data visualization to conduct inspections that previously had to be done manually. I had the opportunity to connect with Hariharan about the growth of the inspection drone market and the reasons businesses across a variety of sectors might want to add drones to a growing automation technology portfolio.

    GN: What are the advantages to humans of inspection by drones, and how can they help the bottom line?

    Krishnan Hariharan: There are several advantages of autonomous drone inspections, especially considering this method removes the need for manual inspections. The first is worker safety. Instead of manually climbing on stockpiles and roofs to get accurate measurements, workers can simply tap out a flight perimeter on an iPad and let the drone do the work, keeping them out of harm’s way. As an example, Edw. C. Levy, a construction and facilities company, uses Kespry to conduct its site surveys. Without drone technology like Kespry’s, a lot of construction and facilities companies contract with third-party firms to conduct their site surveys. That opens up a great deal of risk exposure because it involves an unknown party operating a vehicle in an area unknown to them.
They could get lost, they could have a vehicle malfunction, they could require assistance from your own team members — all of which could cost you time and money. Kespry eliminates those unknowns and greatly reduces risk, keeping people out of harm’s way.

    In addition, Smart Inspections positively impact the bottom line for businesses by saving both time and money. What used to take hours or days now takes mere minutes. After the drone collects the imagery and data, it is sent to the Kespry Cloud, where any team member can immediately access the information, making data processing faster and more accurate. As an example, State Farm, one of the largest insurers in the United States, leverages Kespry technology to conduct roof inspections for insurance claims. Instead of an employee climbing on the roof, manually taking measurements, and then compiling the data for interpretation, Kespry's drone does it all. A State Farm employee simply navigates the drone over the flight path while it collects imagery, measurements, and data, and sends all the information directly to the Kespry Cloud, where it can immediately be analyzed by anyone, anywhere. As a result, State Farm saves time, and therefore money, and can process insurance claims faster than ever before.

    Finally, because measurement isn't done manually, there's less room for error: Smart Inspection accuracy is approaching near-perfect. With Smart Inspections, businesses can stop focusing on minor tasks like data collection and start focusing on maximizing production efficiency, optimizing labor productivity, and reducing downtime and errors using a single, integrated, and secure data platform from field collection through detailed analytics.

    GN: Smart inspection is emerging as a key use case for drones and AI. What's the current state of the market regarding smart inspection offerings?

    Krishnan Hariharan: Companies are still in the business of performing manual inspections of assets across various industries, including roof inspections for construction and roofing, stockpile inspections for mining and aggregates, and heavy earth moving for construction. Luckily there is a better and much more efficient method: Smart Inspections. With drone technology, cloud-based analytics, and high-resolution imagery, industries such as mining and aggregates, insurance, and industrials can now experience completely touchless surveys and inspections in half the time, while keeping employees safe and keeping organizations compliant with their respective industry standards. The ultimate value proposition for customers using Smart Inspections is to increase revenue and lower operations and maintenance costs.

    Because Smart Inspections can save organizations so much time and money while improving worker safety, adoption is rapid; they will soon be ubiquitous, and slow adopters or hold-outs risk being outpaced by their competition. Kespry's solution extracts business insights from aerial data collection by pairing high-resolution imagery with its real-space situational context, or coordinates. And we believe Kespry is the only organization capable of solving for multiple industries, because of an extensible platform and the investments we've made to improve it over the years.

    GN: Are smart inspection drones sector-agnostic, or will customization be required to leap from industries like pipeline inspection to crop inspection, for example?

    Krishnan Hariharan: First, a drone is a very powerful medium for efficiently collecting a lot of data, which makes it possible for companies to process and analyze that information.
Second, the sensors used on drones continue to improve, making it possible to use a single drone payload for different kinds of missions across multiple industries. However, workflows differ from industry to industry, and how the data is used and processed is nuanced for each. Therefore, if drones can accurately fly and gather the right information over the designated area, a robust software platform (including AI and data analytics) should be smart enough to do the rest. The software needs to be flexible enough to adapt to each industry, gather the correct data, and process the images correctly. Kespry's specialization, its secret sauce, is automating the business workflow in an efficient and scalable fashion, consistently across multiple industries.

    Some level of customization will be required for specific industries because of how the data is analyzed and processed. For example, asset classification for oil inspection is going to be a bit different from asset and inventory management for mapping, mining, and roofing. However, there's also an opportunity to leverage many of the implementations across multiple industry verticals. For example, when Kespry performs defect and anomaly detection, our AI/ML models for cracks, water ponding, rust, and so on can be easily reused and applied consistently.

    GN: How is Kespry innovating in the space, and what's coming down the line?

    Krishnan Hariharan: Kespry always stays on top of emerging technology to better serve its customers. Advancements in AI, ML, and data analytics allow us to transmit data to our customers within minutes of collection so end users can take action quickly. As a leader in the industry for years, Kespry takes key learnings from evolving technology to further improve its platform, including the software, AI models, analytics, and more, to adapt to any environment within the insurance/roofing, mining and aggregates, and industrial spaces.
Currently we are working on expanding compatibility to any drone model so that more customers can access our technology. Additionally, we are exploring the use of edge devices to process images faster, so we can get high-resolution images to customers within moments. Finally, we are working to expand our Smart Inspections offering into the industrial space.

  • in

    A lidar dev kit that plugs-and-plays out of the box

    Seoul Robotics
    A foundational technology in autonomous vehicles, lidar is steadily making its way into a broader range of robots thanks to plummeting prices. Case in point: a company called Seoul Robotics just launched a ready-to-go, plug-and-play lidar perception system that can be deployed out of the box. Lidar, which was cost-prohibitive for most applications as little as five years ago, may be the key to unlocking a world in which robots take to the streets en masse. But for that to happen, developers need not only the hardware but also software designed for easy integration.

    "First and foremost, lidar sensors do not work without sophisticated perception software. The lidar industry is investing billions of dollars on sensors without even considering the software needed to interpret the data into actionable solutions," says HanBin Lee, CEO of Seoul Robotics. "Voyage combines analytics and sensors to bring tangible solutions to market much faster."

    The lidar market is on track to reach more than $3 billion by 2025. But the niche range of applications for lidar, and the autonomous vehicle space in particular, has confined product offerings to specialized use cases. It has largely been left to end users to develop the underlying software architecture needed to deploy lidar sensors. Only recently have we begun to see truly use-agnostic sensor and software suites, a development with big implications for IoT and robotics.

    Seoul Robotics' new offering is called Voyage. It provides centimeter-accurate 3D object detection, tracking, and classification, in addition to volumetric profiling and motion prediction capabilities, regardless of lighting conditions, and can collect and process data from up to four sensors for seamless insights across the sensor coverage zones.
Because Voyage does not capture, display, or store any biometric or otherwise identifying data, it aims to maximize the protection of people's privacy when installed as part of smart city and security systems, signaling one range of potential uses.

    The development kit is equipped with the company's proprietary software SENSR2, lidar sensors, and a computer, for applications that range from retail to smart cities to security. The arrival of these cost-effective, use-agnostic lidar platforms is important because it suggests capability acceleration for IoT and automation technologies, including autonomous mobile robots designed to operate outside of structured and semi-structured environments.
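The multi-sensor aggregation described above can be illustrated with a toy sketch. To be clear, nothing here reflects Seoul Robotics' actual SENSR2 API: the data model and the proximity-based merge rule are invented for demonstration only.

```python
# Hypothetical sketch of fusing object detections from several lidar sensors
# covering one zone: detections of the same class that land within a small
# radius of each other are treated as one physical object.
from dataclasses import dataclass
from math import dist

@dataclass
class Detection:
    x: float      # meters, in a shared world frame
    y: float
    label: str    # e.g. "pedestrian", "vehicle"

def merge_detections(per_sensor: list[list[Detection]],
                     radius: float = 0.5) -> list[Detection]:
    """Combine detections from multiple sensors, collapsing duplicates that
    fall within `radius` meters of an already-merged detection of the same class."""
    merged: list[Detection] = []
    for sensor in per_sensor:
        for d in sensor:
            duplicate = any(
                d.label == m.label and dist((d.x, d.y), (m.x, m.y)) < radius
                for m in merged
            )
            if not duplicate:
                merged.append(d)
    return merged

# Two sensors see the same pedestrian from different angles; one also sees a vehicle.
a = [Detection(1.00, 2.00, "pedestrian")]
b = [Detection(1.02, 2.01, "pedestrian"), Detection(5.0, 5.0, "vehicle")]
print(len(merge_detections([a, b])))  # 2 unique objects
```

A production perception stack would do far more (motion prediction, classification confidence, time synchronization), but the sketch shows why a shared world frame across sensors is what makes "seamless insights across coverage zones" possible.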