More stories


    Automation nation: 9 robotics predictions for 2021

    The pandemic has offered challenges and a major opportunity to robotics firms in the logistics and grocery spaces. Unforeseen stresses to supply chains and runs on products have emphasized the need for greater supply chain efficiencies. Workforce constraints due to safety protocols and illness have also hammered various sectors.
    The lessons of 2020 can help us read the tea leaves for the priorities and trends in the robotics sector in 2021. Predictions are always to be taken with a grain of salt, but this year’s batch comes with the benefit of a lot of hindsight and hand-wringing.
    To explore the year in robotics, I connected with Tim Rowland, CEO of Badger Technologies. Badger is a product division of Jabil, a manufacturing solutions provider that delivers comprehensive design, manufacturing, supply chain, and product management services; Badger itself focuses on retail automation solutions that feature autonomous robots. That makes Rowland well positioned to offer a vantage on 2021 in some of the sectors most impacted by COVID-19 but also most primed for evolution.
    1) Expect robots to pinpoint exact product locations
    “Autonomous robots took on more expansive roles in stores and warehouses during the pandemic,” says Rowland, “which is expected to gain momentum in 2021. Data-collecting robots shared real-time inventory updates and accurate product location data with mobile shopping apps, online order pickers and curbside pickup services along with in-store shoppers and employees.”
    That’s especially key in large retail environments, with hundreds of thousands of items, where the ability to pinpoint products is a major productivity booster. Walmart recently cut its contract with robotic shelf scanning company Bossa Nova, but Rowland believes the future is bright for the technology category.
    2) Multipurpose robots measure up as mavens of multitasking

    Heretofore, automation solutions have largely been task-specific. That could be a thing of the past, according to Rowland.
    “Autonomous robots can easily handle different duties, often referred to as ‘payloads,’ which are programmed to address varying requirements, including, but not limited to, inventory management, hazard detection, security checks, and surface disinfection. In the future, retailers will have increased options for mixing and matching automated workflows to meet specific operational needs.”
    3) COVID-19 proved there’s a limited supply in the world
    Remember running out of toilet paper? So do retailers and manufacturers, and it was a major wake-up call.
    “Major restock runs and a surge in online shopping quickly depleted shelf inventory, especially during the run on paper products early in the pandemic,” says Rowland. 
    That could be a new reality throughout the 2020s, with a warming planet and increasing unpredictability. So what’s the solution? One thing is clear: automation will play a huge role.
    “Ensuring adequate supplies of ‘high flyers’ requires integration of shelf-level data with backroom, warehouse and supply chain information. To avoid visible holes on shelves, retailers will increase the use of robotics to correlate inventory data with POS, warehouse management and order management systems.”
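Rowland’s point about correlating shelf-level and POS data can be made concrete with a small sketch. The field names, data shapes, and threshold below are hypothetical illustrations, not any vendor’s actual schema; real systems join robot scan events against live warehouse and order-management feeds.

```python
# Toy sketch: flag likely out-of-stocks by joining shelf-scan counts
# against point-of-sale velocity. Field names and the threshold are
# hypothetical, not any vendor's actual schema.

def flag_restock(shelf_scans, pos_sales, min_facings=2):
    """Return SKUs with low shelf counts despite ongoing sales."""
    flagged = []
    for sku, facings in shelf_scans.items():
        if facings < min_facings and pos_sales.get(sku, 0) > 0:
            flagged.append(sku)
    return sorted(flagged)

shelf_scans = {"paper-towels": 1, "cereal": 8, "soap": 0}   # robot scan
pos_sales = {"paper-towels": 40, "cereal": 12, "soap": 25}  # units sold today
print(flag_restock(shelf_scans, pos_sales))  # → ['paper-towels', 'soap']
```

A production pipeline would feed flags like these into the order-management system Rowland mentions, rather than printing them.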
    4) Independent, regional grocers adopt robots with increased agility and speed
    I’ve been writing about the migration of robots to mid-sized and even small businesses for the better part of a decade, and grocery stores might be the crowning case in point. As I wrote earlier this year, even bodegas are now able to use automation technologies that debuted just a couple of years ago in major chains.
    “Most grocers test any technology they bring into their stores before widespread deployments,” says Rowland. “In 2020, independents like Woodman’s Markets increased their deployments of in-store robots while larger entities like Walmart scaled back. Not only do independent, regional grocers have greater freedom and agility to trial new technologies, they’re also keen to automate wherever possible.”
    5) Issues with labor shortages will continue
    It may be hard to remember this far back, but we entered 2020 with a tight labor market. Even with all the COVID-related layoffs, it’s not clear that’s going away.
    “As the labor market continues to tighten, retailers increasingly will look for ways to automate previously manual, mundane tasks,” says Rowland, perhaps touching a nerve for automation alarmists. “Multipurpose robots shine when it comes to oft-dreaded shelf scans, performing scans in hours instead of days and with up to 95% accuracy. Let’s face it, no employee wants to perform mind-numbing, tedious tasks, so let the robots do it—they’ll never get bored or distracted.”
    6) Data-driven insights necessitate seamless integration 
    “The more comfortable retailers become with autonomous robots roaming store aisles safely alongside shoppers and employees, the faster they can redirect attention to what matters most—the ability to elevate customer satisfaction while increasing store revenue and profitability.”
    What we’re talking about here is shelf-scanning robots. This past year saw Bossa Nova lose a contract with Walmart that had been the sector’s favorite case study, but that shouldn’t tarnish the broader outlook or the need for the technology.
    “To accomplish this, however, retailers need to connect the dots between shelf-scanning results and corresponding department, category, vendor, sell-through and pricing data.”
    7) Sensors, drones and fixed cameras become the ideal robot accessory
    Can somebody say technology convergence? The story of robotics over the last decade has been one of falling sensor prices and rapid technology convergence. Will that change in the year ahead? In a word: Nope.
    “In 2021, it’s highly likely that autonomous robots will be teamed with other in-store technologies, such as fixed cameras, drones and all types of sensors, to enhance data-collection capabilities. It’s all about attaining real-time visibility, especially in those hard-to-reach, hard-to-see places. A bevy of sensors will enhance a robot’s ‘sensing’ abilities, including detection of certain gases emitted when foods lose freshness or are stored improperly.”
    8) 5G will have a profound impact
    “Investments—of substantial size—in the latest wireless technology are going to pay off in many aspects of retail as it promises to unleash new experiences. Mega-trending 5G is poised to deliver unheralded speed and bandwidth to ensure shared visibility of retail data without impacting other in-store network operations. Expect 5G to go from pilot to production phase throughout the retail industry.”
    9) The M&A scene will remain active
    There’s an interesting financial story behind this technology story: grocery chains are rapidly merging to compete with Amazon and Walmart. That’s creating a pressing need for data, which automation solutions can provide in a scalable way.
    “An increased frequency of mergers and acquisitions in the grocery sector will remain in the forecast. This is exemplified by the recent news of Ahold Delhaize acquiring FreshDirect along with HelloFresh’s acquisition of rival Factor75. Meanwhile, Food Lion secured FTC clearance to acquire 62 Bi-Lo/Harveys supermarket stores. As M&A activity continues, so does the need to integrate disparate data from different legacy systems to ensure end-to-end operational visibility.”


    Expressive robotics is breathing “life” into machines

    As a pedestrian, you’re used to interacting with traffic, but you often rely on human interaction – like hand gestures, eye contact and body language – to navigate it safely. But as driverless vehicles edge closer to reality on public roads, humans are faced with something that’s still foreign to the general population: reading the intentions of robots and communicating their own intentions to machines. 
    To better understand the communication between humans and automated robots, and ultimately build trust between pedestrians and driverless vehicles, Motional (a driverless vehicle company created by Hyundai Motor Group and Aptiv) is adopting principles from a budding field known as Expressive Robotics — the study of how robots can respond to a scenario in the same way that we expect a person might.  
    Paul Schmitt, Motional’s Chief Engineer, and his team are researching the biological aspects of how humans interact with vehicles to make riders more comfortable with self-driving cars. By using VR, as well as taking cues from Disney’s Principles of Animation, the team’s goal is to make this human-robot interaction simple, familiar, and intuitive. 
    I reached out to Schmitt to help explain this emerging field and how our very human tendencies can help autonomous vehicles operate more safely and with less awkwardness for the pedestrians that interact with them.
    Can you explain what expressive robotics is as a field and why it’s important for human-machine interactions in an increasingly automated world?
    Sure, imagine this. As you’re about to cross the street, you turn to see an approaching vehicle. There is something different about this vehicle. Maybe it isn’t driving like other vehicles. Maybe it isn’t behaving like other vehicles. And as it approaches, there is something else. The inside seems hollow somehow. Wait. The car is empty. There isn’t anyone behind the wheel. How would you behave? What would you do? How would you feel?
    For pedestrians to feel comfortable interacting with driverless cars, the car’s behavior should align with their expectations. In other words, the car would signal its actions and intentions in ways that people intuitively understand.

    To facilitate and optimize this human-car communication, Motional is developing and trialing principles from a budding field known as Expressive Robotics — or more simply put, the study and practice of humanizing robots.
    As pedestrians, we are used to interacting with traffic, but many of us rely on human signals and interactions — such as hand gestures, eye contact, body language, or the typical behavior of a human-driven car — to navigate roads safely. Naturally, those human cues don’t exist in a driverless car. A self-driving car would safely pull up to a crosswalk, but the driver’s seat would be empty. You wouldn’t be able to make eye contact before crossing the street.
    Now take this scenario and multiply it by the number of pedestrians a single driverless vehicle may encounter in a day, a week or a year. Then multiply that again by the dozens, then hundreds, then thousands — then millions — of driverless vehicles that are expected to be on the roads in our lifetime. It’s an immense number of interactions for which pedestrians don’t yet have a clear set of internalized instructions, and that’s ultimately because we’re asking humans to do something that’s still foreign: communicate with robots.
    At Motional, we’re working to solve this. Our goal is to make driverless vehicles a safe, reliable, and accessible reality, and central to our mission is ensuring consumers understand how our vehicles fit into their communities, and feel safe in their presence. Ultimately, we’re working to make this human-robot interaction simple, familiar, and intuitive.
    How is Motional drawing on the field to make human/car interactions more seamless?
    Over the last year, Motional has researched real pedestrians in thousands of interactions with autonomous vehicles using VR. During testing, participants are immersed in a realistic world, an intersection with both human-driven and driverless cars on the road. The pedestrian is simply asked to cross the street when comfortable, then this is repeated for a variety of scenarios.  Each scenario is crafted from Expressive Robotics principles and consists of a vehicle expressing its intent to stop (or not) via expressive (human inspired) motions, lights, and/or sounds. Then we ask participants to rate how they felt about each encounter. 
    Our initial results suggest that pedestrians respond positively when driverless vehicles exhibit expressive braking and early stopping — and we are researching the effectiveness of these and other methods over repeated exposure. We plan to take the most promising signals — the signals that allow a driverless car to most clearly communicate with a pedestrian — and incorporate them into our future Motional vehicle designs.
    We did find a few surprises in the research. Conservative pedestrians outnumbered aggressive ones; since the participants were drawn from an urban environment, we expected more aggressive behavior. For some pedestrians, vehicle distance seems to be the key decision factor. For others, it is vehicle speed. And for a smaller population, it is driver-awareness interaction. But perhaps most surprising is how many participants didn’t notice the driver’s absence. While participants weren’t told that the study involved AVs, the cues were clear (or so we felt): on the outside, the vehicle had the sensor-laden look of an AV, and on the inside, the driver’s seat was empty.
    How did Disney’s Principles of Animation become a resource and how has it been used in your efforts?
    To help us achieve the goal of simple, familiar, and intuitive communication of AV intent, we found inspiration in, believe it or not, computer animation. Specifically, we found a wellspring within the Disney Principles of Animation. These were first described in The Illusion of Life and are used by Pixar and countless other animation studios to breathe life into inanimate wireframe models. Richard Williams’s book, The Animator’s Survival Kit, was also a source of inspiration. We were surprised at how directly many of the principles could be applied to robotics, even to robots without a recognizable human form (e.g., torso, head, arms).
    We utilized a few of these principles within the virtual reality environment in crafting several of the expressive autonomous vehicle behaviors. Examples include expressive braking, expressive stance (“nose dip” and “tail raise”), expressive engine sounds, and expressive braking sounds.
    How do you envision vehicles organically communicating with pedestrians and what will it take to get there?
    Our vision is that autonomous vehicles will interact with pedestrians and all other humans in ways that are simple, familiar, and intuitive.  
    We think a successful approach is to draw cues from everyday objects and integrate them into AVs. The vehicles will move not robotically but in ways that are familiar and predictable. Rather than ding, their sounds will be directional and illustrative.
    Expressive robotics for driverless vehicles, then, must be an intersection of these two paradigms: designing technology that communicates its awareness, perception, and intended action specifically for human passengers and pedestrians.
    As Laura Major, Motional’s CTO, notes in her book, What to Expect When You’re Expecting Robots, robots of the future will be social machines.  For vehicles to work in our communities, the vehicles must work with humans.
    As we plan for a future where driverless vehicles are part of our everyday lives, this research is critical in helping the technology enter our lives comfortably, safely, and efficiently. 
    This vision is bigger than one company. Indeed, it is bigger than our industry. More research is needed from our academic community in this area. This is why we are publishing our research results to share our findings, raise awareness, and spark more ideas. 
    We’ve been able to move the driverless industry from the realm of science fiction to where we are today, and to continue on our path to a driverless reality, we must continue to include our pedestrians, passengers, and communities on that journey.


    Autonomous air cargo company delivers COVID-19 vaccine

    Autonomous air cargo company Xwing just announced that it successfully completed a delivery of COVID-19 vaccines in the US. It’s a big milestone and a splashy headline for the company and for the burgeoning autonomous cargo sector. 
    The FAA granted approval for the operation in November, and the flights do have a safety pilot onboard. Xwing’s autonomous flight stack integrates with existing aircraft to enable regional pilotless flights overseen by human operators. It’s part of a growing effort to make flight more accessible and to reduce training burdens on pilots, who currently have to be trained specifically on every kind of plane they fly.
    “The US has reached a huge milestone in developing the vaccines, but now the challenge remains in broadly distributing those doses across the country quickly and efficiently. At Xwing, we’re honored to have the opportunity to be a part of this operation by using our cargo planes to deliver thousands of vaccines to some of the locations that need it most.”
    The delivery to Holbrook, AZ, is part of a larger nationwide logistics operation to bring Pfizer’s COVID-19 vaccines to the Navajo Nation, the largest Indian reservation in the United States and one of the hardest hit by the virus. It’s a tricky problem because Pfizer’s vaccine has a limited shelf life, which makes express air cargo operations critical.
    Earlier this year, Xwing successfully completed non-commercial demonstrations of fully autonomous air cargo flights.
    The Navajo Nation has been devastated by the coronavirus pandemic, with the region recording some of the highest infection rates in the country.


    Forecast 2021: Artificial Intelligence during COVID and beyond

    Artificial intelligence has been a sector to watch during the pandemic. The enterprise has been seeking new routes to efficiency, organizations that furloughed or laid off workers are facing bandwidth crunches, and some companies may have used the hardships of the past year to clean house with the intent of hiring fresh using new technology-driven strategies.
    AI has been touted as a possible solution to all of these problems. It’s also a tool that can help unlock the potential of adjacent technologies and, at its most evolved, potentially reshape how business is done. 
    What follows are predictions informed by a panel of experts who have been watching the space for years. The top line here is that machine learning and AI are finally beginning to penetrate mainstream organizations and will become increasingly commonplace and, in some deployments, crucial in 2021 as the pandemic winds down and businesses reemerge with new outlooks and structures.

    Enterprise AI predictions for 2021
    Hiring and managing employees
    Whether a protracted recession is in the offing or not, there has been a lot of housecleaning during the pandemic and there’s bound to be renewed focus on hiring in 2021. I caught up with Kamal Ahluwalia, President of AI talent management firm Eightfold AI, for some reflections on the changing dynamic of work and the new pressures and opportunities when it comes to personnel.
    “The future of work is diverse and remote work will accelerate the trend,” says Ahluwalia. “As many have noted, the workplace will never be the same after COVID-19 and a significant portion of the workforce may continue to work remotely. Beyond work, the impact on hiring itself can’t be overstated: talent can be recruited from anywhere and everywhere, opening up new possibilities across the board.”
    As you can imagine, Ahluwalia sees AI playing a critical role in those hiring initiatives.

    “AI will be critical to this, enabling the processing of a large volume of candidates and giving everyone a fair chance of getting hired. Geographic boundaries and even sovereign borders will no longer matter when searching for the right talent; you just need the right tools to do so. With new AI tools, people can be hired anywhere to do the job everywhere.”
    That growing geographical pool of talent will potentially be accompanied by a new appreciation for what workers can bring to the table beyond their standard resume skills.
    “Companies will learn how to use AI to understand an individual’s capabilities (not just skills) and start hiring for potential,” says Ahluwalia. “A college degree is still important but perhaps not as much as it once was. In 2021, companies will increasingly look at job requirements and ask what is really needed to do the work. Proven and potential capability will be paramount, and equally so in many cases. This will open up new opportunities for many people, including military veterans and professionals switching industries.”
    Ahluwalia sees the trend extending to government, as well: “2021 could be the year that the public sector takes AI applications seriously – implementing newer technology to shift away from antiquated systems to address talent acquisition, employee experience, and D&I.”
    AI isn’t getting easier, which will lead to a new role in the enterprise
    Ryohei Fujimaki, Ph.D., Founder & CEO of dotData, a leader in AutoML 2.0 software to help accelerate AI/ML development, points to a false promise in some user-friendly enterprise AI offerings and suggests a more satisfying solution.
    “As the need for additional AI applications grows, businesses will need to invest in technologies that help them accelerate and democratize the data science process. This has given rise to what some call “no-code” AI. Many of these “no-code” platforms are workflow-driven, visual drag-and-drop tools (a.k.a. visual programming) that claim to help make AI easier for non-technical people.”
    But if it seems too good to be true, that’s because it is, explains Fujimaki.
    “The problem is that although simple workflows are easy to build and conceptualize, the reality is that most AI/ML models require large, very complex, and sophisticated workflows that quickly become unwieldy and create a whole new set of challenges of their own. In fact, the vast majority of the work that data scientists must perform is often associated with the tasks that precede the selection and optimization of ML models such as feature engineering — the heart of data science. This means that organizations will need to look for new, more sophisticated AutoML 2.0 platforms that enable true no-code end-to-end automation, from automatically creating and evaluating thousands of features (AI-based feature engineering) to the operationalization of ML and AI models — and all the steps in between. In 2021 we will see the rise of AutoML 2.0 platforms that take “no-code” to the next level and finally begin to deliver on the promise of “one-click” no-code development with an environment that automates 100% of the workflow.”
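To make the feature-engineering point concrete, here is a deliberately tiny sketch of the core loop such platforms automate: generating candidate features from raw columns and ranking them by correlation with a target. All names and data are illustrative assumptions; dotData’s actual platform explores vastly larger search spaces across relational tables.

```python
# Minimal sketch of automated feature engineering: derive candidate
# features from raw columns, then rank all candidates by absolute
# correlation with the target. Real AutoML 2.0 platforms evaluate
# thousands of candidates; this shows only the core idea.
from itertools import combinations

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def generate_and_rank(columns, target):
    """columns: dict name -> values; returns feature names ranked by |r|."""
    candidates = dict(columns)  # raw columns compete as candidates too
    for (na, a), (nb, b) in combinations(columns.items(), 2):
        candidates[f"{na}*{nb}"] = [x * y for x, y in zip(a, b)]
        candidates[f"{na}-{nb}"] = [x - y for x, y in zip(a, b)]
    scored = [(abs(pearson(v, target)), name) for name, v in candidates.items()]
    return [name for _, name in sorted(scored, reverse=True)]

cols = {"price": [1.0, 2.0, 3.0, 4.0], "qty": [4.0, 3.0, 2.0, 1.0]}
revenue = [4.0, 6.0, 6.0, 4.0]  # equals price * qty
print(generate_and_rank(cols, revenue)[0])  # the derived feature wins
```

Neither raw column correlates with revenue here, but the automatically generated product feature matches it exactly, which is the kind of “hypothesis” such systems surface.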
    Complementing the rise of no-code will be a heightened call for a new class of AI developers.
    “… enterprises need an efficient way to scale their AI practices and implement AI in business to accelerate ROI in AI investment,” says Fujimaki. “As organizations face increased pressure to optimize their workflows, more and more businesses will begin asking BI teams to develop and manage AI/ML models. This drive to empower a new class of BI-based “AI developers” will be driven by two critical factors: First, Enabling BI teams with tools like AutoML 2.0 platforms is more sustainable and more scalable than hiring dedicated data scientists. Second, because BI teams are closer to the business use-cases than data scientists, the life-cycle from “requirement” to working model will be accelerated. New AutoML 2.0 platforms that help automate 100% of the AI/ML development process will allow businesses to build faster, more useful models.”
    Digital transformation
    Digital transformation thus far has focused on the digitization of products and services, but there’s good reason to suspect in the coming year there will be more focus on using AI for optimization and automated business decision-making. 
    “The wave of AI-enabled digital transformation will expand from “early adopters” such as financial services, insurance, and manufacturing to all other industries, and AI and machine learning will be embedded into multiple business functions, across key business areas to not only drive efficiencies but also to create new products and services,” says Fujimaki. “One of the key reasons that this is happening now is the availability of AI and ML automation platforms that make it possible for organizations to implement AI quickly and easily without investing in a data science team.”
    AI and ML will go beyond predictions
    By the same token, AI will go beyond the pigeonhole of predictions and insight-delivery to become a multifaceted tool. That includes key functionality like hypothesis generation.
    “While predictions are one of the most valuable outcomes, AI and ML must produce actionable insights beyond predictions that businesses can consume,” says Fujimaki. “AutoML 2.0 automates hypothesis generation (a.k.a. feature engineering) and explores thousands or even millions of hypothesis patterns that were never possible with the traditional manual process. AutoML 2.0 platforms that provide for automated discovery and engineering of data ‘features’ will be used to provide more clarity, transparency and insights as businesses realize that data features are not just suited for predictive analytics, but can also provide invaluable insights into past trends, events and information. That adds value to the business by allowing companies to discover the ‘unknown unknowns’: trends and data patterns that are important, but that no one had suspected would be true.”
    Beyond the enterprise
    Beyond the enterprise, AI will continue to have domino effect impacts on adjacent technologies, including robotics and automation.
    “Artificial Intelligence (AI) and accompanying sensor technologies will further enable robotics platforms to achieve maximum impact,” says Karen Panetta, IEEE Fellow. “For example, in the era of COVID-19, it is crucial that we develop health screening methods that minimize exposure to both patients and caregivers.  As we, as a society, strive to get such systems in place, the future will rely more heavily on autonomous technologies that can work and learn from other AI-based systems – AI-to-AI systems that can cooperatively divide and conquer tasks will be a boon to many emerging technologies that rely on intelligent systems.”
    The increasing reliance on AI, including by governments and agencies, will also force an evolution in accountability and ethics.
    “We’ve been talking about AI for years as an emerging technology,” says Ayanna Howard, IEEE Senior Member, “but with increased scrutiny placed on its use in applications that may impact our daily lives and civil liberties, we are now seeing an increased focus on accountability. Thus, I think one of the most important technologies in 2021 will be AI – rather, the ethical use of AI. AI is not just an application, as its algorithms feed into most of the other technologies we will continue to use, whether that’s entertainment, 3D design, or even shopping chatbots.”
    Predictions should always be taken with a grain of salt, but the pandemic has created an unusual opportunity for a technology aimed at efficiency and lean operations. We’ll be tracking the sector closely in the year ahead and will be following up with our experts as 2021 comes into its own.


    Stunning maps visualize drone laws around the world

    The global drone industry is projected to nearly double over the next five years, from $22.5 billion in 2020 to $42.8 billion in 2025. Why? Because drones are being rolled out for defense, conservation, disaster relief, agriculture, real estate, entertainment and a whole bunch of other sectors.
    But the rollouts are not evenly distributed around the world or even around the United States. In fact, as time passes, regulations seem to vary massively depending on where you are in the world. That’s created a patchwork system that makes universal compliance nearly impossible, which has hampered drone development.
    Or so argues the drone sector at large. Privacy, safety, and noise pollution advocates, on the other hand, see good reason to pull the air brakes. Drones are only set to increase in popularity as the price of units continues to drop, creating major privacy concerns as flight ranges increase beyond three miles and sensor payloads expand to include 4K video and more exotic sensors like thermal. 
    In fact, despite the apparent tangle of laws, lawmakers around the world are actually struggling to keep up with the pace of development. And while at least 143 countries have enacted some form of drone-related regulation, many experts contend that current drone regulation is insufficient to deal with the threat of widespread surveillance. 
    To get a sense of the evolving tapestry of legislation just take a look at the embedded visual representations of drone laws around the world. The maps are from a VPN company called Surfshark, which compiled public data on drone legislation to create the visualizations.


    Roughly, the team found that most countries fell into one of seven categories: outright ban; effective ban; restrictions apply (such as drone registration or licensing, additional observers required, or no commercial usage); visual line of sight required; experimental visual line of sight (flights beyond the line of sight allowed on an experimental basis); unrestricted (when flying away from private property and airports, under 500 ft/150 m, and with drones weighing less than 250 g); and no drone-related legislation.
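For developers assessing a flight against the survey’s most permissive bucket, the quoted “unrestricted” criteria reduce to a simple rule check. This sketch encodes only the thresholds listed above; actual national rules carry many more conditions, so treat it as illustrative, not as compliance advice.

```python
# Hedged sketch: the survey's "unrestricted" criteria as a rule check.
# Thresholds mirror the ones quoted above (under 500 ft, under 250 g,
# away from airports and private property); real regulations add many
# more conditions per country.

def unrestricted_flight_ok(weight_g, altitude_ft, near_airport, over_private_property):
    """True only if a flight meets all of the survey's 'unrestricted' criteria."""
    return (
        weight_g < 250
        and altitude_ft < 500
        and not near_airport
        and not over_private_property
    )

print(unrestricted_flight_ok(240, 300, False, False))  # True
print(unrestricted_flight_ok(900, 300, False, False))  # False: too heavy
```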
    Broken out by geographic area, the maps show regulation discrepancies even within close-knit regions, which illustrates the compliance challenges facing developers and commercial adopters.

    The multi-colored tapestry of regulations in Europe and the Middle East may be the most glaring examples of how differentiated the regulation landscape currently is, even among close geographical neighbors.

    The research behind the map project is available here.


    Cheap GPS jammers a major threat to drones

    With rotors whirring and airframes hurtling through the air, drones can be very dangerous when flights don’t go as planned. There’s been much teeth-gnashing over the FAA’s measured approach to commercial drone policy, but the fact is there are real dangers, including from bad actors using inexpensive GPS jammers.
    GPS signal jamming technology is evolving, decreasing in size and cost; today, jammers can be bought online for as little as $50. Long a threat to military assets, jamming is now a commercial concern as drone deliveries become a reality, and attacks are becoming pervasive globally. The threat now affects commercial, law enforcement, and defense drones on critical missions.
    During a choreographed light show in Hong Kong in 2018, a jamming device caused 46 drones to fall out of the sky. The resulting property damage and loss of hardware cost an estimated HK$1M. Nearly all drones have safety protocols to send them home or to some safe landing location in the event of disruption. But those features proved ineffective at the Hong Kong show.
    “These are professional drones, which are already built with technologies that would direct them back to the take-off origin,” Anthony Lau Chun-hon, director of the event’s board, told the South China Morning Post. “But the signals were so strong that many of them just dropped from the air.”
    Drones and their services are decidedly dependent on GPS signals. Even though the drone may be equipped with back-up methods (INS/OPS), GPS references are still required for positioning, navigation, and stabilization. GPS attacks, therefore, are the easiest way to take a drone down and potentially cause harm to life and property.
    The good news is that solutions are arriving for commercial-grade drones, and they come straight from the defense sector. One comes from a company called infiniDome, whose GPSdome device integrates with a drone’s GNSS receivers and employs an interference-filtering system that combines the patterns from two omnidirectional antennas. In real time, GPSdome analyzes the interference signal and feeds its properties into infiniDome’s proprietary algorithm, which filters and rejects the attacking RF interference so the UAS can keep relying on GPS during a jamming attack. When it detects a jamming signal, GPSdome also notifies operators of the possible interference.
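    GPSdome's filtering algorithm is proprietary, but the detection half of the problem can be approximated in a vendor-neutral way: a broadband jammer raises the receiver's noise floor and depresses the carrier-to-noise density (C/N0) on all tracked satellites at once, unlike ordinary blockage, which drops only a few channels. A rough sketch of that heuristic, with illustrative thresholds that are assumptions rather than any vendor's figures:

    ```python
    from statistics import mean

    # Typical open-sky C/N0 for GPS L1 is roughly 35-50 dB-Hz. These
    # thresholds are illustrative, not taken from any real product.
    BASELINE_CN0_DBHZ = 42.0
    JAMMING_DROP_DBHZ = 10.0
    MIN_CHANNELS = 4


    def jamming_suspected(cn0_by_sat: dict) -> bool:
        """Flag likely jamming from per-satellite C/N0 readings (dB-Hz)."""
        if len(cn0_by_sat) < MIN_CHANNELS:
            # Too few satellites tracked to judge; treat as suspicious.
            return True
        avg = mean(cn0_by_sat.values())
        degraded = [c for c in cn0_by_sat.values()
                    if c < BASELINE_CN0_DBHZ - JAMMING_DROP_DBHZ]
        # Jamming signature: average C/N0 well below baseline AND the
        # drop affects most channels simultaneously.
        return (avg < BASELINE_CN0_DBHZ - JAMMING_DROP_DBHZ
                and len(degraded) >= 0.8 * len(cn0_by_sat))
    ```

    A receiver that flags jamming this way can then switch to inertial dead reckoning or alert the operator, which is essentially the notification behavior the article describes.
    
    
    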
    Easy Aerial, a leading provider of autonomous drone-based monitoring solutions for commercial, government, and defense applications, recently integrated GPSdome into its line of military-grade autonomous unmanned aerial systems.

    “We chose GPSdome because it’s a proven solution that perfectly suits the diverse missions our customers routinely fly in some of the world’s most inhospitable and hostile environments,” said Ido Gur, co-founder & CEO of Easy Aerial. “While our systems are equipped with multiple onboard redundancies, GPS signals are vital to maintaining position, navigation, and timing accuracy, ensuring uninterrupted operation.”
    This kind of lightweight, small-form-factor, low-power shielding technology is a crucial step toward widespread adoption of commercial drones. Other companies, such as Septentrio, are also taking GPS jamming in drone applications seriously with integrated sensor solutions.
    It’s a sure bet that GPS jamming and spoofing will get more attention in the days ahead. Fortunately, developers are beginning to get tools that can help them counteract the growing threat.

  • in

    Dirty job: Cute robot roughneck heads to offshore oil rig

    A nimble robotic quadruped made famous in a flurry of viral videos will head offshore to help oil companies keep offshore installations running smoothly. This is the latest deployment for Spot, a robot created by Boston Dynamics that’s amassing an impressively diverse résumé as it’s adopted by more commercial enterprises.
    After an initial early adopter program concluded successfully, Spot officially went on sale to commercial users earlier this year. The oil rig deployment is a good example of the utility of a nimble, task-agnostic platform that can be used for inspection in heavy industries and dangerous environments.
    The deployment is the work of Cognite, a global industrial AI software-as-a-service (SaaS) company, which partnered with Aker BP to deploy Spot on the Skarv installation, 210 kilometers offshore in the North Sea. The mission was designed to test how a platform like Spot might be used to collect images, scans, and sensor readings on the rig.
    “Missions like these demonstrate Spot’s value in difficult environments. Cognite continues to excel in testing and validating Spot’s ability to reduce risk to humans and provide value in the energy industry,” said Michael Perry, Vice President of Business Development at Boston Dynamics.
    The test run is part of a robotics-driven digital transformation for Aker BP. Data from Spot was available almost instantly via a Cognite dashboard, and Spot was remote-controlled from a Cognite home office onshore, demonstrating how teleoperated robots can effectively conduct missions in sensitive environments.
    “This historic pairing of minds and machines working together to solve industry problems demonstrates that data-driven decisions can change industry now,” said Dr. John Markus Lervik, CEO of Cognite. “This ability to guide Spot by remote control is a huge step forward for the industry and something we will continue to work closely with our partners on as we continue to innovate and provide data-driven solutions.”
    Designed as a task-agnostic autonomous platform, Spot is well suited to applications like pipeline and infrastructure inspection, security and defense, and search and rescue. Under an early adopter program, Boston Dynamics previously released 150 Spot robots to businesses and research institutions, where they were used in power generation facilities, factory floors, and construction sites, to name a few. In one deployment, a construction firm in Canada used a Spot robot to automate the capture of thousands of images weekly on a 500,000-square-foot building site, creating an ongoing record of progress and enabling the builders to identify growing problems and inefficiencies early. NASA’s JPL also used Spot in DARPA’s SubT Challenge.

    The trial of Spot in oil infrastructure inspection presents a promising opportunity for Boston Dynamics and demonstrates how task-agnostic robots might play increasingly prominent roles in digital transformation efforts of heavy asset industries.
    “We are eager to explore how robotics systems can make offshore operations safer, more efficient, and more sustainable. The Spot offshore visit at the Skarv FPSO is one small step towards Aker BP’s vision to digitalize all our operations from cradle to grave to increase productivity, enhance quality, and improve the safety of our employees,” says Karl Johnny Hersvik, CEO of Aker BP.

  • in

    Curb weight: Why electric vehicles are putting on pounds

    Remember GM’s electric gem, the EV1? It was a cool vehicle, decidedly ahead of an industry that would continue to drag its feet in the galumphing slog toward inevitable electric vehicle dominance.
    It also looked something like a well-funded college science team’s first-place entry. The car wasn’t exactly a featherweight at around 3,000 pounds, but it was clearly designed with weight and aerodynamics in mind. A full 1,175 pounds of the car’s curb weight was battery, thanks to its lead-acid pack (a later version replaced the lead-acid bank with nickel-metal hydride batteries). As a result, everything else about the vehicle had to be trimmed to the bone.
    It seemed for a while that would be the fate of electric and hybrid-electric vehicles for the foreseeable future. So it might seem puzzling that electric car makers have largely cast off concerns over weight — and in fact are embracing decidedly heavy vehicles. The Audi e-tron weighs in at nearly 6,000 pounds, more than many midsized trucks.
    What gives? Why aren’t electric car makers watching the scales anymore like a prize fighter’s fretting manager? 
    A new report from Lux Research, “Electric Vehicle Lightweighting 2030,” provides some good answers while analyzing the future of vehicle lightweighting, the industry’s term for putting development cars on strict diets. The answer comes down to battery and drivetrain efficiency gains that, it turns out, more than offset the benefits of lightweighting.
    “Battery electric vehicles (BEVs) are overwhelmingly more efficient than internal combustion engine (ICE) vehicles due to regenerative braking and more efficient motors and are increasingly outgrowing the issue of limited range,” says Anthony Schiavo, Senior Analyst at Lux. “Materials companies need to start planning for a fully mature BEV space.” 
    According to the research, we’ll see roughly a 15% increase in battery pack energy densities over the next decade. That means car companies can increase range or reduce size and keep range the same. Lux modeled both scenarios and determined that in order for lightweighting to be a cost-effective solution against batteries by 2030, it would need to cost, on average, less than $5 per kilogram of weight saved.
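    The logic behind a threshold like $5 per kilogram follows from simple arithmetic: each kilogram removed from the car lets you delete the battery capacity that kilogram would have consumed over the vehicle's range, so lightweighting only pays if it costs less than that avoided capacity. A back-of-the-envelope version, using illustrative assumptions (not figures from the Lux report):

    ```python
    # All constants below are assumptions for illustration only.
    MASS_SENSITIVITY = 0.7      # extra kWh per 100 km per 100 kg of mass
    RANGE_KM = 400.0            # target driving range
    PACK_COST_PER_KWH = 100.0   # projected battery pack cost, $/kWh


    def battery_value_of_1kg_saved() -> float:
        """Dollars of battery pack avoided per kg of vehicle weight removed."""
        kwh_per_km_per_kg = MASS_SENSITIVITY / 100.0 / 100.0  # kWh/km per kg
        kwh_saved = kwh_per_km_per_kg * RANGE_KM              # kWh per kg removed
        return kwh_saved * PACK_COST_PER_KWH


    print(f"~${battery_value_of_1kg_saved():.2f} of battery avoided per kg saved")
    ```

    With these assumed numbers the avoided battery cost works out to a few dollars per kilogram, the same ballpark as the report's ceiling — and as pack costs keep falling, the ceiling falls with them, which is why lightweighting gets harder to justify over time.
    
    
    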

    In other words, saving on weight won’t cease to be a priority, but it will be de-emphasized as developers have more levers to pull. Weight-saving strategies will also start to focus on the specific parts of the vehicle that offer the best value for weight savings. The same will be true for robots, which have long been constrained by similar energy density challenges.
    “We predict vehicle structure will be an opportunity for high-strength steel and aluminum, as they provide weight reductions at minimal cost,” Schiavo continues. “Bumpers are expected to benefit from design advancements that utilize glass fiber, carbon fiber, and thermoplastics. Other material priorities, such as sustainability, durability, and end-of-life issues, however, will take priority over lightweighting by 2030.”
    One benefit of a reduced focus on lightweighting may be increased manufacturing efficiency. Electric vehicles will more commonly be designed around shared rolling frames or platforms in the future, such as Volkswagen’s MEB, a shared battery architecture it plans to use across its BEV fleet.