More stories

  • Testing sewage to home in on Covid-19

    Covid-19 is a respiratory illness that spreads when infected individuals shed the novel coronavirus (SARS-CoV-2) that causes it. While this seems to happen chiefly through close contact and respiratory droplets, evidence has mounted that the disease can also spread through airborne transmission. Distancing, masks, and improved ventilation are all critical interventions to interrupt this spread.
    Many suffering from Covid-19 also shed the virus in their stool. With adequate plumbing, this is an unlikely source of virus transmission — but with the right tools, it can also be an unlikely source of virus detection. Viral traces of the novel coronavirus SARS-CoV-2 can be detected in sewage up to a week before physical symptoms occur. This means that wastewater can serve as an early warning signal that Covid is present in a community. In larger communities, however, it can be difficult to further narrow down where infections are occurring. 
    A recent paper co-authored by Richard Larson, a professor with the Institute for Data, Systems, and Society at MIT, details two “tree-searching” algorithms that can dynamically and adaptively select which maintenance holes in a community to test in order to trace a potential outbreak back to its source. “The algorithms rely strongly on the structure of the sewage pipeline network,” says Larson. “It’s a ‘tree network,’ where sewage flows in one direction from its source through a unique path to the wastewater treatment plant.”
    Leveraging this tree graph structure, Larson and his co-authors — Oded Berman of the University of Toronto and Mehdi Nourinejad of York University — developed two algorithms. The first is designed for a community that initially has zero infections, and the second for a community known to have many infections. 
    Several wastewater treatment plants around the world are testing for coronavirus to estimate the extent of community infection. The first algorithm is designed to respond when wastewater at a treatment plant has just revealed traces of SARS-CoV-2, indicating the existence of a new infection in the community. That algorithm usually identifies the city block, or even the portion of a city block, in which the infected person resides.
    In the case of more widespread infection, the second algorithm homes in on the most infected neighborhoods or “hot zones,” usually several city blocks.
    MIT has recently begun to test wastewater to help detect Covid-19 on campus, with sampling ports collecting sewage from the exit pipes of several buildings. Dorms house dozens, though, and treatment plants serve thousands. With a dorm, a positive result could mean targeted follow-up measures like individual testing and quarantining. With wastewater treatment plants, results can be a useful indicator of community infection, but are often too broad for localized responses. 
    Larson thinks the next step could be sequential testing of wastewater from a fraction of a community’s many maintenance holes. “With hundreds of manholes, we could test about six to 10 and find a source area of 100 people or less,” says Larson. “The group to be tested is now the set of individuals resident in the source manhole’s immediate ‘catchment area.’”
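    The adaptive idea can be sketched in a few lines of code. The toy below is illustrative only (a hypothetical pipe network and test function, not the published algorithms) and covers the single-new-infection case: starting at the treatment plant, test the maintenance holes immediately upstream of the current node, descend into a positive branch, and repeat until no upstream branch tests positive.
        # Illustrative only: adaptive search over a sewage "tree network".
        # upstream_of maps each maintenance hole to the holes directly upstream
        # of it; test_positive stands in for a hypothetical rapid in-field
        # SARS-CoV-2 assay (such a field test does not yet exist).
        def find_source_catchment(treatment_plant, upstream_of, test_positive):
            current = treatment_plant          # the plant has already detected virus
            tests_used = 0
            while True:
                positive_branch = None
                for branch in upstream_of.get(current, []):
                    tests_used += 1
                    if test_positive(branch):  # virus present upstream of this hole
                        positive_branch = branch
                        break
                if positive_branch is None:
                    # No upstream branch is positive: the source lies in the
                    # catchment area draining directly into `current`.
                    return current, tests_used
                current = positive_branch
    On a branching network of a few hundred maintenance holes, a descent like this needs only a handful of sequential samples, which is the spirit of Larson’s six-to-ten estimate; the published algorithms additionally account for the pipeline structure and the expected number of infections.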
    Larson’s research could make up for shortfalls in widespread community testing, which continue to be a challenge in many places. Testing thousands of people requires equipment, labor, and other resources, not to mention buy-in from affected communities. Finding newly-infected people can be like looking for a needle in a haystack. “Successful implementation of this algorithm could greatly reduce the size of that haystack,” Larson says.
    While the mathematics of the algorithm have been developed and tested with numerous datasets, the operational implementation of the method awaits the invention of a fast, accurate, and inexpensive SARS-CoV-2 test to be done at the maintenance holes. Current viral detection research at MIT and elsewhere is close to developing such a test, at which point the method could be tested in the field. 
    “In-field testing may also identify other issues involving the flows of infected sewage in pipeline systems,” warns Larson, “issues to be worked out before reliable implementation can be achieved.”
    Still, algorithm-driven wastewater testing could provide individual neighborhoods with an early warning sign of coronavirus infection, triggering targeted follow-up through testing and distancing. This could help minimize disease spread, ease the strain on health systems, and even save lives.

  • AI Cures: data-driven clinical solutions for Covid-19

    Modern health care has been reinvigorated by the widespread adoption of artificial intelligence. From speeding image analysis for radiology to advancing precision medicine for personalized care, AI has countless applications, but can it rise to the challenge in the fight against Covid-19?
    Researchers from the Abdul Latif Jameel Clinic for Machine Learning in Health (Jameel Clinic), now housed within the MIT Stephen A. Schwarzman College of Computing, say the ongoing public health crisis offers ample opportunities to apply AI technologies, such as accelerating the search for effective therapeutics and drugs to treat the disease, and they are actively working to translate that potential into results.
    AI Cures
    When Covid-19 began to spread worldwide, Jameel Clinic’s community of machine learning and life science researchers redirected their work and began exploring how they could collaborate on the search for solutions by tapping into their collective knowledge and expertise. The ensuing discussions led to the launch of AI Cures, an initiative dedicated to developing machine learning methods for finding promising antiviral molecules for Covid-19 and other emerging pathogens, and to lowering the barrier for people from varied backgrounds to contribute to the effort.
    As part of the AI Cures mission of broad impact and engagement, Jameel Clinic brought together researchers, clinicians, and public health specialists for a conference focused on developing AI algorithms for the clinical management of Covid-19 patients, early detection and monitoring of the disease, and prevention of future outbreaks, as well as the ways in which these technologies have been utilized in patient care.
    Data-driven clinical solutions
    On Sept. 29, over 650 people representing 50 countries and 70 organizations logged on from around the globe for the virtual AI Cures Conference: Data-driven Clinical Solutions for Covid-19.
    In welcoming the audience, Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing, remarked that “AI in health care is moving beyond the use of computing as just simple tools, to capabilities that really aid in the processes of discovery, diagnosis, and care. The potential for AI-accelerated discovery is particularly relevant in times such as these.”
    Attendees heard from 14 other speakers, including MIT researchers, on technologies they developed over the past six months in response to the pandemic — from epidemiological models created using clinical data to predict the risk of both infection and death for individual patients, to a wireless device that allows doctors to monitor Covid-19 patients from a distance, to a machine learning model that pinpoints patients at risk for intubation before they crash.
    James Collins, the Termeer Professor of Medical Engineering and Science in MIT’s Institute for Medical Engineering and Science (IMES) and Department of Biological Engineering, and faculty co-lead of life sciences for Jameel Clinic, gave the first talk of the day on harnessing synthetic biology to develop diagnostics for Covid-19 and on how his lab is using deep learning to enhance the design of such systems. Collins and his team are using AI techniques to create a set of algorithms that predict the efficacy of RNA-based sensors. The sensors, first developed in 2014 to detect the Ebola virus and later tailored for the Zika virus in 2016, have been redesigned and optimized as a Covid-19 diagnostic, and related CRISPR-based biosensors are being used in a mask developed in Collins’ lab that produces a detectable signal when a person with the virus breathes, coughs, or sneezes.
    While AI has proven to be an effective tool in health care, a model needs good data to be valuable and useful. Because Covid-19 is a new disease, researchers have only limited information available, and to advance efforts to combat the virus, Collins notes, “we need to put in place and secure the resources to generate and collect large amounts of well-characterized data to train deep learning models. At present we generally don’t have such large datasets. In the system we developed, our dataset consists of about 91,000 RNA elements, which is currently the largest available for RNA synthetic biology, but it should be larger and expanded to many more different sensors.”
    Offering perspective from the clinical side, Constance Lehman, a professor at Harvard Medical School (HMS), discussed the ways in which she’s implementing AI tools in her work as director of breast imaging at Massachusetts General Hospital (MGH). In collaboration with Regina Barzilay, the Delta Electronics Professor of Electrical Engineering and Computer Science and faculty co-lead of AI for Jameel Clinic, Lehman designs machine learning models to aid in breast cancer detection, which became a critical tool when mammography screenings were put on hold during the emergency stay-at-home order issued in Massachusetts last March. By the time screenings resumed in May, around 15,000 mammograms had been cancelled. MGH is gradually rescheduling patients using a model developed by Lehman and Barzilay to help ease the process. “We took those women that had been diverted from screening and ranked them by their AI risk models and we reached out to them, inviting them back in.”
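    A minimal sketch of that triage step, using hypothetical patient records and a stand-in risk score rather than the actual MGH model: patients whose screenings were deferred are sorted by predicted risk, and outreach begins with the highest-risk group.
        # Illustrative only: rank deferred patients by a model's risk score and
        # batch outreach by weekly capacity. Fields and scores are hypothetical.
        def build_outreach_batches(deferred_patients, risk_score, per_week):
            ranked = sorted(deferred_patients, key=risk_score, reverse=True)
            return [ranked[i:i + per_week] for i in range(0, len(ranked), per_week)]

        # e.g. batches = build_outreach_batches(patients, lambda p: p["risk"], 100)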
    However, according to Lehman, many are choosing to opt out of screening and, in particular, fewer women of color are returning. “There are many determinants of who returns for screening. Social determinants can swamp all of our best, most scientific evidence-based approaches to effective and equitable health care. We’re delighted that our risk model is equally predictive across races, but I am dismayed to see that we are screening more white women than women of color during these times. Those are social determinants, which we are working very hard on.”
    The conference culminated in a panel discussion with those who are at the front line of the pandemic. The panelists — Gabriella Antici, founder of the Protea Institute in Brazil; Rajesh Gandhi, a professor at HMS and an infectious disease physician at MGH; Guillermo Torre, a professor of cardiology and president of TEC Salud in Mexico; and Karen Wong, data science unit lead for the Covid-19 clinical team at the U.S. Centers for Disease Control and Prevention — shared their experiences in handling the crisis and had an open conversation with Barzilay, the panel’s moderator, on the limitations of AI and what is currently not being addressed.
    “Those from the AI community like myself are always asking ourselves if we are solving the right problems,” says Barzilay. “We hope to come up with new ideas for AI solutions and what we can do in the future to help.”
    Gandhi offered that “we need more refined and sophisticated approaches to deciding when to use different drugs and how to use them in combination.” He also suggested that integrating physiologic data could be useful in considering how to treat individual patients from different age ranges exhibiting a variety of Covid-19 symptoms, from mild to severe.
    In her closing remarks, Barzilay expressed hope that the conference “illustrates the types of problems that we need to be addressing on the AI side” and noted that Jameel Clinic will widely share any new data it obtains so that everyone can benefit and better help patients suffering from Covid-19.
    The event was the first in a pair of conferences that took place as part of the AI Cures initiative. The next event, AI Cures Drug Discovery Conference, which will focus on cutting-edge AI approaches in this area developed by MIT researchers and their collaborators, will be held virtually on Oct. 30.
    AI Cures: Data-driven Clinical Solutions was organized by Jameel Clinic, the MIT Schwarzman College of Computing, and the Institute for Medical Engineering and Science. Additional support was provided by the Patrick J. McGovern Foundation.

  • Leveraging a 3D printer “defect” to create a new quasi-textile

    Sometimes 3D printers mess up. They extrude too much material, or too little, or deposit material in the wrong spot. But what if this bug could be turned into a (fashionable) feature?
    Introducing DefeXtiles, a tulle-like textile that MIT Media Lab graduate student Jack Forman developed by controlling a common 3D printing defect — the under-extrusion of polymer filament.
    Forman used a standard, inexpensive 3D printer to produce sheets and complex 3D geometries with a woven-like structure based on the “glob-stretch” pattern produced by under-extrusion. Forman has printed these flexible and thin sheets into an interactive lampshade, full-sized skirts, a roll of fabric long enough to stretch across a baseball diamond, and intricately patterned lace, among other items.

    Forman, who works in the Tangible Media research group with Professor Hiroshi Ishii, presented and demonstrated the DefeXtiles research on Oct. 20 at the Association for Computing Machinery Symposium on User Interface Software and Technology. The material may prove immediately useful for prototyping and customizing in fashion design, Forman says, but future applications also could include 3D-printed surgical mesh with tunable mechanical properties, among other items.
    “In general, what excites me most about this work is how immediately useful it can be to many makers,” Forman says. “Unlike previous work, the fact that no custom software or hardware is needed — just a relatively cheap $250 printer, the most common type of printer used — really makes this technique accessible to millions of people.”
    “We envision that the materials of the future will be dynamic and computational,” says Ishii. “We call it ‘Radical Atoms.’ DefeXtiles is an excellent example of Radical Atoms, a programmable matter that emulates the properties of existing materials and goes beyond. We can touch, feel, wear, and print them.”
    Joining Forman and Ishii on the project are Computer Science and Artificial Intelligence Laboratory and Department of Electrical Engineering and Computer Science graduate student Mustafa Doga Dogan, and Hamilton Forsythe, an MIT Department of Architecture undergraduate researcher.
    Filaments to fabric
    Forman had been experimenting with 3D printing during the media arts and sciences class MAS.863 / 4.140 / 6.943 (How to Make (Almost) Anything), led by Professor Neil Gershenfeld, director of the MIT Center for Bits and Atoms. Forman’s experiments were inspired by the work of a friend from his undergraduate days at Carnegie Mellon University, who used under-extruded filament to produce vases. With his first attempts at under-extruding, “I was annoyed because the defects produced were perfect and periodic,” he says, “but then when I started playing with it, bending it and even stretching it, I was like, ‘whoa, wait, this is a textile. It looks like it, feels like it, bends like it, and it prints really quickly.’”
    “I brought a small sample to my class for show and tell, not really thinking much of it, and Professor Gershenfeld saw it and he was excited about it,” Forman adds.
    When a 3D printer under-extrudes material, it produces periodic gaps in the deposited material. Using an inexpensive fused deposition modeling 3D printer, Forman developed an under-extruding process called “glob-stretch,” where globs of thermoplastic polymer are connected by fine strands. The process produces a flexible, stretchy textile with an apparent warp and weft like a woven fabric. Forman says it feels something like a mesh jersey fabric.
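    The mechanics of under-extrusion are simple to express in printer instructions. The snippet below is a rough, hypothetical sketch (the flow factor and geometry are made-up numbers, not the settings used for DefeXtiles): each move commands far less filament than a fully dense line would need, which is what yields the periodic globs joined by fine strands.
        # Illustrative only: emit G-code for one deliberately under-extruded
        # zigzag layer on a standard FDM printer. Parameter values are hypothetical.
        import math

        FILAMENT_AREA = math.pi * (1.75 / 2) ** 2   # mm^2, for 1.75 mm filament
        LAYER_HEIGHT = 0.2                          # mm
        LINE_WIDTH = 0.4                            # mm
        FLOW_FACTOR = 0.35                          # well under 1.0 = under-extrusion

        def underextruded_layer(width_mm, depth_mm, z_mm, feedrate=1200):
            e = 0.0                                 # cumulative filament length (mm)
            yield f"G1 Z{z_mm:.2f} F600"
            y, direction = 0.0, 1
            while y <= depth_mm:
                x_target = width_mm if direction > 0 else 0.0
                # Filament for a fully dense pass, scaled down to under-extrude.
                e += width_mm * LAYER_HEIGHT * LINE_WIDTH / FILAMENT_AREA * FLOW_FACTOR
                yield f"G1 X{x_target:.2f} Y{y:.2f} E{e:.4f} F{feedrate}"
                y += LINE_WIDTH                     # fold the small Y step into each pass
                direction = -direction

        # for line in underextruded_layer(50, 30, 0.2): print(line)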
    “Not only are these textiles thinner and faster to print than other approaches, but the complexity of demonstrated forms is also improved. With this approach we can print three-dimensional shell forms with a normal 3D printer and no special slicer software,” says Forman. “This is exciting because there’s a lot of opportunities with 3D printing fabric, but it’s really hard for it to be easily disseminated, since a lot of it uses expensive machinery and special software or special commands that are generally specific to a printer.”
    The new textile can be sewn, de-pleated, and heat-bonded like an iron-on patch. Forman and his colleagues have printed the textiles using many common 3D printing materials, including a conductive filament that they used to produce a lamp that can be lit and dimmed by touching pleats in the lampshade. The researchers suggest that other base materials or additives could produce textiles with magnetic or optical properties, or textiles that are more biodegradable by using algae, coffee grounds, or wood.
    According to Scott Hudson, a professor at Carnegie Mellon University’s Human-Computer Interaction Institute, Forman’s work represents a very interesting addition to the expanding set of 3D-printing techniques. 
    “This work is particularly important because it functions within the same print process as more conventional techniques,” notes Hudson, who was not part of the study. “This will allow us to integrate custom 3D-printed textile components — components that can be flexible and soft — into objects, along with more conventional hard parts.”
    Lab @home
    When MIT closed down at the start of the Covid-19 pandemic, Forman was in the midst of preparing his submission for the ACM symposium. He relocated his full lab setup to the basement of his parents’ cabin near Lake Placid, New York.
    “It’s not a lot of large equipment, but it’s lots of little tools, pliers, filaments,” he explains. “I had to set up two 3D printers, a soldering station, a photo backdrop — just because the work is so multidisciplinary.”
    At the cabin, “I was able to hone in and focus on the research while the world around me was on fire, and it was actually a really good distraction,” Forman says. “It was also interesting to be working on a project that was so tech-focused, and then look out the window and see nature and trees — the tension between the two was quite inspiring.”
    It was an experience for his parents as well, who got to see him “at my most intense and focused, and the hardest I’ve worked,” he recalls. “I’d be going upstairs at 5 a.m. for a snack when my dad was coming down for breakfast.”  
    “My parents became part of the act of creation, where I’d print something and go, ‘look at this,’” he says. “I don’t know if I’ll ever have the opportunity again to have my parents so closely touch what I do every day.”
    One of the more unusual aspects of the project has been what to call the material. Forman and his colleagues use the term “quasi-textile” because DefeXtiles doesn’t have all the same physical qualities of a usual textile, such as a bias in both directions and degree of softness. But some skeptics have been converted when they feel the material, Forman says.
    The experience reminds him of the famous René Magritte painting “The Treachery of Images (This Is Not a Pipe),” where the illustration of a pipe prompts a discussion about whether a representation can fully encompass all of an object’s meanings. “I’m interested in the coupling between digital bits and the materials experience by computationally fabricating high-fidelity materials with controllable forms and mechanical properties,” Forman explains.
    “It makes me think about when the reference of the thing becomes accepted as the thing,” he adds. “It’s not the decision people make, but the reasoning behind it that interests me, and finding what causes them to accept it or reject it as a textile material.”

  • Stressed on the job? An AI teammate may know how to help

    Humans have been teaming up with machines throughout history to achieve goals, be it by using simple machines to move materials or complex machines to travel in space. But advances in artificial intelligence today bring possibilities for even more sophisticated teamwork — true human-machine teams that cooperate to solve complex problems.
    Much of the development of these human-machine teams focuses on the machine, tackling the technology challenges of training AI algorithms to perform their role in a mission effectively. But less focus, MIT Lincoln Laboratory researchers say, has been given to the human side of the team. What if the machine works perfectly, but the human is struggling?
    “In the area of human-machine teaming, we often think about the technology — for example, how do we monitor it, understand it, make sure it’s working right. But teamwork is a two-way street, and these considerations aren’t happening both ways. What we’re doing is looking at the flip side, where the machine is monitoring and enhancing the other side — the human,” says Michael Pietrucha, a tactical systems specialist at the laboratory. 
    Pietrucha is among a team of laboratory researchers that aims to develop AI systems that can sense when a person’s cognitive fatigue is interfering with their performance. The system would then suggest interventions, or even take action in dire scenarios, to help the individual recover or to prevent harm. 
    “Throughout history, we see human error leading to mishaps, missed opportunities, and sometimes disastrous consequences,” says Megan Blackwell, former deputy lead of internally funded biological science and technology research at the laboratory. “Today, neuromonitoring is becoming more specific and portable. We envision using technology to monitor for fatigue or cognitive overload. Is this person attending to too much? Will they run out of gas, so to speak? If you can monitor the human, you could intervene before something bad happens.”
    This vision has its roots in decades-long research at the laboratory in using technology to “read” a person’s cognitive or emotional state. By collecting biometric data — such as video and audio recordings of a person speaking — and processing these data with advanced AI algorithms, researchers have uncovered biomarkers of various psychological and neurobehavioral conditions. These biomarkers have been used to train models that can accurately estimate the level of a person’s depression, for example.
    In this work, the team will apply their biomarker research to AI that can analyze an individual’s cognitive state, encapsulating how fatigued, stressed, or overloaded a person is feeling. The system will use biomarkers derived from physiological data such as vocal and facial recordings, heart rate, EEG and optical indications of brain activity, and eye movement to gain these insights.
    The first step will be to build a cognitive model of an individual. “The cognitive model will integrate the physiological inputs and monitor the inputs to see how they change as a person performs particular fatiguing tasks,” says Thomas Quatieri, who leads several neurobehavioral biomarker research efforts at the laboratory. “Through this process, the system can establish patterns of activity and learn a person’s baseline cognitive state involving basic task-related functions needed to avoid injury or undesirable outcomes, such as auditory and visual attention and response time.”
    Once this individualized baseline is established, the system can start to recognize deviations from normal and predict if those deviations will lead to mistakes or poor performance.
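    A toy version of that baseline-and-deviation logic, with hypothetical features and thresholds rather than the laboratory's model, might look like the following: readings gathered during routine tasks define a personal baseline, and a later reading triggers an intervention only when it deviates strongly and a downstream model predicts the deviation will actually hurt performance.
        # Illustrative only: per-person physiological baseline plus a simple
        # deviation check. Feature set and thresholds are hypothetical.
        import numpy as np

        class CognitiveStateMonitor:
            def __init__(self, z_threshold=2.5):
                self.z_threshold = z_threshold
                self.mean = None
                self.std = None

            def fit_baseline(self, samples):
                # samples: (n_readings, n_features), e.g. heart rate, blink rate,
                # speech timing, EEG band power, measured during routine tasks.
                x = np.asarray(samples, dtype=float)
                self.mean = x.mean(axis=0)
                self.std = x.std(axis=0) + 1e-9     # guard against zero variance

            def deviation(self, reading):
                z = np.abs((np.asarray(reading, dtype=float) - self.mean) / self.std)
                return float(z.max())               # worst single-feature z-score

            def should_intervene(self, reading, predicted_performance_drop):
                # Intervene only when the deviation is large AND it is predicted
                # not to be compensated for (i.e., performance will suffer).
                return (self.deviation(reading) > self.z_threshold
                        and predicted_performance_drop > 0.2)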
    “Building a model is hard. You know you got it right when it predicts performance,” says William Streilein, principal staff in the Lincoln Lab’s Homeland Protection and Air Traffic Control Division. “We’ve done well if the system can identify a deviation, and then actually predict that the deviation is going to interfere with the person’s performance on a task. Humans are complex; we compensate naturally to stress or fatigue. What’s important is building a system that can predict when that deviation won’t be compensated for, and to only intervene then.”
    The possibilities for interventions are wide-ranging. On one end of the spectrum are minor adjustments a human can make to restore performance: drink coffee, change the lighting, get fresh air. Other interventions could suggest a shift change or transfer of a task to a machine or other teammate. Another possibility is using transcranial direct current stimulation, a performance-restoring technique that uses electrodes to stimulate parts of the brain and has been shown to be more effective than caffeine in countering fatigue, with fewer side effects.
    On the other end of the spectrum, the machine might take actions necessary to ensure the survival of the human team member when the human is incapable of doing so. For example, an AI teammate could make the “ejection decision” for a fighter pilot who has lost consciousness or the physical ability to eject themselves. Pietrucha, a retired colonel in the U.S. Air Force who has logged many flight hours as a fighter/attack aviator, sees the promise of such a system that “goes beyond the mere analysis of flight parameters and includes analysis of the cognitive state of the aircrew, intervening only when the aircrew can’t or won’t,” he says.
    Determining the most helpful intervention, and its effectiveness, depends on a number of factors related to the task at hand, dosage of the intervention, and even a user’s demographic background. “There’s a lot of work to be done still in understanding the effects of different interventions and validating their safety,” Streilein says. “Eventually, we want to introduce personalized cognitive interventions and assess their effectiveness on mission performance.”
    Beyond its use in combat aviation, the technology could benefit other demanding or dangerous jobs, such as those related to air traffic control, combat operations, disaster response, or emergency medicine. “There are scenarios where combat medics are vastly outnumbered, are in taxing situations, and are every bit as tired as everyone else. Having this kind of over-the-shoulder help, something to help monitor their mental status and fatigue, could help prevent medical errors or even alert others to their level of fatigue,” Blackwell says.
    Today, the team is pursuing sponsorship to help develop the technology further. The coming year will be focused on collecting data to train their algorithms. The first subjects will be intelligence analysts, outfitted with sensors as they play a serious game that simulates the demands of their job. “Intelligence analysts are often overwhelmed by data and could benefit from this type of system,” Streilein says. “The fact that they usually do their job in a ‘normal’ room environment, on a computer, allows us to easily instrument them to collect physiological data and start training.”
    “We’ll be working on a basis set of capabilities in the near term,” Quatieri says, “but an ultimate goal would be to leverage those capabilities so that, while the system is still individualized, it could be a more turnkey capability that could be deployed widely, similar to how Siri, for example, is universal but adapts quickly to an individual.” In the long view, the team sees the promise of a universal background model that could represent anyone and be adapted for a specific use. 
    Such a capability may be key to advancing human-machine teams of the future. As AI progresses to achieve more human-like capabilities, while being immune from the human condition of mental stress, it’s possible that humans may present the greatest risk to mission success. An AI teammate may know just how to lift their partner up.

  • Autonomous boats could be your next ride

    The feverish race to produce the shiniest, safest, speediest self-driving car has spilled over into our wheelchairs, scooters, and even golf carts. Recently, there’s been movement from land to sea, as marine autonomy stands to change the canals of our cities, with the potential to deliver goods and services and collect waste across our waterways. 
    In an update to a five-year project from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Senseable City Lab, researchers have been developing the world’s first fleet of autonomous boats for the City of Amsterdam, the Netherlands, and have recently added a new, larger vessel to the group: “Roboat II.” Now sitting at 2 meters long, which is roughly a “Covid-friendly” 6 feet, the new robotic boat is capable of carrying passengers.

    Video: Roboat, the autonomous robotic boat

    Alongside the Amsterdam Institute for Advanced Metropolitan Solutions, the team also created navigation and control algorithms to update the communication and collaboration among the boats. 
    “Roboat II navigates autonomously using algorithms similar to those used by self-driving cars, but now adapted for water,” says MIT Professor Daniela Rus, a senior author on a new paper about Roboat and the director of CSAIL. “We’re developing fleets of Roboats that can deliver people and goods, and connect with other Roboats to form a range of autonomous platforms to enable water activities.” 
    Self-driving boats have been able to transport small items for years, but adding human passengers has felt somewhat intangible due to the current size of the vessels. Roboat II is the “half-scale” boat in the growing body of work, and joins the previously developed quarter-scale Roboat, which is 1 meter long. The third installment, which is under construction in Amsterdam and is considered to be “full scale,” is 4 meters long and aims to carry anywhere from four to six passengers. 
    Aided by powerful algorithms, Roboat II autonomously navigated the canals of Amsterdam for three hours collecting data, and returned to its starting location with an error margin of only 0.17 meters, or less than 7 inches.
    “The development of an autonomous boat system capable of accurate mapping, robust control, and human transport is a crucial step towards having the system implemented in the full-scale Roboat,” says senior postdoc Wei Wang, lead author on a new paper about Roboat II. “We also hope it will eventually be implemented in other boats in order to make them autonomous.”
    Wang wrote the paper alongside MIT Senseable City Lab postdoc Tixiao Shan, research fellow Pietro Leoni, postdoc David Fernandez-Gutierrez, research fellow Drew Meyers, and MIT professors Carlo Ratti and Daniela Rus. The work was supported by a grant from the Amsterdam Institute for Advanced Metropolitan Solutions in the Netherlands. A paper on Roboat II will be virtually presented at the International Conference on Intelligent Robots and Systems. 
    To coordinate communication among the boats, another team from MIT CSAIL and Senseable City Lab, also led by Wang, came up with a new control strategy for robot coordination. 
    With the intent of self-assembling into connected, multi-unit trains — a distant homage to children’s train sets — “collective transport” takes a different path to complete various tasks. The system uses a distributed controller (a collection of sensors, controllers, and associated computers distributed throughout a system) and a strategy inspired by how a colony of ants can transport food without communication. Specifically, there’s no direct communication among the connected robots — only one leader knows the destination. The leader initiates movement toward the destination, and the other robots can then estimate the intention of the leader and align their movements accordingly.
    “Current cooperative algorithms have rarely considered dynamic systems on the water,” says Ratti, the Senseable City Lab director. “Cooperative transport, using a team of water vehicles, poses unique challenges not encountered in aerial or ground vehicles. For example, inertia and load of the vehicles become more significant factors that make the system harder to control. Our study investigates the cooperative control of the surface vehicles and validates the algorithm on that.” 
    The team tested their control method on two scenarios: one where three robots are connected in a series, and another where three robots are connected in parallel. The results showed that the coordinated group was able to track various trajectories and orientations in both configurations, and that the magnitudes of the followers’ forces positively contributed to the group — indicating that the follower robots helped the leader. 
    Wang wrote a paper about collective transport alongside Stanford University PhD student Zijian Wang, MIT postdoc Luis Mateos, MIT researcher Kuan Wei Huang, Stanford Assistant Professor Mac Schwager, Ratti, and Rus. 
    Roboat II
    In 2016, MIT researchers tested a prototype that could move “forward, backward, and laterally along a pre-programmed path in the canals.” Three years later, the team’s robots were updated to “shapeshift” by autonomously disconnecting and reassembling into a variety of configurations. 
    Now, Roboat II has scaled up to explore transportation tasks, aided by updated research. These include a new algorithm for Simultaneous Localization and Mapping (SLAM), a model-based optimal controller called nonlinear model predictive controller, and an optimization-based state estimator, called moving horizon estimation. 
    Here’s how it works: When a passenger pickup task is required from a user at a specific position, the system coordinator will assign the task to an unoccupied boat that’s closest to the passenger. As Roboat II picks up the passenger, it will create a feasible path to the desired destination, based on the current traffic conditions. 
    Then, Roboat II, which weighs more than 50 kilograms, will start to localize itself by running the SLAM algorithm and utilizing lidar and GPS sensors, as well as an inertial measurement unit for localization, pose, and velocity. The controller then tracks the reference trajectories from the planner, which updates the path as obstacles are detected in order to avoid potential collisions.
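    In outline, that dispatch-and-track loop can be sketched as below; the method names are placeholders standing in for the SLAM, model predictive control, and moving-horizon estimation components described above, not the project's actual interfaces.
        # Illustrative only: the coordinator assigns the nearest free boat, which then
        # repeatedly localizes, replans around detected obstacles, and tracks the path.
        import math

        def assign_pickup(boats, pickup_position):
            free = [b for b in boats if not b.occupied]
            if not free:
                return None                          # no boat available right now
            return min(free, key=lambda b: math.dist(b.position, pickup_position))

        def run_pickup(boat, passenger_pos, destination, planner, controller):
            boat.occupied = True
            for goal in (passenger_pos, destination):        # fetch, then deliver
                while math.dist(boat.position, goal) > 0.2:  # ~20 cm arrival tolerance
                    pose = boat.localize()                   # SLAM + GPS/IMU estimate
                    path = planner.replan(pose, goal, boat.detected_obstacles())
                    boat.apply(controller.track(pose, path)) # one tracking step, e.g. MPC
            boat.occupied = False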
    The team notes that the improvements in their control algorithms have made the obstacles feel like less of a giant iceberg since their last update; the SLAM algorithm provides a higher localization accuracy for Roboat, and allows for online mapping during navigation, which they didn’t have in previous iterations. 
    Increasing the size of Roboat also required a larger area to conduct the experiments, which began in the MIT pools and subsequently moved to the Charles River, which cuts through Boston and Cambridge, Massachusetts.
    While navigating the congested roads of a city can leave drivers feeling trapped in a maze, canals largely avoid this. Nevertheless, tricky scenarios in the waterways can still emerge. Given that, the team is working on developing more efficient planning algorithms to let the vessel handle more complicated scenarios, by applying active object detection and identification to improve Roboat’s understanding of its environment. The team also plans to estimate disturbances such as currents and waves, to further improve tracking performance in noisier waters.
    “All of these expected developments will be incorporated into the first prototype of the full-scale Roboat and tested in the canals of the City of Amsterdam,” says Rus. 
    Collective transport
    Making our intuitive abilities a reality for machines has been the persistent intention since the birth of the field, from straightforward commands for picking up items to the nuances of organizing in a group. 
    One of the main goals of the project is enabling self-assembly to complete the aforementioned tasks of collecting waste, delivering items, and transporting people in the canals — but controlling this movement on the water has been a challenging obstacle. Communication in robotics can often be unstable or have delays, which may worsen the robot coordination. 
    Many control algorithms for this collective transport require direct communication, the relative positions in the group, and the destination of the task — but the team’s new algorithm simply needs one robot to know the desired trajectory and orientation. 
    Normally, the distributed controller running on each robot requires the velocity information of the connected structure (represented by the velocity of the structure’s center), which in turn requires that each robot know its position relative to that center. In the team’s algorithm, the relative position isn’t needed, and each robot simply uses its local velocity instead of the velocity of the center of the structure.
    When the leader initiates the movement to the destination, the other robots can therefore estimate the intention of the leader and align their movements. The leader can also steer the rest of the robots by adjusting its input, without any communication between any two robots. 
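    A toy sketch of that behavior (a simplified stand-in with made-up gains, not the published controller): only the leader pushes toward the goal, while each follower measures just its own local velocity, which reflects the motion of the connected structure at its location, and pushes along that direction, reinforcing whatever motion the leader starts.
        # Illustrative only: decentralized forces for a rigidly connected group.
        # No messages are exchanged; followers react to the motion they feel locally.
        import numpy as np

        def leader_force(position, goal, velocity, k_p=1.0, k_d=0.8):
            # Only the leader knows the goal; a simple PD push toward it.
            return k_p * (np.asarray(goal) - np.asarray(position)) - k_d * np.asarray(velocity)

        def follower_force(local_velocity, k_f=0.6):
            # A follower senses only its own velocity and pushes along it,
            # amplifying the leader's initiative rather than fighting it.
            return k_f * np.asarray(local_velocity)
    The experiments' finding that the followers' force magnitudes contributed positively to the group is the property such a local rule is meant to capture.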
    In the future, the team plans to use machine learning to estimate (online) the key parameters of the robots. They’re also aiming to explore adaptive controllers that allow for dynamic change to the structure when objects are placed on the boat. Eventually, the boats will also be extended to outdoor water environments, where large disturbances such as currents and waves exist.

  • Accenture bolsters support for technology and innovation through new MIT-wide initiative

    MIT and Accenture today announced a five-year collaboration that will further advance learning and research through new business convergence insights in technology and innovation. The MIT and Accenture Convergence Initiative for Industry and Technology, established within the School of Engineering, will aim to draw faculty, researchers, and students from across MIT.
    MIT’s alliance with Accenture spans over 15 years and has proven instrumental in establishing educational programming and training in technology advancement and data analysis. The industry leader has collaborated with MIT across areas including: MIT Professional Education, MIT Sloan Executive Education, MIT Initiative on the Digital Economy, MIT CSAIL Alliances, MIT Horizon, MIT Career Advising and Professional Development, MIT Data Science Lab, MIT Data to AI Lab, the Gabrieli Laboratory, and the Department of Economics Initiative on Technology and the Future of Labor, among others.
    “The world is experiencing disruption beyond what any of us have seen in our lifetimes. In that context, it is more important than ever that academia and industry collaborate to address pressing societal challenges and opportunities,” says MIT President L. Rafael Reif. “Building on MIT’s long relationship with Accenture, we are eager to join forces again now to demonstrate how the convergence of industries and technologies is powering the next wave of change and innovation, and how we can harness and shape these forces for positive impact.”
    Accenture will work with MIT to create opportunities on multiple fronts: from fellowships awarded to graduate students who are underrepresented, including by race and ethnicity and by gender, and who are working on research in industry and technology convergence, to an ambitious educational program targeting Accenture’s 500,000 employees.
    “As disruptive technologies and ideas continue to blur the boundaries between industries, moving with speed and designing a future that will benefit all requires a different approach,” says Julie Sweet, CEO of Accenture. “Rapid progress will depend on the ability of industries to learn from each other, from technology leaders and from diverse perspectives across business and academia. MIT, with its strengths across science and engineering, the arts, architecture, humanities, social sciences, and management, and its continuing commitment to interdisciplinary programs, is the ideal partner for Accenture to create breakthrough new research, education and thought leadership programs that can help companies and countries seize the opportunity of the convergence of industry, technology and markets and embrace the change it will bring to create more 360-degree value for all.”
    The new MIT and Accenture Convergence Initiative for Industry and Technology will focus on the following offerings:
    Advancing a portfolio of research projects that address technology and industry convergence in the near and long term. This will include data-driven MIT research connected to topics including AI, knowledge curation, and talent.
    Providing five annual fellowships awarded to graduate students who are underrepresented, including by race and ethnicity and by gender, and who are working on research in industry and technology convergence.
    Establishing multiple learning programs including: a digital learning program bringing learnings to the broader Accenture community and leveraging MIT’s most innovative digital learning methodologies; a weeklong program held at MIT (possibly online) for Accenture leadership; a program designed to immerse c-suite executives in the latest convergence technologies; and opportunities for the MIT student community to engage with Accenture thought leaders.
    “Our new collaboration with Accenture, which will build upon prior mutual efforts, is an obvious and wonderful step forward,” says Anantha Chandrakasan, dean of the School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science. “I can’t wait to see the many incredible educational and innovative opportunities launched through this alliance.”
    Sanjay Sarma will serve as chair of the advisory board for the MIT and Accenture Convergence Initiative for Industry and Technology. Sarma is vice president for open learning at MIT and the Fred Fort Flowers (1941) and Daniel Fort Flowers (1941) Professor of Mechanical Engineering. Brian Subirana, research scientist and director of the MIT Auto-ID lab, will serve as director of the Initiative.   
    Co-leads of the new Initiative will be Anantha Chandrakasan and Sanjeev Vohra, global lead of Accenture Applied Intelligence, both of whom will work with an advisory board that includes members from each organization.

  • Electronic design tool morphs interactive objects

    We’ve come a long way since the first 3D-printed item came to us by way of an eye wash cup, to now being able to rapidly fabricate things like car parts, musical instruments, and even biological tissues and organoids. 
    While many of these objects can be freely designed and quickly made, adding electronics to embed things like sensors, chips, and tags usually requires designing the object and its electronics separately, making it difficult to create items in which the added functions are easily integrated with the form.
    Now, a 3D design environment from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) lets users iterate an object’s shape and electronic function in one cohesive space, to add existing sensors to early-stage prototypes.
    The team tested the system, called MorphSensor, by modeling an N95 mask with a humidity sensor, a temperature-sensing ring, and glasses that monitor light absorption to protect eye health.

    Video: MorphSensor automatically converts electronic designs into 3D models, and then lets users iterate on the geometry and manipulate active sensing parts. This might look like a 2D image of a pair of AirPods and a sensor template, where a person could edit the design until the sensor is embedded, printed, and taped onto the item.
    To test the effectiveness of MorphSensor, the researchers created an evaluation based on standard industrial assembly and testing procedures. The data showed that MorphSensor could match the off-the-shelf sensor modules with small error margins, for both the analog and digital sensors.
    “MorphSensor fits into my long-term vision of something called ‘rapid function prototyping’, with the objective to create interactive objects where the functions are directly integrated with the form and fabricated in one go, even for non-expert users,” says CSAIL PhD student Junyi Zhu, lead author on a new paper about the project. “This offers the promise that, when prototyping, the object form could follow its designated function, and the function could adapt to its physical form.” 
    MorphSensor in action 
    Imagine being able to have your own design lab where, instead of needing to buy new items, you could cost-effectively update your own items using a single system for both design and hardware. 
    For example, let’s say you want to update your face mask to monitor surrounding air quality. Using MorphSensor, users would first design or import the 3D face mask model and sensor modules from either MorphSensor’s database or online open-sourced files. The system would then generate a 3D model with individual electronic components (with airwires connected between them) and color-coding to highlight the active sensing components.  
    Designers can then drag and drop the electronic components directly onto the face mask, and rotate them based on design needs. As a final step, users draw physical wires onto the design where they want them to appear, using the system’s guidance to connect the circuit. 
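    One piece of bookkeeping behind that guidance can be sketched with a hypothetical data model (pins, drawn traces, and airwires as a graph; not MorphSensor's internals): the tool can flag any required connection whose two endpoints are not yet joined by physical wiring.
        # Illustrative only: find which required connections (airwires) are still
        # missing, given the traces the designer has drawn so far.
        def unconnected_airwires(pins, drawn_wires, airwires):
            parent = {pin: pin for pin in pins}      # union-find over pins

            def find(p):
                while parent[p] != p:
                    parent[p] = parent[parent[p]]    # path halving
                    p = parent[p]
                return p

            for a, b in drawn_wires:                 # pins joined by drawn traces
                parent[find(a)] = find(b)

            return [(a, b) for a, b in airwires if find(a) != find(b)]

        # e.g. missing = unconnected_airwires(
        #     {"sensor.VCC", "mcu.3V3", "sensor.SDA", "mcu.SDA"},
        #     drawn_wires=[("sensor.VCC", "mcu.3V3")],
        #     airwires=[("sensor.VCC", "mcu.3V3"), ("sensor.SDA", "mcu.SDA")])
        # -> [("sensor.SDA", "mcu.SDA")]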
    Once satisfied with the design, the “morphed sensor” can be rapidly fabricated using an inkjet printer and conductive tape, so it can be adhered to the object. Users can also outsource the design to a professional fabrication house.  
    To test their system, the team iterated on EarPods for sleep tracking, which only took 45 minutes to design and fabricate. They also updated a “weather-aware” ring to provide weather advice, by integrating a temperature sensor with the ring geometry. In addition, they manipulated an N95 mask to monitor its substrate contamination, enabling it to alert its user when the mask needs to be replaced.
    In its current form, MorphSensor helps designers maintain connectivity of the circuit at all times, by highlighting which components contribute to the actual sensing. However, the team notes it would be beneficial to expand this set of support tools even further, where future versions could potentially merge electrical logic of multiple sensor modules together to eliminate redundant components and circuits and save space (or preserve the object form). 
    Zhu wrote the paper alongside MIT graduate student Yunyi Zhu; undergraduates Jiaming Cui, Leon Cheng, Jackson Snowden, and Mark Chounlakone; postdoc Michael Wessely; and Professor Stefanie Mueller. The team will virtually present their paper at the ACM User Interface Software and Technology Symposium. 
    This material is based upon work supported by the National Science Foundation.

  • “What to Expect When You’re Expecting Robots”

    As Covid-19 has made it necessary for people to keep their distance from each other, robots are stepping in to fill essential roles, such as sanitizing warehouses and hospitals, ferrying test samples to laboratories, and serving as telemedicine avatars.
    There are signs that people may be increasingly receptive to robotic help, preferring, at least hypothetically, to be picked up by a self-driving taxi or have their food delivered via robot, to reduce their risk of catching the virus.
    As more intelligent, independent machines make their way into the public sphere, engineers Julie Shah and Laura Major are urging designers to rethink not just how robots fit in with society, but also how society can change to accommodate these new, “working” robots.
    Shah is an associate professor of aeronautics and astronautics at MIT and the associate dean of social and ethical responsibilities of computing in the MIT Schwarzman College of Computing. Major SM ’05 is CTO of Motional, a self-driving car venture supported by automotive companies Hyundai and Aptiv. Together, they have written a new book, “What to Expect When You’re Expecting Robots: The Future of Human-Robot Collaboration,” published this month by Basic Books.
    What we can expect, they write, is that robots of the future will no longer work for us, but with us. They will be less like tools, programmed to carry out specific tasks in controlled environments, as factory automatons and domestic Roombas have been, and more like partners, interacting with and working among people in the more complex and chaotic real world. As such, Shah and Major say that robots and humans will have to establish a mutual understanding.
    “Part of the book is about designing robotic systems that think more like people, and that can understand the very subtle social signals that we provide to each other, that make our world work,” Shah says. “But equal emphasis in the book is on how we have to structure the way we live our lives, from our crosswalks to our social norms, so that robots can more effectively live in our world.”
    Getting to know you
    As robots increasingly enter public spaces, they may do so safely if they have a better understanding of human and social behavior.
    Consider a package delivery robot on a busy sidewalk: The robot may be programmed to give a standard berth to obstacles in its path, such as traffic cones and lampposts. But what if the robot is coming upon a person wheeling a stroller while balancing a cup of coffee? A human passerby would read the social cues and perhaps step to the side to let the stroller by. Could a robot pick up the same subtle signals to change course accordingly?
    Shah believes the answer is yes. As head of the Interactive Robotics Group at MIT, she is developing tools to help robots understand and predict human behavior, such as where people move, what they do, and who they interact with in physical spaces. She’s implemented these tools in robots that can recognize and collaborate with humans in environments such as the factory floor and the hospital ward. She is hoping that robots trained to read social cues can more safely be deployed in more unstructured public spaces.
    Major, meanwhile, has been helping to make robots, and specifically self-driving cars, work safely and reliably in the real world, beyond the controlled, gated environments where most driverless cars operate today. About a year ago, she and Shah met for the first time, at a robotics conference.
    “We were working in parallel universes, me in industry, and Julie in academia, each trying to galvanize understanding for the need to accommodate machines and robots,” Major recalls.
    From that first meeting, the seeds for their new book began quickly to sprout.
    A cyborg city
    In their book, the engineers describe ways that robots and automated systems can perceive and work with humans — but also ways in which our environment and infrastructure can change to accommodate robots.
    A cyborg-friendly city, engineered to manage and direct robots, could avoid scenarios such as the one that played out in San Francisco in 2017. Residents there were seeing an uptick in delivery robots deployed by local technology startups. The robots were causing congestion on city sidewalks and were an unexpected hazard to seniors with disabilities. Lawmakers ultimately enforced strict regulations on the number of delivery robots allowed in the city — a move that improved safety, but potentially at the expense of innovation.
    If in the near future there are to be multiple robots sharing a sidewalk with humans at any given time, Shah and Major propose that cities might consider installing dedicated robot lanes, similar to bike lanes, to avoid accidents between robots and humans. The engineers also envision a system to organize robots in public spaces, similar to the way airplanes keep track of each other in flight.
    In 1958, the Federal Aviation Agency was created, partly in response to a catastrophic 1956 crash between two planes flying through a cloud over the Grand Canyon. Prior to that crash, airplanes were virtually free to fly where they pleased. The FAA began organizing airplanes in the sky through innovations like the traffic collision avoidance system, or TCAS — a system onboard most planes today that detects other planes outfitted with a universal transponder. TCAS alerts the pilot to nearby planes, and automatically charts a path, independent of ground control, for the plane to take in order to avoid a collision.
    Similarly, Shah and Major say that robots in public spaces could be designed with a sort of universal sensor that enables them to see and communicate with each other, regardless of their software platform or manufacturer. This way, they might stay clear of certain areas, avoiding potential accidents and congestion, if they sense robots nearby.
    “There could also be transponders for people that broadcast to robots,” Shah says. “For instance, crossing guards could use batons that can signal any robot in the vicinity to pause so that it’s safe for children to cross the street.”
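    A toy sketch of what such a convention might look like in software, entirely hypothetical since the book proposes the idea rather than an implementation: each robot periodically broadcasts its position, and any robot that hears a pause signal (such as a crossing guard's baton) or a nearby broadcast yields.
        # Illustrative only: a made-up mutual-avoidance convention for sidewalk
        # robots. Message format, ranges, and behaviors are hypothetical.
        import math, time

        SAFE_RADIUS_M = 3.0                          # keep this distance from other robots

        def broadcast(robot, channel):
            channel.send({"id": robot.id, "pos": robot.position, "t": time.time()})

        def react(robot, messages):
            for msg in messages:
                if msg.get("type") == "pause":       # e.g. a crossing guard's baton
                    robot.stop()
                    return
                if msg.get("id") != robot.id and \
                        math.dist(robot.position, msg["pos"]) < SAFE_RADIUS_M:
                    robot.slow_down()                # or replan around the other robot
                    return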
    Whether we are ready for them or not, the trend is clear: The robots are coming, to our sidewalks, our grocery stores, and our homes. And as the book’s title suggests, preparing for these new additions to society will take some major changes, in our perception of technology, and in our infrastructure.
    “It takes a village to raise a child to be a well-adjusted member of society, capable of realizing his or her full potential,” write Shah and Major. “So, too, a robot.”