More stories


    MIT Schwarzman College of Computing awards named professorships to two faculty members

    The MIT Stephen A. Schwarzman College of Computing has awarded two inaugural chaired appointments to Dina Katabi and Aleksander Madry in the Department of Electrical Engineering and Computer Science (EECS).

    “These distinguished endowed professorships recognize the extraordinary achievements of our faculty and future potential of their academic careers,” says Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing and the Henry Ellis Warren Professor of Electrical Engineering and Computer Science. “I’m delighted to make these appointments and acknowledge Dina and Aleksander for their contributions to MIT, the college, and EECS, and their efforts to advance research and teaching in computer science, electrical engineering, artificial intelligence, and machine learning.”

    Dina Katabi is the inaugural Thuan (1990) and Nicole Pham Professor. Katabi is being honored as an exceptional faculty member and for her commitment to mentoring students. Her work spans computer networks, wireless sensing, applied machine learning, and digital health. She is especially known for her work on a wireless system that can track human movement even through walls — a technology that has great potential for medical use.

    Katabi is a member of the EECS faculty and is a principal investigator in the Computer Science and Artificial Intelligence Laboratory (CSAIL), as well as director of the Networks at MIT research group and co-director of the MIT Center for Wireless Networks and Mobile Computing. Among other honors, Katabi has received a MacArthur Fellowship, the Association for Computing Machinery (ACM) Prize in Computing, the ACM Grace Murray Hopper Award, two Test of Time Awards from the ACM’s Special Interest Group on Data Communications, a National Science Foundation CAREER Award, and a Sloan Research Fellowship. She is an ACM Fellow and was elected to the National Academy of Engineering.

    Aleksander Madry has been named the inaugural Cadence Design Systems Professor. Established by Cadence Design Systems, the purpose of the position is to support outstanding faculty with research and teaching interests in the fields of artificial intelligence, machine learning, or data analytics. Madry’s research spans algorithmic graph theory, optimization, and machine learning. In particular, he has a strong interest in building on existing machine learning techniques to forge a decision-making toolkit that is reliable and well-understood enough to be safely and responsibly deployed in the real world.

    Madry is a member of the EECS faculty, CSAIL, and the Theory of Computation Group, and is the director of MIT’s Center for Deployable Machine Learning, which brings together the broad expertise and focus needed to deploy machine learning systems.


    Getting dressed with help from robots

    Basic safety needs in the Paleolithic era have largely evolved with the onset of the industrial and cognitive revolutions. We interact a little less with raw materials, and interface a little more with machines.

    Robots don’t have the same hardwired behavioral awareness and control, so safe collaboration with humans requires methodical planning and coordination. You can likely assume your friend can fill up your morning coffee cup without spilling on you, but for a robot, this seemingly simple task requires careful observation and comprehension of human behavior.

    Scientists from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have recently created a new algorithm to help a robot find efficient motion plans to ensure physical safety of its human counterpart. In this case, the bot helped put a jacket on a human, which could potentially prove to be a powerful tool in expanding assistance for those with disabilities or limited mobility. 

    “Developing algorithms to prevent physical harm without unnecessarily impacting the task efficiency is a critical challenge,” says MIT PhD student Shen Li, a lead author on a new paper about the research. “By allowing robots to make non-harmful impact with humans, our method can find efficient robot trajectories to dress the human with a safety guarantee.”


    Robot-assisted dressing could aid those with limited mobility or disabilities.

    Human modeling, safety, and efficiency 

    Proper human modeling — how the human moves, reacts, and responds — is necessary to enable successful robot motion planning in human-robot interactive tasks. A robot can achieve fluent interaction if the human model is perfect, but in many cases, there’s no flawless blueprint. 

    A robot shipped to a person at home, for example, would have a very narrow, “default” model of how a human could interact with it during an assisted dressing task. It wouldn’t account for the vast variability in human reactions, dependent on myriad variables such as personality and habits. A screaming toddler would react differently to putting on a coat or shirt than a frail elderly person, or those with disabilities who might have rapid fatigue or decreased dexterity. 

    If that robot is tasked with dressing, and plans a trajectory solely based on that default model, the robot could clumsily bump into the human, resulting in an uncomfortable experience or even possible injury. However, if it’s too conservative in ensuring safety, it might pessimistically assume that all space nearby is unsafe, and then fail to move, something known as the “freezing robot” problem. 

    To provide a theoretical guarantee of human safety, the team’s algorithm reasons about the uncertainty in the human model. Instead of having a single, default model where the robot only understands one potential reaction, the team gave the machine an understanding of many possible models, to more closely mimic how a human can understand other humans. As the robot gathers more data, it will reduce uncertainty and refine those models.

    To resolve the freezing robot problem, the team redefined safety for human-aware motion planners as either collision avoidance or safe impact in the event of a collision. Often, especially in robot-assisted activities of daily living, collisions cannot be fully avoided. This allowed the robot to make non-harmful contact with the human to make progress, so long as the robot’s impact on the human is low. With this two-pronged definition of safety, the robot could safely complete the dressing task in a shorter period of time.

    For example, let’s say there are two possible models of how a human could react to dressing. “Model One” is that the human will move up during dressing, and “Model Two” is that the human will move down during dressing. With the team’s algorithm, when the robot is planning its motion, instead of selecting one model, it will try to ensure safety for both models. No matter if the person is moving up or down, the trajectory found by the robot will be safe. 
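    The multi-model idea can be sketched in a few lines of illustrative Python. This is a toy 1D model, not the authors' algorithm: a candidate trajectory survives only if it is safe under every human-motion model, where "safe" means either no contact or contact with impact below a harm threshold, and the planner picks the fastest survivor. All positions, the threshold, and both models are invented for illustration.

```python
HARM_THRESHOLD = 0.5  # max tolerable impact magnitude (arbitrary units)

def is_safe(traj, human_model):
    """Safe if, at every step, the robot either avoids the predicted
    human position or touches it with low impact (low speed at contact)."""
    for step, robot_pos in enumerate(traj):
        if robot_pos == human_model(step):                      # contact predicted
            impact = abs(traj[step] - traj[step - 1]) if step > 0 else 0.0
            if impact > HARM_THRESHOLD:                         # harmful collision
                return False
    return True

def plan(candidates, human_models):
    """Shortest trajectory safe under ALL models; None = 'freezing robot'."""
    safe = [t for t in candidates if all(is_safe(t, m) for m in human_models)]
    return min(safe, key=len) if safe else None

model_up   = lambda step: float(step)     # "Model One": human arm moves up
model_down = lambda step: -float(step)    # "Model Two": human arm moves down

fast   = [0.0, 1.0]        # quick, but collides hard with model_up at step 1
gentle = [0.0, 0.5, 1.0]   # slower, yet safe under both models

print(plan([fast, gentle], [model_up, model_down]))  # → [0.0, 0.5, 1.0]
```

    The planner rejects the fast trajectory because it is unsafe under one of the two models, even though it is safe under the other, mirroring the guarantee described above: no matter whether the person moves up or down, the chosen trajectory is safe.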

    To paint a more holistic picture of these interactions, future efforts will focus on investigating the subjective feelings of safety in addition to the physical during the robot-assisted dressing task. 

    “This multifaceted approach combines set theory, human-aware safety constraints, human motion prediction, and feedback control for safe human-robot interaction,” says Zackory Erickson, an assistant professor in the Robotics Institute at Carnegie Mellon University. “This research could potentially be applied to a wide variety of assistive robotics scenarios, towards the ultimate goal of enabling robots to provide safer physical assistance to people with disabilities.”

    Li wrote the paper alongside CSAIL postdoc Nadia Figueroa, MIT PhD student Ankit Shah, and MIT Professor Julie A. Shah. They will present the paper virtually at the 2021 Robotics: Science and Systems conference. The work was supported by the Office of Naval Research.


    Software to accelerate R&D

    Many scientists and researchers still rely on Excel spreadsheets and lab notebooks to manage data from their experiments. That can work for single experiments, but companies tend to make decisions based on data from multiple experiments, some of which may take place at different labs, with slightly different parameters, and even in different countries.

    The situation often requires scientists to leave the lab bench to spend time gathering and merging data from various experiments. Teams of scientists may also struggle to know what the others have tried and which avenues of research still hold promise.

    Now the startup Uncountable has developed a digital workbook to help scientists get more from experimental data. The company’s platform allows scientists to access data from anywhere, merge data using customized parameters, and create visualizations to share findings with others. The system also integrates models that help scientists test materials more quickly and predict the outcomes of experiments.

    Uncountable’s goal is to accelerate innovation by giving scientists developing new materials and products a better way to use the data that drive decisions.

    “It’s all about saving scientists from the bookkeeping they do today and allowing them to focus on innovation and chemistry,” says Will Tashman ’13, who co-founded the company with Noel Hollingsworth ’13, SM ’14 in 2016.

    Uncountable began by helping customers in the industrial chemical space but has expanded to work with companies formulating new battery materials, making polymers for 3D printing, and identifying promising drug candidates.

    “Our goal internally is, ‘Can we make R&D more efficient by a factor of 10?’” Hollingsworth explains. “Can we imagine a world where instead of getting the Tesla battery that’s going to come out in 2032, you get it next year? That’s the world we want to eventually push to with our software.”

    A winning team

    Hollingsworth and Tashman played on MIT’s basketball team together, with both starting on the 2011-2012 team that won the New England Women’s and Men’s Athletic Conference championship.

    During his time at MIT, Hollingsworth got excited about startups while interning at small companies. He also saw alumni including Dropbox co-founder Drew Houston ’05 speak about entrepreneurship.

    After graduation, Hollingsworth joined sports analytics company Second Spectrum while Tashman joined Apple, but they continued playing basketball together.

    “Playing basketball gave us a really close bond,” Hollingsworth says. “What led us to reconnect was this high level of trust you get when you play together on the same sports team for multiple years that’s just not there in a lot of other environments.”

    The pair also brought on Jason Hirshman, a programmer from Stanford University whom Hollingsworth had previously worked with. The founders believed they could build a software platform to improve efficiency in the advanced manufacturing space, but they needed to learn more about specific problems customers were facing.

    Tashman scanned the MIT directory for people who could benefit from their idea and ended up meeting several people who either became Uncountable’s first customers or introduced Uncountable to early customers.

    One of those people was Chris Couch ’92, SM ’93, PhD ’99, who is the senior vice president and CTO of Cooper Standard, a global supplier of transportation and industrial components. Uncountable did its first pilot with Cooper Standard, and the company became one of Uncountable’s highest-profile early customers. Couch also suggested the founders look into using neural networks to improve the formulation and optimization of rubber compounds.

    “We talked to him a lot about why it would and wouldn’t work, and that was really the impetus [for building Uncountable’s platform],” Tashman says. “So, using the MIT network and talking to really smart people in research and development leadership positions at formulation companies was very, very helpful.”

    Uncountable started by helping companies use data around rubber formulation but quickly learned teams formulating chemicals for consumer products, food, and the life sciences had similar processes and problems.

    “The data would be in 1,000 different folders under 10 different names, potentially stored in labs across the world,” Tashman says. “[With Uncountable], it’s all in one place. We offer instant access to information in a very secured, controlled environment. With the data in one place, you can build reports, you can build filters, you can monitor lab activity, and you can use more advanced AI algorithms to try and optimize your experiments.”

    The founders say the system dramatically reduces the time scientists spend combing data from different experiments and lets scientists see the correlations and formulas that others have already explored.

    “There’s various studies showing the crazy number of experiments and trials that are redone because of poor documentation or poor sharing and collaboration,” Tashman says.

    The centralized data-management system also allows companies to apply machine-learning algorithms to their data in new ways, and Uncountable has several custom models integrated into its system.

    “If the data is in the right place and the right size, you all of a sudden unlock a lot more powerful mathematical and statistical tools,” Tashman says.

    Speeding up research

    Carbon is a 3D printing company that develops resins for consumer goods, automotive applications, and biotech companies. Founded in 2013, Carbon had been using Excel spreadsheets to manage R&D before adopting Uncountable’s solution.

    Uncountable helps Carbon’s scientists save hours each week on data sharing, analysis, and creating presentations for leadership. When a scientist joins a project, they can see exactly what formulations the team has explored, eliminating duplicate work and making it easier to identify areas where they can dig deeper.

    “Uncountable helps us understand whether we’re exploring enough, what else we might try, and whether there are other considerations,” says Carbon scientist Marie Herring ’11. “We get to that point faster, and it speeds up the whole R&D process.”

    Carbon is one of several 3D printing companies Uncountable works with. As the founders have realized scientists face similar problems across industries, the company has expanded to work with teams developing energy storage devices and plant-based foods as well as biotech startups and research hospitals. Another customer, Nohbo, is making dissolvable toiletries that could eliminate millions of tons of plastic waste created by hotels each year.

    “To get to these greener, more sustainable products, there’s no magic wand,” Hollingsworth says. “The future isn’t discovered; it’s invented by these hard-working scientists we work with on a day-to-day basis. Getting to help all these partners, not just in one field but every field, has been really amazing.”


    Sertac Karaman named director of the Laboratory for Information and Decision Systems

    Sertac Karaman has been named director of the Laboratory for Information and Decision Systems (LIDS), MIT’s longest continuously running lab. Karaman, an associate professor in the Department of Aeronautics and Astronautics, began his appointment on July 1.

    “This is an extremely exciting time for LIDS, with the tremendous advances in automated decision-making systems and their deployment,” says Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing and the Henry Ellis Warren Professor of Electrical Engineering and Computer Science. “I am delighted to have Sertac in this leadership role with the college, as he looks to build on the storied 80-year history of LIDS and in leading the lab to exciting new breakthroughs.”

    Karaman succeeds John Tsitsiklis, the Clarence J. LeBel Professor of Electrical Engineering. Tsitsiklis, who began his tenure as LIDS director in 2017, stepped down in December 2020 to take a sabbatical. Eytan Modiano, professor of aeronautics and astronautics and associate director of LIDS for the past several years, has been filling in as interim director.

    Karaman’s research interests lie in the broad area of embedded systems and mobile robotics. His recent research has focused on developing planning and control algorithms for autonomous vehicles and autonomy-enabled transportation systems. He has worked on driverless cars, unpiloted aerial vehicles, distributed aerial surveillance systems, air traffic control algorithms, certification and verification of control systems software, and many other research areas.

    In 2007, he was on MIT’s team that built a self-driving car and competed in the DARPA Urban Challenge. His experience with robotic platforms also includes developing an autonomous forklift and fully autonomous agile drones, and working with Willow Garage’s personal robot, PR2. In 2015, he co-founded Optimus Ride, an MIT spinoff company based in Boston that develops self-driving vehicle technologies to enable efficient, sustainable, and equitable mobility.

    Karaman studied mechanical engineering and computer engineering as an undergraduate. He earned his master’s in mechanical engineering and his PhD in electrical engineering and computer science from MIT in 2009 and 2012, respectively.

    LIDS was founded in 1940 under the name Servomechanism Lab. Today, LIDS is an interdepartmental research center committed to advancing research and education in the analytical information and decision sciences, specifically systems and control; communications and networks; and inference and statistical data processing. Members of the LIDS community share a common approach to solving problems and recognize the fundamental role that mathematics, physics, and computation play in their research.


    The tenured engineers of 2021

    The School of Engineering has announced that MIT has granted tenure to eight members of its faculty in the departments of Chemical Engineering, Electrical Engineering and Computer Science, Materials Science and Engineering, Mechanical Engineering, and Nuclear Science and Engineering.

    “This year’s newly tenured faculty are truly inspiring,” says Anantha Chandrakasan, dean of the School of Engineering and Vannevar Bush Professor of Electrical Engineering and Computer Science. “Their work as educators and scholars has shown an incredible commitment to teaching and research — they have each had a tremendous impact in their fields and within the School of Engineering community.”

    This year’s newly tenured associate professors are:

    Mohammad Alizadeh, in the Department of Electrical Engineering and Computer Science and the MIT Computer Science and Artificial Intelligence Laboratory, focuses his research in the areas of computer networks and systems. His research aims to improve the performance, robustness, and ease of management of future networks and cloud computing systems. His current research spans three areas of networking: learning-based resource management for networked systems, programmable networks, and algorithms and protocols for data center networks. He is also broadly interested in performance modeling and analysis of computer systems and bridging theory and practice in computer system design.

    Kwanghun Chung, in the Department of Chemical Engineering, the Institute for Medical Engineering and Science, and the Picower Institute, is devoted to developing and applying novel technologies for holistic understanding of large-scale complex biological systems. His research team develops a host of methods that enable identification of multi-scale functional networks and interrogation of their system-wide, multifactorial interactions. He applies these technologies to studying brain function and dysfunction. His research interests include neuroscience, medical imaging, brain mapping, high-throughput technologies, polymer science, tissue engineering, and microfluidics.

    Areg Danagoulian, in the Department of Nuclear Science and Engineering, focuses his current research on nuclear physics applications in nuclear security. This includes technical problems in nuclear nonproliferation, technologies for treaty verification, nuclear safeguards, and cargo security. His current research areas include nuclear disarmament verification via resonant phenomena and novel nuclear detection concepts.

    Ruonan Han, in the Department of Electrical Engineering and Computer Science, is a core faculty member of the Microsystems Technology Laboratories. His research aims at pushing the speed limits of microelectronic circuits in order to bridge the “terahertz gap” between the microwave and infrared domains. He is also interested in innovative interplays among electronics, electromagnetics, and quantum physics for the development of high-frequency, large-scale microsystems, which enable new applications in sensing, metrology, security, and communication.

    Heather J. Kulik, in the Department of Chemical Engineering, leverages computational modeling to aid the discovery of new materials and mechanisms. Her group advances fundamental theories to enable low-cost, accurate modeling of the quantum mechanical properties of transition metal complexes, builds software for high-throughput screening that reveals design principles, and develops data-driven machine learning models for the rapid design of open-shell transition metal complexes. Her group uses these tools to bridge the gap from heterogeneous to homogeneous and enzyme catalysis. The methods she develops enable the prediction of new materials properties in seconds, the exploration of million-compound design spaces, and the identification of design rules and exceptions that go beyond intuition.

    Elsa Olivetti, in the Department of Materials Science and Engineering, focuses her research on sustainable and scalable materials design, manufacturing, and end-of-life recovery within the larger context in which materials are used. She is especially interested in linking strategies to reduce the environmental burden of materials across different length scales — from atoms and molecules to industrial processes and materials markets. She conducts work to inform our understanding of the complex and nuanced implications of substitution, dematerialization, and waste mining on materials sustainability.

    Alberto Rodriguez, the Class of 1957 Associate Professor in the Department of Mechanical Engineering, leads the Manipulation and Mechanisms Lab at MIT (MCube), researching autonomous dexterous manipulation and robot automation. He is also associate head of house at MIT’s Sidney-Pacific graduate dorm, where he lives with his family. He graduated in mathematics (2005) and telecommunication engineering (2006) from the Universitat Politecnica de Catalunya and earned his PhD (2013) from the Robotics Institute at Carnegie Mellon University. Rodriguez has received Best Paper Awards at RSS’11, ICRA’13, RSS’18, IROS’18, RSS’19, and ICRA’21; the 2018 Best Manipulation System Paper Award from Amazon; and the 2020 IEEE Transactions on Robotics King-Sun Fu Memorial Best Paper Award. He has been a finalist for best paper awards at IROS’16, IROS’18, ICRA’20, RSS’20, and ICRA’21. He led Team MIT-Princeton in the Amazon Robotics Challenge between 2015 and 2017, and received Faculty Research Awards from Amazon in 2018, 2019, and 2020, and from Google in 2020. He is also the recipient of the 2020 IEEE Early Academic Career Award in Robotics and Automation.

    James Swan, in the Department of Chemical Engineering, focuses on how microstructured materials, nanoparticle materials in particular, can be manipulated for the benefit of society. His research on soft matter is broad and has included accurate measurement of biophysical forces and the self-assembly of nanoparticles in microgravity. He aims to combine theory and simulation to model the fluid mechanics and out-of-equilibrium statistical physics that are fundamental to complex fluids and other soft matter. His other research interests include computational fluid mechanics and colloid science, flow properties, biophysical media, and directed self-assembly of nanomaterials.


    US Air Force pilots get an artificial intelligence assist with scheduling aircrews

    Take it from U.S. Air Force Captain Kyle McAlpin when he says that scheduling C-17 aircraft crews is a headache. An artificial intelligence research flight commander for the Department of the Air Force–MIT AI Accelerator Program, McAlpin is also an experienced C-17 pilot. “You could have a mission change and spend the next 12 hours of your life rebuilding a schedule that works,” he says.

    It’s a pain point for the crews of the 52 squadrons that operate C-17s, the military cargo aircraft that transport troops and supplies globally. This year, the Air Force marked 4 million flight hours for its C-17 fleet, which comprises 275 U.S. and allied aircraft. Each flight requires scheduling a crew of six on average, though crew requirements vary depending on the mission.

    “Being a scheduler is an additional duty on top of an airman’s main job, such as being a pilot,” says Capt. Ronisha Carter, a Cyberspace Operations officer and the primary airman on a research team spanning the Department of the Air Force (DAF), the MIT Department of Aeronautics and Astronautics, and MIT Lincoln Laboratory. “What we want is for a scheduler to click a button, and an optimal schedule is created.”

    Collaborating with their Air Force sponsor organization, Tron, the team has developed an AI-enabled plugin for the existing C-17 scheduling tool to fulfill that vision. The software plugin automates C-17 aircrew scheduling and optimizes crew resources, and was developed as part of the DAF–MIT AI Accelerator partnership.

    Nearly 7,600 airmen are poised to use the technology once it is rolled out this summer. It is being integrated into the scheduling software, called Puckboard, that C-17 airmen currently use to build schedules two weeks in advance. Prior to Puckboard’s development in 2019, the squadrons had been using whiteboards and spreadsheets to manually plan out schedules. While Puckboard was a major improvement over pen and paper, it didn’t have the “brains of optimization algorithms” to help schedulers avoid the mentally draining aspects of the task, says Michael Snyder, a software engineer and team lead in the AI Software Architectures and Algorithms Group at MIT Lincoln Laboratory.

    The airmen have many factors to consider as they build the schedules. When is air space available? Who is available to fly given rest requirements, deployments, and vacations? From that subset of available pilots, who is qualified? Some pilots, for example, may not be certified for night flying or air refueling. It’s also up to the scheduler to book training flights to keep pilots qualified in these areas.

    “You have a lot of data and factors and the information is so spread out. It’s not something a human being can do in an efficient manner and the decisions they come to might not make the most efficient use of resources. That’s where AI plays a role,” says Hamsa Balakrishnan, who is the William E. Leonhard (1940) Professor of Aeronautics and Astronautics at MIT and principal investigator of the program.

    The team’s approach to solving this scheduling problem fuses two techniques. The first is integer programming. In this approach, the algorithm solves an optimization problem by using binary (yes or no) decisions to decide whether or not to assign a pilot to an event. An optimal solution maximizes the values assigned to the desired characteristics of a “good” schedule. Among the many desired characteristics, examples include increasing the rate at which pilots make progress toward satisfying their training requirements, or not unnecessarily assigning personnel to a flight who are significantly overqualified for the task at hand.
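    As an illustration of the integer-programming idea, the sketch below enumerates binary pilot-to-event decisions and returns the feasible assignment with the highest value. This is a toy instance, not Puckboard's actual model: the pilots, events, qualifications, and values are all invented, and a real system would use an ILP solver rather than brute force.

```python
from itertools import product

pilots = ["A", "B", "C"]
events = ["night_flight", "day_flight"]

# Value of each qualified (pilot, event) pairing: reward training progress,
# penalize assigning an overqualified pilot to a routine event. Pairs absent
# from this dict (e.g. B on a night flight) are unqualified.
value = {("A", "night_flight"): 3, ("A", "day_flight"): 1,
         ("B", "day_flight"): 4,
         ("C", "night_flight"): 2, ("C", "day_flight"): 2}

def solve():
    """Brute-force the binary decisions x[pilot][event] in {0, 1}."""
    cells = [(p, e) for p in pilots for e in events]
    best, best_val = None, -1
    for bits in product([0, 1], repeat=len(cells)):
        chosen = [pe for pe, b in zip(cells, bits) if b]
        if any(pe not in value for pe in chosen):                   # must be qualified
            continue
        if any(sum(1 for p, e in chosen if e == ev) != 1 for ev in events):
            continue                                                # staff each event once
        if any(sum(1 for p, e in chosen if p == pi) > 1 for pi in pilots):
            continue                                                # pilot used at most once
        v = sum(value[pe] for pe in chosen)
        if v > best_val:
            best, best_val = chosen, v
    return dict((e, p) for p, e in best), best_val

print(solve())  # → ({'night_flight': 'A', 'day_flight': 'B'}, 7)
```

    The "yes or no" decisions of the article are the bits enumerated here; the feasibility checks stand in for scheduling constraints such as qualifications and rest requirements.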

    Candidate schedules with pilot assignments are then presented to an airman (or an automatic agent), who can accept or reject a schedule. Each time a schedule is accepted, the algorithm is rewarded for its choices, which allows it to recognize successful patterns and improve its decisions over time. This process is called reinforcement learning.
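    The accept/reject loop can be caricatured as a simple preference learner. This is purely illustrative; the feature names and the linear update rule are assumptions, not the team's actual method. Each accepted schedule nudges a scorer toward that schedule's features, and each rejection nudges it away, so the scorer gradually recognizes which patterns users favor.

```python
FEATURES = ["training_progress", "overqualified_crew"]

weights = {f: 0.0 for f in FEATURES}  # learned preference weights

def score(schedule):
    """Rank a candidate schedule by its learned desirability."""
    return sum(weights[f] * schedule[f] for f in FEATURES)

def feedback(schedule, accepted, lr=0.1):
    """Reward the scorer when a proposal is accepted, penalize otherwise."""
    sign = 1.0 if accepted else -1.0
    for f in FEATURES:
        weights[f] += lr * sign * schedule[f]

# Simulated interaction: airmen accept schedules that advance training
# and reject ones that waste an overqualified crew on a routine task.
history = [
    ({"training_progress": 1.0, "overqualified_crew": 0.0}, True),
    ({"training_progress": 0.0, "overqualified_crew": 1.0}, False),
    ({"training_progress": 1.0, "overqualified_crew": 0.0}, True),
]
for sched, accepted in history:
    feedback(sched, accepted)

print(weights)  # training_progress rises, overqualified_crew falls
```

    After a few rounds of feedback, schedules that advance training score higher than ones that burn overqualified crews, which is the pattern-recognition behavior the article describes.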

    Training the model has required feeding it a lot of historical C-17 aircrew and flight data. Accessing these data has been one of the greatest challenges, as old datasets were tossed out or housed in legacy systems that were hard to access and difficult to harmonize so that the model could pull from them all. “Once the data is connected, it’s then challenging to enumerate all of the constraints a scheduler considers,” says Matthew Koch, a graduate student in the MIT Operations Research Center.

    For example, it’s straightforward to program the model around explicit constraints, such as the restrictions on how many hours a pilot can fly a day. Coding for implicit constraints is harder, or even impossible, and relies on the insight that an airman brings to the desk, such as knowing that two pilots’ personalities don’t mesh or that the strengths of one pilot will complement another’s weaknesses to build the safest flight.

    That’s where the research team’s relationship with the C-17 pilots has been essential. “There have been so many user interviews,” Carter says. In those interviews, the pilots and research team discuss the nuances of different scheduling outputs — what they liked and disliked, and what they would change about certain decisions that the algorithm made. “Every step of the way, it’s been a very integrated relationship and allows us to improve our algorithm,” she says.

    By design, the technology is an assistant. It’s still up to the human to accept the schedule. This approach, the team hopes, will make the system trusted and accepted by users, some of whom have spent years building their own approach to the problem. “We’re figuring out what buttons or charts we can add to the interface so that our algorithms aren’t black boxes. We want to keep the scheduler in the loop,” says Snyder.  

    Showing fairness and equity in their algorithms is also important. “We want to enable a level of explainability of why someone was scheduled over someone else,” Koch adds.

    That goal is still aspirational. One technique to both improve the algorithm and to provide equity is to have the system present multiple schedules from which an airman can choose. Understanding why a user chooses one over the other allows the researchers to tweak the model further.

    Today, the team is continuing to integrate their plugin with Puckboard and explore how to measure its success. “It’s hard to say that there is one optimal solution; there could be several different, but very good schedules. It’s a bit of a trial-and-error process with users,” Koch says.

    But, summing up the tool’s impact, McAlpin says, “It’s taking rocks out of rucksacks.”

    The technology is particularly helpful under the realities of schedule disruptions. As McAlpin mentioned, an unexpected change can create a frustrating snowball effect, scrapping a two-week schedule that may have taken days to build. The algorithm easily accounts for sudden changes, and it can plan up to six months ahead. Changes are still inevitable, but the system allows airmen to gain more predictability around their schedules.

    The team is considering other applications of their research. Puckboard is used widely across the Air Force for other scheduling needs, though each optimization problem is unique. “It’s a whole different set of efficiencies and a new set of problems, but that’s the exciting thing with these. It’s a nice thing for a researcher. We want to solve real problems,” Balakrishnan says.

    In May, Koch defended his thesis on this project to complete his master’s degree. Sharing the sentiments of his colleagues, Koch says that the seamless collaboration between all three partner organizations in the DAF–MIT AI Accelerator program was invaluable. He himself personifies all three institutions, as an MIT student, a Lincoln Laboratory Military Fellow, and a lieutenant in the Air Force.

    “It’s very cool to see how many people care,” Koch says about the collaborators in the program. “With this program, I see the Air Force letting its guard down and letting others in to help us leverage AI and machine learning to make people’s lives better on a daily basis. As a member of the Air Force, I appreciate that.”


    Infrared cameras and artificial intelligence provide insight into boiling

    Boiling is not just for heating up dinner. It’s also for cooling things down. Turning liquid into gas removes energy from hot surfaces, and keeps everything from nuclear power plants to powerful computer chips from overheating. But when surfaces grow too hot, they might experience what’s called a boiling crisis.

    In a boiling crisis, bubbles form quickly, and before they detach from the heated surface, they cling together, establishing a vapor layer that insulates the surface from the cooling fluid above. Temperatures rise even faster and can cause catastrophe. Operators would like to predict such failures, and new research offers insight into the phenomenon using high-speed infrared cameras and machine learning.

    Matteo Bucci, the Norman C. Rasmussen Assistant Professor of Nuclear Science and Engineering at MIT, led the new work, published June 23 in Applied Physics Letters. In previous research, his team spent almost five years developing a technique in which machine learning could streamline relevant image processing. In the experimental setup for both projects, a transparent heater 2 centimeters across sits below a bath of water. An infrared camera sits below the heater, pointed up and recording at 2,500 frames per second with a resolution of about 0.1 millimeter. Previously, people studying the videos would have to manually count the bubbles and measure their characteristics, but Bucci trained a neural network to do the chore, cutting a three-week process to about five seconds. “Then we said, ‘Let’s see if other than just processing the data we can actually learn something from an artificial intelligence,’” Bucci says.
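
    The trained network itself isn't described in detail here; as a minimal stand-in, counting bright connected regions in a thresholded frame illustrates the kind of per-frame bookkeeping (bubble count, bubble sizes) that the neural network automates. The threshold and the tiny synthetic frame are assumptions for illustration:

```python
import numpy as np

def count_bubbles(frame, threshold):
    """Count connected bright regions (candidate bubbles) in one infrared
    frame via 4-connected flood fill, returning the count and region sizes.
    A much-simplified stand-in for the team's learned image processing."""
    mask = frame > threshold
    seen = np.zeros_like(mask, dtype=bool)
    sizes = []
    rows, cols = mask.shape
    for r in range(rows):
        for c in range(cols):
            if mask[r, c] and not seen[r, c]:
                stack, size = [(r, c)], 0    # flood-fill one region
                seen[r, c] = True
                while stack:
                    y, x = stack.pop()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                sizes.append(size)
    return len(sizes), sizes

# Tiny synthetic frame: two hot spots on a cool background
frame = np.zeros((8, 8))
frame[1:3, 1:3] = 5.0   # 4-pixel bubble
frame[5:7, 4:7] = 7.0   # 6-pixel bubble
n, sizes = count_bubbles(frame, threshold=1.0)
```

    Run per frame at 2,500 frames per second, even this naive bookkeeping makes clear why manual counting took weeks and why automating it was the necessary first step.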

    The goal was to estimate how close the water was to a boiling crisis. The system looked at 17 factors provided by the image-processing AI: the “nucleation site density” (the number of sites per unit area where bubbles regularly grow on the heated surface), as well as, for each video frame, the mean infrared radiation at those sites and 15 other statistics about the distribution of radiation around those sites, including how they’re changing over time. Manually finding a formula that correctly weighs all those factors would present a daunting challenge. But “artificial intelligence is not limited by the speed or data-handling capacity of our brain,” Bucci says. Further, “machine learning is not biased” by our preconceived hypotheses about boiling.

    To collect data, they boiled water on a surface of indium tin oxide, by itself or with one of three coatings: copper oxide nanoleaves, zinc oxide nanowires, or layers of silicon dioxide nanoparticles. They trained a neural network on 85 percent of the data from the first three surfaces, then tested it on the remaining 15 percent of the data from those conditions plus the data from the fourth surface, to see how well it could generalize to new conditions. According to one metric, it was 96 percent accurate, even though it hadn’t been trained on all the surfaces. “Our model was not just memorizing features,” Bucci says. “That’s a typical issue in machine learning. We’re capable of extrapolating predictions to a different surface.”
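
    The evaluation protocol described above, train on 85 percent of the data from three surfaces, then test on the held-out 15 percent plus every sample from a never-seen surface, can be sketched as follows (the sample count and surface labels are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def split_by_surface(surface, held_out, train_frac=0.85):
    """Train on `train_frac` of the samples from the seen surfaces; test on
    the remaining seen samples plus ALL samples from one unseen surface."""
    seen_idx = np.flatnonzero(surface != held_out)
    rng.shuffle(seen_idx)
    cut = int(train_frac * len(seen_idx))
    train = seen_idx[:cut]
    test = np.concatenate([seen_idx[cut:], np.flatnonzero(surface == held_out)])
    return train, test

# Hypothetical dataset: 100 samples drawn from 4 boiling surfaces (0-3)
surface = rng.integers(0, 4, size=100)
train, test = split_by_surface(surface, held_out=3)
```

    Keeping one surface entirely out of training is what lets accuracy on that surface serve as evidence of generalization rather than memorization.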

    The team also found that all 17 factors contributed significantly to prediction accuracy (though some more than others). Further, instead of treating the model as a black box that used 17 factors in unknown ways, they identified three intermediate factors that explained the phenomenon: nucleation site density, bubble size (which was calculated from eight of the 17 factors), and the product of growth time and bubble departure frequency (which was calculated from 12 of the 17 factors). Bucci says models in the literature often use only one factor, but this work shows that we need to consider many, and their interactions. “This is a big deal.”
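
    One generic way to check that every factor "contributed significantly" (not necessarily the authors' method) is permutation importance: shuffle one feature at a time and measure how much prediction error grows. A toy example with a known linear model:

```python
import numpy as np

rng = np.random.default_rng(1)

def permutation_importance(predict, X, y, n_repeats=20):
    """Shuffle one feature column at a time and record how much the mean
    squared error grows; unimportant features barely move the error."""
    base = np.mean((predict(X) - y) ** 2)
    imp = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        errs = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])            # break feature j's link to y
            errs.append(np.mean((predict(Xp) - y) ** 2))
        imp[j] = np.mean(errs) - base
    return imp

# Toy model: the target depends strongly on feature 0, weakly on feature 1,
# and not at all on feature 2
def predict(X):
    return 3.0 * X[:, 0] + 0.3 * X[:, 1]

X = rng.normal(size=(200, 3))
y = predict(X)
imp = permutation_importance(predict, X, y)
```

    Applied to the 17 boiling factors, any feature whose shuffled-error increase is near zero could be dropped; the finding that none can be is what makes the multi-factor result notable.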

    “This is great,” says Rishi Raj, an associate professor at the Indian Institute of Technology at Patna, who was not involved in the work. “Boiling has such complicated physics.” It involves at least two phases of matter, and many factors contributing to a chaotic system. “It’s been almost impossible, despite at least 50 years of extensive research on this topic, to develop a predictive model,” Raj says. “It makes a lot of sense to use the new tools of machine learning.”

    Researchers have debated the mechanisms behind the boiling crisis. Does it result solely from phenomena at the heating surface, or also from distant fluid dynamics? This work suggests surface phenomena are enough to forecast the event.

    Predicting proximity to the boiling crisis doesn’t only increase safety. It also improves efficiency. By monitoring conditions in real time, a system could push chips or reactors to their limits without throttling them or building unnecessary cooling hardware. It’s like a Ferrari on a track, Bucci says: “You want to unleash the power of the engine.”

    In the meantime, Bucci hopes to integrate his diagnostic system into a feedback loop that can control heat transfer, thus automating future experiments, allowing the system to test hypotheses and collect new data. “The idea is really to push the button and come back to the lab once the experiment is finished.” Is he worried about losing his job to a machine? “We’ll just spend more time thinking, not doing operations that can be automated,” he says. In any case: “It’s about raising the bar. It’s not about losing the job.”


    Designing exploratory robots that collect data for marine scientists

    As the Chemistry-Kayak (affectionately known as the ChemYak) swept over the Arctic estuary waters, Victoria Preston was glued to a monitor in a boat nearby, watching as the robot’s sensors captured new data. She and her team had spent weeks preparing for this deployment. With only a week to work on-site, they were making use of the long summer days to collect thousands of observations of a hypothesized chemical anomaly associated with the annual ice-cover retreat.

    The robot moved up and down the stream, using its chemical sensors to detect the composition of the flowing water. Its many measurements revealed a short-lived but massive influx of greenhouse gases in the water during the annual “flushing” of the estuary as ice thawed and receded. For Preston, the experiment’s success was a heartening affirmation of how robotic platforms can be leveraged to help scientists understand the environment in fundamentally new ways.

    Growing up near the Chesapeake Bay in Maryland, Preston learned about the importance of environmental conservation from a young age. She became passionate about how next-generation technologies could be used as tools to make a difference. In 2016, Preston completed her BS in robotics engineering from Olin College of Engineering.

    “My first research project involved creating a drone that could take noninvasive blow samples from exhaling whales,” Preston says. “Some of our work required us to do automatic detection, which would allow the drone to find the blowhole and track it. Overall, it was a great introduction on how to apply fundamental robotics concepts to the real world.”

    Preston’s undergraduate research inspired her to apply for a Fulbright award, which enabled her to work at the Center for Biorobotics in Tallinn, Estonia, for nine months. There, she worked on a variety of robotics projects, such as training a robotic vehicle to map an enclosed underwater space. “I really enjoyed the experience, and it helped shape the research interests I hold today. It also confirmed that grad school was the right next step for me and the work I wanted to do,” she says.

    Uncovering geochemical hotspots

    After her Fulbright ended, Preston began her PhD in aeronautics and astronautics and applied ocean physics and engineering through a joint program between MIT and the Woods Hole Oceanographic Institution. Her co-advisors, Anna Michel and Nicholas Roy, have helped her pursue both theoretical and experimental questions. “I really wanted to have an advisor relationship with a scientist,” she says. “It was a high priority to me to make sure my work would always be a bridge between science and engineering objectives.”

    “Overall, I see robots as a tool for scientists. They take knowledge, explore, bring back datasets. Then scientists do the actual hard work of extracting meaningful information to solve these hard problems,” says Preston.

    The first two years of her research focused on how to deploy robots in environments and process their collected data. She developed algorithms that could allow the robot to move on its own. “My goal was to figure out how to exploit our knowledge of the world and use it to plan optimal sampling trajectories,” says Preston. “This would allow robots to independently navigate to sample in regions of high interest to scientists.”  

    Improving sampling trajectories becomes a major advantage when researchers are working under limited time or budget constraints. Preston was able to deploy her robot in Massachusetts’ Wareham River to detect dissolved methane and other greenhouse gases, byproducts of wastewater treatment and natural processes. “Imagine you have a ground seepage of radiation you’re trying to characterize. As the robot moves around, it might get ‘wafts’ of the radiation,” she says.

    “Our algorithm would update to give the robot a new estimate of where the leak might be. The robot responds by moving to that location, collecting more samples and potentially discovering the biggest hotspot or cause for the leak. It also builds a model we can interpret along the way.” This method is a major advance in efficient sampling for the marine geochemical sciences, since historical strategies relied on collecting random bottle samples for later analysis in the lab.
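
    The team's planner is far more sophisticated, but the core loop, sample, update an estimate of the field, and move toward the most promising location, can be sketched with a greedy sampler on a hypothetical plume (the source location and decay rate below are made up):

```python
import numpy as np

SOURCE = np.array([7.0, 3.0])   # hypothetical leak location (unknown to the robot)

def field(p):
    """Hypothetical plume: concentration decays smoothly away from the leak."""
    return np.exp(-0.3 * np.sum((p - SOURCE) ** 2))

def adaptive_sampling(start, steps=40, step_size=1.0):
    """Greedy adaptive sampler: probe one step in each of 8 headings, then
    move toward the strongest reading, logging every sample on the way."""
    pos = np.array(start, dtype=float)
    samples = [(pos.copy(), field(pos))]
    headings = [np.array([np.cos(t), np.sin(t)])
                for t in np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)]
    for _ in range(steps):
        candidates = [pos + step_size * h for h in headings]
        readings = [field(c) for c in candidates]
        pos = candidates[int(np.argmax(readings))]     # follow the "waft"
        samples.append((pos.copy(), field(pos)))
    return pos, samples

pos, samples = adaptive_sampling(start=(0.0, 0.0))
```

    Unlike grabbing bottle samples at fixed stations, the trajectory concentrates measurements where readings are strongest, which is the efficiency gain the passage describes.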

    Adapting to real-world requirements

    In the next phase of her work, Preston has been incorporating an important component — time. This will improve explorations that last over several days. “My previous work made this strong assumption that the robot goes in and by the time it’s done, nothing’s different about the environment. In reality this isn’t true, especially for a moving river,” she says. “We’re now trying to figure out how to better model how a space changes over time.”

    This fall, Preston will be traveling on the Scripps Institution of Oceanography research vessel Roger Revelle to the Guaymas Basin in the Gulf of California. The research team will be releasing remotely operated and autonomous underwater robots near the bottom of the basin to investigate how hydrothermal plumes move in the water column. Working closely with engineers from the National Deep Submergence Facility, and in collaboration with her advisors and research colleagues at MIT, Preston will be on board, directing the deployment of the devices.

    “I’m looking forward to demonstrating how our algorithmic developments work in practice. It’s also thrilling to be part of a huge, diverse group that’s willing to try this,” she says.

    Preston is just finishing her fourth year of research, and is starting to look toward the future after her PhD. She plans to continue studying marine and other climate-impacted environments. She is driven by the plethora of unexplored questions about the ocean and hopes to use her knowledge to begin scratching its surface. She’s drawn to the field of computational sustainability, she says, which is based on “the idea that machine learning, artificial intelligence, and similar tools can and should be applied to solve some of our most pressing challenges, and that these challenges will in turn change how we think about our tools.”

    “This is a really exciting time to be a roboticist who also cares about the environment — and to be a scientist who has access to new tools for research. Maybe I’m a little overly optimistic, but I believe we’re at a pivotal moment for exploration.”