More stories

  •

    James DiCarlo named director of the MIT Quest for Intelligence

    James DiCarlo, the Peter de Florez Professor of Neuroscience, has been appointed to the role of director of the MIT Quest for Intelligence. MIT Quest was launched in 2018 to discover the basis of natural intelligence, create new foundations for machine intelligence, and deliver new tools and technologies for humanity.
    As director, DiCarlo will forge new collaborations with researchers within MIT and beyond to accelerate progress in understanding intelligence and developing the next generation of intelligence tools.
    “We have discovered and developed surprising new connections between natural and artificial intelligence,” says DiCarlo, currently head of the Department of Brain and Cognitive Sciences (BCS). “The scientific understanding of natural intelligence, and advances in building artificial intelligence with positive real-world impact, are interlocked aspects of a unified, collaborative grand challenge, and MIT must continue to lead the way.” 
    Aude Oliva, senior research scientist at the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the MIT director of the MIT-IBM Watson AI Lab, will lead industry engagements as director of MIT Quest Corporate. Nicholas Roy, professor of aeronautics and astronautics and a member of CSAIL, will lead the development of systems to deliver on the mission as director of MIT Quest Systems Engineering. Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing, will serve as chair of MIT Quest.
    “The MIT Quest’s leadership team has positioned this initiative to spearhead our understanding of natural and artificial intelligence, and I am delighted that Jim is taking on this role,” says Huttenlocher, the Henry Ellis Warren (1894) Professor of Electrical Engineering and Computer Science.
    DiCarlo will step down from his current role as head of BCS, a position he has held for nearly nine years, and will continue as faculty in BCS and as an investigator in the McGovern Institute for Brain Research.
    “Jim has been a highly productive leader for his department, the School of Science, and the Institute at large. I’m excited to see the impact he will make in this new role,” says Nergis Mavalvala, dean of the School of Science and the Curtis and Kathleen Marble Professor of Astrophysics.
    As department head, DiCarlo oversaw significant progress in the department’s scientific and educational endeavors. Roughly a quarter of current BCS faculty were hired on his watch, strengthening the department’s foundations in cognitive, systems, and cellular and molecular brain science. In addition, DiCarlo developed a new departmental emphasis in computation, deepening BCS’s ties with the MIT Schwarzman College of Computing and other MIT units such as the Center for Brains, Minds and Machines. He also developed and leads an NIH-funded graduate training program in computationally enabled integrative neuroscience. As a result, BCS is one of the few departments in the world that is attempting to decipher, in engineering terms, how the human mind emerges from the biological components of the brain.
    To prepare students for this future, DiCarlo collaborated with BCS Associate Department Head Michale Fee to design and execute a total overhaul of the Course 9 curriculum. In addition, partnering with the Department of Electrical Engineering and Computer Science, BCS developed a new major, Course 6-9 (Computation and Cognition), to meet the rapidly growing interest in this interdisciplinary topic. In only its second year, Course 6-9 already has more than 100 undergraduate majors.
    DiCarlo has also worked tirelessly to build a more open, connected, and supportive culture across the entire BCS community in Building 46. In this work, as in everything, DiCarlo sought to bring people together to address challenges collaboratively. He attributes progress to strong partnerships with Li-Huei Tsai, the Picower Professor of Neuroscience in BCS and director of the Picower Institute for Learning and Memory, and with Robert Desimone, the Doris and Don Berkey Professor in BCS and director of the McGovern Institute for Brain Research, as well as to the work of dozens of faculty and staff. For example, in collaboration with associate department head Professor Rebecca Saxe, the department has focused on faculty mentorship of graduate students, and, in collaboration with postdoc officer Professor Mark Bear, the department developed postdoc salary and benefit standards. Both initiatives have become models for the Institute. In recent months, DiCarlo partnered with new associate department head Professor Laura Schulz to constructively focus renewed energy and resources on initiatives to address systemic racism and promote diversity, equity, inclusion, and social justice.
    “Looking ahead, I share Jim’s vision for the research and educational programs of the department, and for enhancing its cohesiveness as a community, especially with regard to issues of diversity, equity, inclusion, and justice,” says Mavalvala. “I am deeply committed to supporting his successor in furthering these goals while maintaining the great intellectual strength of BCS.”
    In his own research, DiCarlo uses a combination of large-scale neurophysiology, brain imaging, optogenetic methods, and high-throughput computational simulations to understand the neuronal mechanisms and cortical computations that underlie human visual intelligence. Working in animal models, he and his research collaborators have established precise connections between the internal workings of the visual system and the internal workings of particular computer vision systems. And they have demonstrated that these science-to-engineering connections lead to new ways to modulate neurons deep in the brain as well as to improved machine vision systems. His lab’s goals are to help develop more human-like machine vision, new neural prosthetics to restore or augment lost senses, new learning strategies, and an understanding of how visual cognition is impaired in agnosia, autism, and dyslexia. 
    DiCarlo earned both a PhD in biomedical engineering and an MD from The Johns Hopkins University in 1998, and completed his postdoc training in primate visual neurophysiology at Baylor College of Medicine. He joined the MIT faculty in 2002.
    A search committee will convene early this year to recommend candidates for the next department head of BCS. DiCarlo will continue to lead the department until that new head is selected.

  •

    Model analyzes how viruses escape the immune system

    One reason it’s so difficult to produce effective vaccines against some viruses, including influenza and HIV, is that these viruses mutate very rapidly. This allows them to evade the antibodies generated by a particular vaccine, through a process known as “viral escape.”
    MIT researchers have now devised a new way to computationally model viral escape, based on models that were originally developed to analyze language. The model can predict which sections of viral surface proteins are more likely to mutate in a way that enables viral escape, and it can also identify sections that are less likely to mutate, making them good targets for new vaccines.
    “Viral escape is a big problem,” says Bonnie Berger, the Simons Professor of Mathematics and head of the Computation and Biology group in MIT’s Computer Science and Artificial Intelligence Laboratory. “Viral escape of the surface protein of influenza and the envelope surface protein of HIV are both highly responsible for the fact that we don’t have a universal flu vaccine, nor do we have a vaccine for HIV, both of which cause hundreds of thousands of deaths a year.”
    In a study appearing today in Science, Berger and her colleagues identified possible targets for vaccines against influenza, HIV, and SARS-CoV-2. Since that paper was accepted for publication, the researchers have also applied their model to the new variants of SARS-CoV-2 that recently emerged in the United Kingdom and South Africa. That analysis, which has not yet been peer-reviewed, flagged viral genetic sequences that should be further investigated for their potential to escape the existing vaccines, the researchers say.
    Berger and Bryan Bryson, an assistant professor of biological engineering at MIT and a member of the Ragon Institute of MGH, MIT, and Harvard, are the senior authors of the paper, and the lead author is MIT graduate student Brian Hie.
    The language of proteins
    Different types of viruses acquire genetic mutations at different rates, and HIV and influenza are among those that mutate the fastest. For these mutations to promote viral escape, they must help the virus change the shape of its surface proteins so that antibodies can no longer bind to them. However, the protein can’t change in a way that makes it nonfunctional. 
    The MIT team decided to model these criteria using a type of computational model known as a language model, from the field of natural language processing (NLP). These models were originally designed to analyze patterns in language, specifically the frequency with which certain words occur together. The models can then predict which words could be used to complete a sentence such as “Sally ate eggs for …” The chosen word must be both grammatically correct and have the right meaning. In this example, an NLP model might predict “breakfast” or “lunch.”
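    This fill-in-the-blank behavior is easy to reproduce with an off-the-shelf model. The sketch below queries a pretrained, general-purpose English masked language model through the Hugging Face transformers library (an assumed dependency, and not the model used in the study):

        from transformers import pipeline

        # Load a general-purpose English masked language model.
        fill = pipeline("fill-mask", model="bert-base-uncased")

        # Ask the model to complete the example sentence.
        for guess in fill("Sally ate eggs for [MASK].")[:3]:
            print(guess["token_str"], round(guess["score"], 3))

        # Completions like "breakfast" score high because they are both
        # grammatically correct and semantically sensible in context.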
    The researchers’ key insight was that this kind of model could also be applied to biological information such as genetic sequences. In that case, grammar is analogous to the rules that determine whether the protein encoded by a particular sequence is functional or not, and semantic meaning is analogous to whether the protein can take on a new shape that helps it evade antibodies. Therefore, a mutation that enables viral escape must maintain the grammaticality of the sequence but change the protein’s structure in a useful way.
    “If a virus wants to escape the human immune system, it doesn’t want to mutate itself so that it dies or can’t replicate,” Hie says. “It wants to preserve fitness but disguise itself enough so that it’s undetectable by the human immune system.”
    To model this process, the researchers trained an NLP model to analyze patterns found in genetic sequences, which allows it to predict new sequences that have new functions but still follow the biological rules of protein structure. One significant advantage of this kind of modeling is that it requires only sequence information, which is much easier to obtain than protein structures. The model can be trained on a relatively small amount of information — in this study, the researchers used 60,000 HIV sequences, 45,000 influenza sequences, and 4,000 coronavirus sequences.
    “Language models are very powerful because they can learn this complex distributional structure and gain some insight into function just from sequence variation,” Hie says. “We have this big corpus of viral sequence data for each amino acid position, and the model learns these properties of amino acid co-occurrence and co-variation across the training data.”
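    To make the method concrete, here is a minimal sketch of how a trained model of this kind could be used to rank candidate escape mutations. The two-method model interface (log_likelihood for grammaticality, embed for semantics) and the rank-sum scoring are illustrative assumptions, and the stand-in model below is untrained and random so the script simply runs end to end:

        import numpy as np

        AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
        rng = np.random.default_rng(0)

        class ToyLanguageModel:
            """Stand-in for a language model trained on viral protein sequences."""
            def log_likelihood(self, seq):
                # "Grammaticality": how plausible (i.e., functional) the sequence looks.
                return float(rng.normal())
            def embed(self, seq):
                # "Semantics": a fixed-length vector summarizing the sequence.
                return rng.normal(size=32)

        def rank_escape_candidates(model, wild_type):
            # Score every single-residue mutant on both axes, then rank them.
            wt_embedding = model.embed(wild_type)
            candidates, grammaticality, semantic_change = [], [], []
            for pos in range(len(wild_type)):
                for aa in AMINO_ACIDS:
                    if aa == wild_type[pos]:
                        continue
                    mutant = wild_type[:pos] + aa + wild_type[pos + 1:]
                    candidates.append((pos, aa))
                    grammaticality.append(model.log_likelihood(mutant))  # must stay functional
                    semantic_change.append(
                        np.linalg.norm(model.embed(mutant) - wt_embedding))  # must look different to antibodies
            # A mutation high on BOTH axes is a plausible escape; combine via rank sums.
            g, s = np.array(grammaticality), np.array(semantic_change)
            combined = g.argsort().argsort() + s.argsort().argsort()
            return [(candidates[i], g[i], s[i]) for i in np.argsort(-combined)]

        top_five = rank_escape_candidates(ToyLanguageModel(), "MKTIIALSYIFCLVFA")[:5]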
    Blocking escape
    Once the model was trained, the researchers used it to predict sequences of the coronavirus spike protein, HIV envelope protein, and influenza hemagglutinin (HA) protein that would be more or less likely to generate escape mutations.
    For influenza, the model revealed that the sequences least likely to mutate and produce viral escape were in the stalk of the HA protein. This is consistent with recent studies showing that antibodies that target the HA stalk (which most people infected with the flu or vaccinated against it do not develop) can offer near-universal protection against any flu strain.
    The model’s analysis of coronaviruses suggested that a part of the spike protein called the S2 subunit is least likely to generate escape mutations. It is still unclear how rapidly the SARS-CoV-2 virus mutates, so it is unknown how long the vaccines now being deployed to combat the Covid-19 pandemic will remain effective. Initial evidence suggests that the virus does not mutate as rapidly as influenza or HIV. However, the researchers recently identified new mutations that have appeared in Singapore, South Africa, and Malaysia that they believe should be investigated for potential viral escape (these new data are not yet peer-reviewed).
    In their studies of HIV, the researchers found that the V1-V2 hypervariable region of the protein has many possible escape mutations, which is consistent with previous findings, and they also found sequences that would have a lower probability of escape.
    The researchers are now working with others to use their model to identify possible targets for cancer vaccines that stimulate the body’s own immune system to destroy tumors. They say it could also be used to design small-molecule drugs that might be less likely to provoke resistance, for diseases such as tuberculosis.
    “There are so many opportunities, and the beautiful thing is all we need is sequence data, which is easy to produce,” Bryson says.
    The research was funded by a National Defense Science and Engineering Graduate Fellowship from the Department of Defense and a National Science Foundation Graduate Research Fellowship.

  •

    Professor Antonio Torralba elected 2021 AAAI Fellow

    Antonio Torralba, faculty head of Artificial Intelligence and Decision Making within the Department of Electrical Engineering and Computer Science (EECS) and the Thomas and Gerd Perkins Professor of Electrical Engineering and Computer Science, has been selected as a 2021 Fellow by the Association for the Advancement of Artificial Intelligence (AAAI). AAAI Fellows are selected in recognition of their significant and extended contributions to the field (contributions which typically span a decade or more), including technical results, publications, patent awards, and contributions to group efforts.
    Torralba received a degree in telecommunications engineering from Telecom BCN in Spain in 1994 and a PhD in signal, image, and speech processing from the Institut National Polytechnique de Grenoble, France, in 2000. From 2000 to 2005, he completed postdoctoral training at both the Department of Brain and Cognitive Sciences and the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT. He was the MIT director of the MIT-IBM Watson AI Lab from 2017 to 2020, and the inaugural director of the MIT Quest for Intelligence from 2018 to 2020. He is currently a member of both CSAIL and the Center for Brains, Minds and Machines.
    Torralba’s research primarily focuses on computer vision, machine learning, and the challenge of building computer systems that mimic human visual perception. He is also interested in neural networks, common-sense reasoning, computational photography, image databases, the intersections between visual art and computation, and the development of systems that can perceive the world through multiple senses (including audition and touch).
    The author or co-author of over 300 papers, Torralba has been cited over 71,000 times on Google Scholar. He is an associate editor of the International Journal of Computer Vision, and served as program chair for the Computer Vision and Pattern Recognition conference in 2015. He has received the 2008 National Science Foundation CAREER Award, the Best Student Paper Award at the IEEE Conference on Computer Vision and Pattern Recognition in 2009, and the 2010 J. K. Aggarwal Prize from the International Association for Pattern Recognition. In 2017, he received the Frank Quick Faculty Research Innovation Fellowship and the Louis D. Smullin (’39) Award for Teaching Excellence. Earlier in 2020, he received the PAMI Mark Everingham Prize.

  •

    MIT.nano’s Immersion Lab opens for researchers and students

    The MIT.nano Immersion Lab, MIT’s first open-access facility for augmented and virtual reality (AR/VR) and interacting with data, is now open and available to MIT students, faculty, researchers, and external users.
    The powerful set of capabilities is located on the third floor of MIT.nano in a two-story space resembling a black-box theater. The Immersion Lab contains embedded systems and individual equipment and platforms, as well as data capacity to support new modes of teaching and applications such as creating and experiencing immersive environments, human motion capture, 3D scanning for digital assets, 360-degree modeling of spaces, interactive computation and visualization, and interfacing of physical and digital worlds in real time.
    “Give the MIT community a unique set of tools and their relentless curiosity and penchant for experimentation is bound to create striking new paradigms and open new intellectual vistas. They will probably also invent new tools along the way,” says Vladimir Bulović, the founding faculty director of MIT.nano and the Fariborz Maseeh Chair in Emerging Technology. “We are excited to see what happens when students, faculty, and researchers from different disciplines start to connect and collaborate in the Immersion Lab — activating its virtual realms.”
    A major focus of the lab is to support data exploration, allowing scientists and engineers to analyze and visualize their research at the human scale with large, multidimensional views, enabling visual, haptic, and aural representations. “The facility offers a new and much-needed laboratory to individuals and programs grappling with how to wield, shape, present, and interact with data in innovative ways,” says Brian W. Anthony, the associate director of MIT.nano and faculty lead for the Immersion Lab.
    Massive data is one output of MIT.nano, as the workflow of a typical scientific measurement system within the facility requires iterative acquisition, visualization, interpretation, and data analysis. The Immersion Lab will accelerate the data-centric work not only of MIT.nano researchers, but also of others who step into its space, driven by their pursuits of science, engineering, art, entertainment, and education.
    Tools and capabilities
    The Immersion Lab not only assembles a variety of advanced hardware and software tools, but is also an instrument in and of itself, says Anthony. The two-story cube, measuring approximately 28 feet on each side, is outfitted with an embedded OptiTrack system that enables precise motion capture via real-time active or passive 3D tracking of objects, as well as full-body motion analysis with the associated software.
    Complementing the built-in systems are stand-alone instruments that study the data, analyze and model the physical world, and generate new, immersive content, including:
    a Matterport Pro2 photogrammetric camera to generate 3D, geographically and dimensionally accurate reconstructions of spaces (Matterport can also be used for augmented reality creation and tagging, virtual reality walkthroughs, and 3D models of the built environment);
    a Lenscloud system that uses 126 cameras and custom software to produce high-volume, 360-degree photogrammetric scans of human bodies or human-scale objects;
    software and hardware tools for content generation and editing, such as 360-degree cameras, 3D animation software, and green screens;
    backpack computers and VR headsets to allow researchers to test and interact with their digital assets in virtual spaces, untethered from a stationary desktop computer; and
    hardware and software to visualize complex and multidimensional datasets, including HP Z8 data science workstations and Dell Alienware gaming workstations.
    Like MIT.nano’s fabrication and characterization facilities, the Immersion Lab is open to researchers from any department, lab, and center at MIT. Expert research staff are available to assist users.
    Support for research, courses, and seminars
    Anthony says the Immersion Lab is already supporting cross-disciplinary research at MIT, working with multiple MIT groups for diverse uses — quantitative geometry measurements of physical prototypes for advanced manufacturing, motion analysis of humans for health and wellness uses, creation of animated characters for arts and theater production, virtual tours of physical spaces, and visualization of fluid and heat flow for architectural design, to name a few.
    The MIT.nano Immersion Lab Gaming Program is a four-year research collaboration between MIT.nano and video game development company NCSOFT that seeks to chart the future of how people interact with the world and each other via hardware and software innovations in gaming technologies. In the program’s first two calls for proposals in 2019 and 2020, 12 projects from five different departments were awarded $1.5 million in combined research funding. The collaborative proposal selection process by MIT.nano and NCSOFT ensures that the awarded projects are developing industrially impactful advancements, and that MIT researchers are exposed to technical practitioners at NCSOFT.
    The Immersion Lab also partners with the Clinical Research Center (CRC) at the MIT Institute for Medical Engineering and Science to generate a human-centric environment in which to study health and wellness. Through this partnership, the CRC has provided sensors, equipment, and expertise to capture physiological measurements of a human body while immersed in the physical or virtual realm of the Immersion Lab.
    Undergraduate students can use the Immersion Lab through sponsored Undergraduate Research Opportunities Program (UROP) projects. Recent UROP work includes jumping as a new form of locomotion in virtual reality and analyzing human muscle lines using motion capture software. Starting with MIT’s 2021 Independent Activities Period, the Immersion Lab will also offer workshops, short courses, and for-credit classes in the MIT curriculum.
    Members of the MIT community and general public can learn more about the various application areas supported by the Immersion Lab through a new seminar series, Immersed, beginning in February. This monthly event will feature talks by experts in the fields of current work, highlighting future goals to be pursued with the immersive technologies. Slated topical areas include motion in sports, uses for photogrammetry, rehabilitation and prosthetics, and music/performing arts.
    New ways of teaching and learning
    Virtual reality makes it possible for instructors to bring students to environments that are hard to access, either geographically or at scale. New modalities for introducing the language of gaming into education allow students to discover concepts for themselves.
    As a recent example, William Oliver, associate professor in electrical engineering and computer science, is developing Qubit Arcade to teach core principles of quantum computing via a virtual reality demonstration. Users can create Bloch spheres, control qubit states, measure results, and compose quantum circuits in an intuitive 3D representation with virtualized quantum gates.
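    The mathematics Qubit Arcade visualizes is compact enough to sketch. Below is a minimal illustration of the underlying linear algebra (not the lab’s software): a qubit state, a Hadamard gate, and the corresponding point on the Bloch sphere:

        import numpy as np

        # Pauli matrices; the Bloch vector of a state |psi> is (<X>, <Y>, <Z>).
        X = np.array([[0, 1], [1, 0]], dtype=complex)
        Y = np.array([[0, -1j], [1j, 0]])
        Z = np.array([[1, 0], [0, -1]], dtype=complex)
        H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard gate

        def bloch_vector(psi):
            # Expectation value of each Pauli operator in the state psi.
            return [float(np.real(np.conj(psi) @ (P @ psi))) for P in (X, Y, Z)]

        ket0 = np.array([1, 0], dtype=complex)  # |0>: north pole, (0, 0, 1)
        plus = H @ ket0                         # H|0> = |+>: on the equator, (1, 0, 0)
        print(bloch_vector(ket0), bloch_vector(plus))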
    IMES Director Elazer Edelman, the Edward J. Poitras Professor in Medical Engineering and Science, is using the Immersion Lab as a teaching tool for interacting with 3D models of the heart. With the 3D and 4D visualization tools of the Lab, Edelman and his students can see in detail the evolution of congenital heart failure models, something his students could previously only study if they happened upon a case in a cadaver.
    “Software engineers understand how to implement concepts in a digital environment. Artists understand how light interacts with materials and how to draw the eye to a particular feature through contrast and composition. Musicians and composers understand how the human ear responds to sound. Dancers and animators understand human motion. Teachers know how to explain concepts and challenge their students. Hardware engineers know how to manipulate materials and matter to build new physical functionality. All of these fields have something to contribute to the problems we are tackling in the Immersion Lab,” says Anthony.
    A faculty advisory board has been established to help the MIT.nano Immersion Lab identify opportunities enabled by the current tools and those that should be explored with additional software and hardware capabilities. The lab’s advisory board currently comprises seven MIT faculty from six departments. Such broad faculty engagement ensures that the Immersion Lab engages in projects across many disciplines and launches new directions of cross-disciplinary discoveries.
    Visit nanousers.mit.edu/immersion-lab to learn more.

  •

    Delivering life-saving oxygen during a pandemic

    At the peak of the Covid-19 outbreak in Italy last spring, doctors and health care professionals were faced with harrowing decisions. Hospitals were running out of ventilators, forcing doctors to choose which patients had the best chance of survival, and which didn’t.
    “It was a very difficult time for Italy,” recalls Daniele Vivona, a mechanical engineering graduate student from Italy. In early March, Vivona and a team of researchers at MIT’s Electrochemical Energy Lab (EEL) started to devise a plan to develop an oxygen concentrator that might one day help hospitals, like those in Italy, deliver oxygen to patients who so desperately need it.
    “Traditionally, our lab uses electrons to break molecules that generate energy carriers,” explains Yang Shao-Horn, professor of mechanical engineering and EEL’s director. “We wanted to figure out how to take our expertise in electrochemistry and use it to create a device to make an oxygen concentrator that can be delivered to patients.”
    Shao-Horn’s team is one of several groups that have been developing technologies to help hospitals around the world provide life-saving oxygen to patients with Covid-19 and other respiratory illnesses.
    A low-cost portable oxygen concentrator
    As a starting point, Shao-Horn and her team at EEL reached out to Boston-area doctors, as well as doctors in Italy and South Korea, to better understand their needs. They set out to make a low-cost and portable oxygen concentrator to improve clinical management in hospitals that were overwhelmed with Covid-19 patients, in addition to providing solutions that could be adopted in places with limited infrastructure, such as field hospitals or developing countries.
    The resulting device resembles a typical electrochemical cell. Water and air are pumped through the cell, where a cathode supplies electrons. The water is passed through a catalytic H2O2 membrane that helps separate oxygen from the air before the stream is oxidized at the anode. After passing through an oxygen compressor, pure oxygen then flows to an oxygen tank, where it can be readily delivered and used to treat patients.
    Postdoc C. John Eom has been leading efforts to improve the cathode and anode in the device, while fellow postdoc Yunguang Zhu has been focusing on the chemistry involved in the H2O2 membrane.
    As the team continues to work on the concentrator, they are looking into various ways it can help doctors save lives — including while transporting patients from the intensive care unit (ICU) to the operating room.
    “We’re hoping to have something portable enough that patients could potentially use the device at home and we provide doctors with more options to address diverse situations that require the delivery of oxygen to patients,” says Eom.
    Open-source ventilator designs
    Stories of Italian hospitals running out of ventilators were the impetus for another MIT-led project known as the MIT Emergency Ventilator Team. “This project started around the time of news reports from Italy describing ventilators being rationed due to shortages, and available data at that time suggested about 10 percent of Covid patients would require an ICU,” alum Alexander Slocum Jr. ’08, SM ’10, PhD ’13, said in April.
    Slocum Jr. worked with his father Alexander Slocum Sr., the Walter M. May and A. Hazel May Professor of Mechanical Engineering, as well as research scientist Nevan Hanumara SM ’06, PhD ’12, and together they developed a plan to release an open-source design that companies worldwide could then use to manufacture low-cost ventilators for emergency use.
    “We realized that as researchers, our best role would be supporting other people who had more capabilities to execute and produce ventilators than we did,” recalls Hanumara. “So, we focused heavily on developing the base requirements for safe low-cost ventilation and, following from this, a reference hardware and software design.”
    The team grew to include MIT graduate students and alumni, including a trio from Professor Daniela Rus’s group in MIT’s Computer Science and Artificial Intelligence Laboratory. They used a design developed in the mechanical engineering class 2.75 (Medical Device Design) back in 2010 as a starting point. Graduate student Kimberly Jung, a West Point graduate who has served in the U.S. Army, acted as the “executive officer,” holding the team together.
    With insights gathered from the clinical community, they developed multiple prototype iterations, wrote code, and conducted animal studies. As the work progressed, it was posted to an open-source site. Within a few months, over 24,000 people had registered to gain access to the site.
    “Since March, there has been a tremendous and humbling international response to our work,” adds Hanumara. The team has refocused to help groups around the world refine the designs and deploy ventilators. From an all-girl robotics team in Afghanistan to groups in New York, Ireland, India, Chile, and Morocco, Hanumara and the MIT Emergency Ventilator team have been helping others develop solutions that fit their own country’s needs.
    Splitting ventilators to treat multiple patients
    Giovanni Traverso, the Karl Van Tassel (1925) Career Development Professor of Mechanical Engineering, together with collaborators from Brigham and Women’s Hospital, Massachusetts General Hospital, and Philips, took a different approach in trying to solve the ventilator shortage. Their effort, led by postdoc Shriya Srinivasan PhD ’20, developed a method to split a ventilator so it can treat two, or potentially more, patients at a time instead of one. Their approach is meant only as a last resort when there aren’t enough ventilators to meet the need.
    “We saw this as an opportunity where we might be able to help hospitals facing ventilator shortages due to Covid-19,” says Traverso. “While other teams were developing new ventilators, our approach was to address situations where people can’t make their own ventilators or augment the capacity of all ventilators. We wanted to help inform how they could amplify their current capacity further.”
    Splitting ventilators between two patients presents a host of logistical issues, including matching flow rates and delivering the same amount of oxygen to two patients who may have different needs. “Previous designs haven’t provided the ability to customize the treatment to each patient, who will invariably present with variable needs,” says Srinivasan. “Our approach focused closely on this aspect and enabled the customization of volume and pressure for each patient.”
    “We knew splitting ventilators was a major challenge, so we aimed to understand what the challenges were and address them to make it feasible to treat multiple patients using one ventilator,” adds Traverso.
    To tackle these challenges, Srinivasan and Traverso added two flow valves to the split ventilator. Health-care professionals can use these valves to tailor the flow of oxygen to each individual patient. The team also added new safety measures, including pressure release valves and alarms, to make sure patients don’t receive too much or too little oxygen as their condition changes.
    The research team was able to successfully test their new method with the help of an artificial lung and through simultaneous ventilation of two pigs. As with the MIT Emergency Ventilator Team, the team is working with international groups to bring the split ventilator technology to countries that need additional infrastructure to treat patients with respiratory diseases like Covid-19. Their research was published in Science Translational Medicine and the team started a nonprofit, Project Prana, to help support the dissemination of the work.
    “We’re also working with large health care systems and startups in India, Bangladesh, and Venezuela to bring the system to the rural towns that have run out of ventilators and cannot afford emergency ventilators,” Srinivasan says.
    While the methods were different, these three research teams share a central purpose: to provide oxygen to those whose lives depend on it. Whether it’s through electrochemical reactions, open-source ventilator designs, or splitting ventilators, this research could help hospitals weather further spikes in Covid-19 cases and put solutions in place in the event of future pandemics.

  •

    Want cheaper nuclear energy? Turn the design process into a game

    Nuclear energy provides more carbon-free electricity in the United States than solar and wind combined, making it a key player in the fight against climate change. But the U.S. nuclear fleet is aging, and operators are under pressure to streamline their operations to compete with coal- and gas-fired plants.
    One of the key places to cut costs is deep in the reactor core, where energy is produced. If the fuel rods that drive reactions there are ideally placed, they burn less fuel and require less maintenance. Through decades of trial and error, nuclear engineers have learned to design better layouts to extend the life of pricey fuel rods. Now, artificial intelligence is poised to give them a boost.
    Researchers at MIT and Exelon show that by turning the design process into a game, an AI system can be trained to generate dozens of optimal configurations that can make each rod last about 5 percent longer, saving a typical power plant an estimated $3 million a year. The AI system can also find optimal solutions faster than a human, and it can quickly modify designs in a safe, simulated environment. The results appear this month in the journal Nuclear Engineering and Design.
    “This technology can be applied to any nuclear reactor in the world,” says the study’s senior author, Koroush Shirvan, an assistant professor in MIT’s Department of Nuclear Science and Engineering. “By improving the economics of nuclear energy, which supplies 20 percent of the electricity generated in the U.S., we can help limit the growth of global carbon emissions and attract the best young talents to this important clean-energy sector.”
    In a typical reactor, fuel rods are lined up on a grid, or assembly, by their levels of uranium and gadolinium oxide within, like chess pieces on a board, with radioactive uranium driving reactions, and rare-earth gadolinium slowing them down. In an ideal layout, these competing impulses balance out to drive efficient reactions. Engineers have tried using traditional algorithms to improve on human-devised layouts, but in a standard 100-rod assembly there might be an astronomical number of options to evaluate. So far, they’ve had limited success.
    The researchers wondered if deep reinforcement learning, an AI technique that has achieved superhuman mastery at games like chess and Go, could make the screening process go faster. Deep reinforcement learning combines deep neural networks, which excel at picking out patterns in reams of data, with reinforcement learning, which ties learning to a reward signal like winning a game, as in Go, or reaching a high score, as in Super Mario Bros.
    Here, the researchers trained their agent to position the fuel rods under a set of constraints, earning more points with each favorable move. Each constraint, or rule, picked by the researchers reflects decades of expert knowledge rooted in the laws of physics. The agent might score points, for example, by positioning low-uranium rods on the edges of the assembly, to slow reactions there; by spreading out the gadolinium “poison” rods to maintain consistent burn levels; and by limiting the number of poison rods to between 16 and 18.
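    A toy version of such a scoring rule is sketched below; the 10-by-10 grid encoding, the rule weights, and the thresholds are illustrative assumptions, not the study’s actual reward function:

        import numpy as np

        def layout_reward(enrichment, poison, low_u=2.0):
            # enrichment: (10, 10) array of uranium enrichment levels (percent).
            # poison: (10, 10) boolean mask marking gadolinium "poison" rods.
            reward = 0.0
            # Rule 1: low-uranium rods belong on the edges of the assembly.
            edge = np.zeros_like(poison)
            edge[0, :] = edge[-1, :] = edge[:, 0] = edge[:, -1] = True
            reward += np.sum((enrichment < low_u) & edge)
            # Rule 2: spread poison rods out (penalize side-by-side pairs).
            reward -= np.sum(poison[:, :-1] & poison[:, 1:])  # horizontal neighbors
            reward -= np.sum(poison[:-1, :] & poison[1:, :])  # vertical neighbors
            # Rule 3: keep the poison-rod count between 16 and 18.
            n = int(poison.sum())
            reward += 5.0 if 16 <= n <= 18 else -abs(n - 17)
            return float(reward)

        rng = np.random.default_rng(1)
        enrichment = rng.uniform(1.5, 5.0, size=(10, 10))
        poison = rng.random((10, 10)) < 0.17
        print(layout_reward(enrichment, poison))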
    “After you wire in rules, the neural networks start to take very good actions,” says the study’s lead author Majdi Radaideh, a postdoc in Shirvan’s lab. “They’re not wasting time on random processes. It was fun to watch them learn to play the game like a human would.”
    Through reinforcement learning, AI has learned to play increasingly complex games as well as or better than humans. But its capabilities remain relatively untested in the real world. Here, the researchers show that reinforcement learning has potentially powerful applications.
    “This study is an exciting example of transferring an AI technique for playing board games and video games to helping us solve practical problems in the world,” says study co-author Joshua Joseph, a research scientist at the MIT Quest for Intelligence.
    Exelon is now testing a beta version of the AI system in a virtual environment that mimics an assembly within a boiling water reactor, and about 200 assemblies within a pressurized water reactor, which is globally the most common type of reactor. Based in Chicago, Illinois, Exelon owns and operates 21 nuclear reactors across the United States. It could be ready to implement the system in a year or two, a company spokesperson says.
    The study’s other authors are Isaac Wolverton, an MIT senior who joined the project through the Undergraduate Research Opportunities Program; Nicholas Roy and Benoit Forget of MIT; and James Tusar and Ugi Otgonbaatar of Exelon.

  •

    Method finds hidden warning signals in measurements collected over time

    When you’re responsible for a multimillion-dollar satellite hurtling through space at thousands of miles per hour, you want to be sure it’s running smoothly. And time series can help.
    A time series is simply a record of a measurement taken repeatedly over time. It can keep track of a system’s long-term trends and short-term blips. Examples include the infamous Covid-19 curve of new daily cases and the Keeling curve that has tracked atmospheric carbon dioxide concentrations since 1958. In the age of big data, “time series are collected all over the place, from satellites to turbines,” says Kalyan Veeramachaneni. “All that machinery has sensors that collect these time series about how they’re functioning.”
    But analyzing those time series, and flagging anomalous data points in them, can be tricky. Data can be noisy. If a satellite operator sees a string of high temperature readings, how do they know whether it’s a harmless fluctuation or a sign that the satellite is about to overheat?
    That’s a problem Veeramachaneni, who leads the Data-to-AI group in MIT’s Laboratory for Information and Decision Systems, hopes to solve. The group has developed a new, deep-learning-based method of flagging anomalies in time series data. Their approach, called TadGAN, outperformed competing methods and could help operators detect and respond to major changes in a range of high-value systems, from a satellite flying through space to a computer server farm buzzing in a basement.
    The research will be presented at this month’s IEEE BigData conference. The paper’s authors include Data-to-AI group members Veeramachaneni, postdoc Dongyu Liu, visiting research student Alexander Geiger, and master’s student Sarah Alnegheimish, as well as Alfredo Cuesta-Infante of Spain’s Rey Juan Carlos University.
    High stakes
    For a system as complex as a satellite, time series analysis must be automated. The satellite company SES, which is collaborating with Veeramachaneni, receives a flood of time series from its communications satellites — about 30,000 unique parameters per spacecraft. Human operators in SES’ control room can only keep track of a fraction of those time series as they blink past on the screen. For the rest, they rely on an alarm system to flag out-of-range values. “So they said to us, ‘Can you do better?’” says Veeramachaneni. The company wanted his team to use deep learning to analyze all those time series and flag any unusual behavior.
    The stakes of this request are high: If the deep learning algorithm fails to detect an anomaly, the team could miss an opportunity to fix things. But if it rings the alarm every time there’s a noisy data point, human reviewers will waste their time constantly checking up on the algorithm that cried wolf. “So we have these two challenges,” says Liu. “And we need to balance them.”
    Rather than strike that balance solely for satellite systems, the team endeavored to create a more general framework for anomaly detection — one that could be applied across industries. They turned to deep-learning systems called generative adversarial networks (GANs), often used for image analysis.
    A GAN consists of a pair of neural networks. One network, the “generator,” creates fake images, while the second network, the “discriminator,” processes images and tries to determine whether they’re real images or fake ones produced by the generator. Through many rounds of this process, the generator learns from the discriminator’s feedback and becomes adept at creating hyper-realistic fakes. The technique is deemed “unsupervised” learning, since it doesn’t require a prelabeled dataset where images come tagged with their subjects. (Large labeled datasets can be hard to come by.)
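    The adversarial loop itself is only a few lines. Here is a minimal single training step in PyTorch, shrunk to tiny dense networks and random stand-in data (TadGAN’s actual architecture is more elaborate):

        import torch
        import torch.nn as nn

        G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 16))  # generator: noise -> fake sample
        D = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))  # discriminator: sample -> realness logit
        opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
        opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
        bce = nn.BCEWithLogitsLoss()

        real = torch.randn(64, 16)  # stand-in batch of "real" data
        noise = torch.randn(64, 8)

        # Discriminator step: label real samples 1 and generated samples 0.
        fake = G(noise).detach()    # detach so this step only updates D
        loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
        opt_d.zero_grad()
        loss_d.backward()
        opt_d.step()

        # Generator step: try to make the discriminator call fakes real.
        loss_g = bce(D(G(noise)), torch.ones(64, 1))
        opt_g.zero_grad()
        loss_g.backward()
        opt_g.step()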
    The team adapted this GAN approach for time series data. “From this training strategy, our model can tell which data points are normal and which are anomalous,” says Liu. It does so by checking for discrepancies — possible anomalies — between the real time series and the fake GAN-generated time series. But the team found that GANs alone weren’t sufficient for anomaly detection in time series, because they can fall short in pinpointing the real time series segment against which the fake ones should be compared. As a result, “if you use GAN alone, you’ll create a lot of false positives,” says Veeramachaneni.
    To guard against false positives, the team supplemented their GAN with an algorithm called an autoencoder — another technique for unsupervised deep learning. In contrast to GANs’ tendency to cry wolf, autoencoders are more prone to miss true anomalies. That’s because autoencoders tend to capture too many patterns in the time series, sometimes interpreting an actual anomaly as a harmless fluctuation — a problem called “overfitting.” By combining a GAN with an autoencoder, the researchers crafted an anomaly detection system that struck an effective balance: TadGAN is vigilant, but it doesn’t raise too many false alarms.
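    One way to picture the combination is as a weighted blend of the two signals. The sketch below assumes per-window reconstruction errors and discriminator (“critic”) realness scores have already been computed; the z-scoring, the even weighting, and the 3-sigma cutoff are illustrative choices rather than TadGAN’s exact formula:

        import numpy as np

        def anomaly_scores(reconstruction_error, critic_score, alpha=0.5):
            # Poor reconstruction and low critic "realness" both push a window
            # toward anomalous; z-score each signal so the two are comparable.
            def z(x):
                return (x - x.mean()) / (x.std() + 1e-8)
            return alpha * z(reconstruction_error) + (1 - alpha) * z(-critic_score)

        rng = np.random.default_rng(2)
        recon = rng.gamma(2.0, 1.0, size=500)  # stand-in reconstruction errors
        critic = rng.normal(size=500)          # stand-in critic scores
        scores = anomaly_scores(recon, critic)
        flagged = np.where(scores > scores.mean() + 3 * scores.std())[0]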
    Standing the test of time series
    Plus, TadGAN beat the competition. The traditional approach to time series forecasting, called ARIMA, was developed in the 1970s. “We wanted to see how far we’ve come, and whether deep learning models can actually improve on this classical method,” says Alnegheimish.
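    For contrast, an ARIMA-based detector can be sketched in a few lines: fit the model, compute one-step-ahead in-sample forecast errors, and flag points whose errors are extreme. This assumes the statsmodels library; the (2, 1, 2) order and the 3-sigma rule are illustrative choices:

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(3)
        series = np.cumsum(rng.normal(size=300))  # stand-in time series (random walk)
        series[200] += 8.0                        # inject one obvious anomaly

        fit = ARIMA(series, order=(2, 1, 2)).fit()
        pred = fit.predict(start=1)               # one-step-ahead in-sample predictions
        errors = np.abs(series[1:] - pred)
        flagged = np.where(errors > errors.mean() + 3 * errors.std())[0] + 1
        print(flagged)                            # expected to include index 200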
    The team ran anomaly detection tests on 11 datasets, pitting ARIMA against TadGAN and seven other methods, including some developed by companies like Amazon and Microsoft. TadGAN outperformed ARIMA in anomaly detection for eight of the 11 datasets. The second-best algorithm, developed by Amazon, only beat ARIMA for six datasets.
    Alnegheimish emphasized that their goal was not only to develop a top-notch anomaly detection algorithm, but also to make it widely usable. “We all know that AI suffers from reproducibility issues,” she says. The team has made TadGAN’s code freely available, and they issue periodic updates. Plus, they developed a benchmarking system for users to compare the performance of different anomaly detection models.
    “This benchmark is open source, so someone can go try it out. They can add their own model if they want to,” says Alnegheimish. “We want to mitigate the stigma around AI not being reproducible. We want to ensure everything is sound.”
    Veeramachaneni hopes TadGAN will one day serve a wide variety of industries, not just satellite companies. For example, it could be used to monitor the performance of computer apps that have become central to the modern economy. “To run a lab, I have 30 apps. Zoom, Slack, Github — you name it, I have it,” he says. “And I’m relying on them all to work seamlessly and forever.” The same goes for millions of users worldwide.
    TadGAN could help companies like Zoom monitor time series signals in their data center — like CPU usage or temperature — to help prevent service breaks, which could threaten a company’s market share. In future work, the team plans to package TadGAN in a user interface, to help bring state-of-the-art time series analysis to anyone who needs it.
    This research was funded by and completed in collaboration with SES.

  •

    MIT to share in $3.2 million grant to create a statewide technician-training program in advanced manufacturing

    At the end of October, the Commonwealth of Massachusetts announced it won a $3.2 million, two-year grant, in collaboration with MIT, community colleges, and state agencies, to prepare workers for stable, high-paying jobs in advanced manufacturing. The program, called MassBridge, will create a curriculum that bridges the Commonwealth’s excellent traditional manufacturing education and the advanced manufacturing needs of today’s economy. Massachusetts will serve as a foundry and pilot for this curriculum, which can later be used by other states to meet the nation’s growing need for these skills.
    Manufacturing is sometimes seen as an unattractive industry for work, where jobs are dirty, dull, and dingy. But the new wave of advanced manufacturing — in areas like robotics, photonics, and 3D printing — is a completely different experience. These are well-paid “middle-skill” jobs that can provide stable employment, good working conditions, and opportunities for advancement. Some estimates say that more than 2 million of these jobs will go unfilled over the next decade. Potential workers need to know about the opportunities, and educational institutions need to build programs that prepare people for these careers.
    When the U.S. Department of Defense’s Manufacturing Technology office first considered working with a state on middle-skill worker training, the Massachusetts Technology Collaborative, or MassTech, quickly put together a plan. The commonwealth had many elements that made it a strong candidate. Massachusetts had created four regions — Northeast, Southeast, Central, and Western Massachusetts — which worked together to standardize training across the commonwealth. The governor’s skills cabinet coordinated training-related activities across the many branches of state government. And finally, the commonwealth had already committed $100 million in capital equipment grants to support the defense department’s Manufacturing USA institutes.
    Workforce training activities for one of these institutes, AIM Photonics, are led by MIT Professor Lionel Kimerling. Kimerling’s group, IKIM (the Initiative for Knowledge and Innovation in Manufacturing), had already won an Office of Naval Research grant to develop technician-training programs in robotics and photonics, and IKIM was also developing online classes. MassTech enlisted IKIM’s help.
    Since the program will rely extensively on digital learning modules, MassTech also turned to the MIT Office of Open Learning (OL). MIT OL has extensive expertise in digital education and learning science. It also has an ongoing commitment to workforce learning through its Abdul Latif Jameel World Education Lab and several educational programs.
    “This award is an important national acknowledgment of Massachusetts’ successes in manufacturing and the competitive training programs that support our industry,” says Secretary Mike Kennealy of the Executive Office of Housing and Economic Development. “The grant from the Department of Defense presents a major opportunity to supercharge our manufacturing sector by engaging new students and adult learners, and helping them develop their skills to better succeed in these emerging industries.”
    MIT’s IKIM and OL will be involved in almost all aspects of the MassBridge project, and they will lead the work on skills roadmapping and curriculum benchmarking. “MIT is well-known for its deep expertise in engineering. We are also known for the pioneering work we’ve done in digital and blended learning. We see our mission as educating not only our students, but also outside MIT,” says Professor Sanjay Sarma, vice president of open learning. “Combining the expertise of MIT, the Commonwealth, and our community college collaborators can make a real difference in the lives of people who do not yet have the skills for well-paying and fulfilling careers.”  
    “The Commonwealth of Massachusetts has a clear vision for recovery of its manufacturing sector by leveraging its science and engineering leadership,” says Kimerling. “MassTech has been a valued partner for MIT, identifying a strong technician workforce as the missing ingredient to success. Our team is privileged and delighted to share our advanced manufacturing knowledge and to engage with outstanding students and faculty in building the vital technician workforce.”
    Other collaborators in the program team include Cape Cod Community College, Quinsigamond Community College, Massachusetts Manufacturing Extension Program, Massachusetts hiring boards, and representatives from state government and the Department of Defense. After initially building and testing the curriculum in Massachusetts, the team plans to make the curriculum available nationwide to any states that wish to adopt it.