More stories

    Engineers use artificial intelligence to capture the complexity of breaking waves

    Waves break once they swell to a critical height, before cresting and crashing into a spray of droplets and bubbles. These waves can be as large as a surfer’s point break and as small as a gentle ripple rolling to shore. For decades, the dynamics of how and when a wave breaks have been too complex to predict.

    Now, MIT engineers have found a new way to model how waves break. The team used machine learning along with data from wave-tank experiments to tweak equations that have traditionally been used to predict wave behavior. Engineers typically rely on such equations to help them design resilient offshore platforms and structures. But until now, the equations have not been able to capture the complexity of breaking waves.

    The updated model made more accurate predictions of how and when waves break, the researchers found. For instance, the model estimated a wave’s steepness just before breaking, and its energy and frequency after breaking, more accurately than the conventional wave equations.

    Their results, published today in the journal Nature Communications, will help scientists understand how a breaking wave affects the water around it. Knowing precisely how these waves interact can help hone the design of offshore structures. It can also improve predictions for how the ocean interacts with the atmosphere. Having better estimates of how waves break can help scientists predict, for instance, how much carbon dioxide and other atmospheric gases the ocean can absorb.

    “Wave breaking is what puts air into the ocean,” says study author Themis Sapsis, an associate professor of mechanical and ocean engineering and an affiliate of the Institute for Data, Systems, and Society at MIT. “It may sound like a detail, but if you multiply its effect over the area of the entire ocean, wave breaking starts becoming fundamentally important to climate prediction.”

    The study’s co-authors include lead author and MIT postdoc Debbie Eeltink, Hubert Branger and Christopher Luneau of Aix-Marseille University, Amin Chabchoub of Kyoto University, Jerome Kasparian of the University of Geneva, and T.S. van den Bremer of Delft University of Technology.

    Learning tank

    To predict the dynamics of a breaking wave, scientists typically take one of two approaches: They either attempt to precisely simulate the wave at the scale of individual molecules of water and air, or they run experiments to try and characterize waves with actual measurements. The first approach is computationally expensive and difficult to simulate even over a small area; the second requires a huge amount of time to run enough experiments to yield statistically significant results.

The MIT team instead borrowed pieces from both approaches to develop a more efficient and accurate model using machine learning. The researchers started with a set of equations that is considered the standard description of wave behavior. They aimed to improve the model by “training” it on data of breaking waves from actual experiments.

    “We had a simple model that doesn’t capture wave breaking, and then we had the truth, meaning experiments that involve wave breaking,” Eeltink explains. “Then we wanted to use machine learning to learn the difference between the two.”

    The researchers obtained wave breaking data by running experiments in a 40-meter-long tank. The tank was fitted at one end with a paddle which the team used to initiate each wave. The team set the paddle to produce a breaking wave in the middle of the tank. Gauges along the length of the tank measured the water’s height as waves propagated down the tank.

    “It takes a lot of time to run these experiments,” Eeltink says. “Between each experiment you have to wait for the water to completely calm down before you launch the next experiment, otherwise they influence each other.”

    Safe harbor

    In all, the team ran about 250 experiments, the data from which they used to train a type of machine-learning algorithm known as a neural network. Specifically, the algorithm is trained to compare the real waves in experiments with the predicted waves in the simple model, and based on any differences between the two, the algorithm tunes the model to fit reality.
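The article does not include the team's code, but the correction-learning idea it describes can be sketched. The snippet below is a minimal illustration, not the authors' implementation: a small neural network is trained on the difference between a simple wave model's prediction and (here, synthetic stand-in) measurements, so the physics-based model does most of the work and the network only learns what it misses. All variable names, array shapes, and numbers are assumptions.

```python
# Minimal sketch (not the authors' code) of learning the correction between a
# simple wave model and wave-tank measurements. All data here are stand-ins.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Stand-in data: one row per experiment, columns = simple-model surface
# elevations at 16 gauge positions; target = a measured quantity downstream.
X_model = rng.normal(size=(250, 16))
y_measured = X_model @ rng.normal(size=16) + 0.1 * rng.normal(size=250)

# The simple model's own estimate of that quantity (a crude stand-in here).
y_simple = X_model.mean(axis=1)

# Train the network to predict the correction (measurement minus simple model).
X_tr, X_te, c_tr, c_te = train_test_split(X_model, y_measured - y_simple,
                                           test_size=0.2, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000,
                   random_state=0).fit(X_tr, c_tr)

# Corrected prediction = simple model + learned correction.
y_corrected = X_te.mean(axis=1) + net.predict(X_te)
```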

    After training the algorithm on their experimental data, the team introduced the model to entirely new data — in this case, measurements from two independent experiments, each run at separate wave tanks with different dimensions. In these tests, they found the updated model made more accurate predictions than the simple, untrained model, for instance making better estimates of a breaking wave’s steepness.

    The new model also captured an essential property of breaking waves known as the “downshift,” in which the frequency of a wave is shifted to a lower value. The speed of a wave depends on its frequency. For ocean waves, lower frequencies move faster than higher frequencies. Therefore, after the downshift, the wave will move faster. The new model predicts the change in frequency, before and after each breaking wave, which could be especially relevant in preparing for coastal storms.
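That frequency dependence is made explicit by the standard deep-water dispersion relation (a textbook result, not something derived in the paper), which gives the phase speed as

\[ c = \frac{g}{2\pi f}, \]

so a downshift from frequency \(f\) to \(f' < f\) raises the speed from \(c\) to \(c' = g/(2\pi f') > c\).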

    “When you want to forecast when high waves of a swell would reach a harbor, and you want to leave the harbor before those waves arrive, then if you get the wave frequency wrong, then the speed at which the waves are approaching is wrong,” Eeltink says.

    The team’s updated wave model is in the form of an open-source code that others could potentially use, for instance in climate simulations of the ocean’s potential to absorb carbon dioxide and other atmospheric gases. The code can also be worked into simulated tests of offshore platforms and coastal structures.

    “The number one purpose of this model is to predict what a wave will do,” Sapsis says. “If you don’t model wave breaking right, it would have tremendous implications for how structures behave. With this, you could simulate waves to help design structures better, more efficiently, and without huge safety factors.”

This research is supported, in part, by the Swiss National Science Foundation, and by the U.S. Office of Naval Research.

    Study: With masking and distancing in place, NFL stadium openings in 2020 had no impact on local Covid-19 infections

    As with most everything in the world, football looked very different in 2020. As the Covid-19 pandemic unfolded, many National Football League (NFL) games were played in empty stadiums, while other stadiums opened to fans at significantly reduced capacity, with strict safety protocols in place.

    At the time it was unclear what impact such large sporting events would have on Covid-19 case counts, particularly at a time when vaccination against the virus was not widely available.

    Now, MIT engineers have taken a look back at the NFL’s 2020 regular season and found that for this specific period during the pandemic, opening stadiums to fans while requiring face coverings, social distancing, and other measures had no impact on the number of Covid-19 infections in those stadiums’ local counties.

    As they write in a new paper appearing this week in the Proceedings of the National Academy of Sciences, “the benefits of providing a tightly controlled outdoor spectating environment — including masking and distancing requirements — counterbalanced the risks associated with opening.”

    The study concentrates on the NFL’s 2020 regular season (September 2020 to early January 2021), at a time when earlier strains of the virus dominated, before the rise of more transmissible Delta and Omicron variants. Nevertheless, the results may inform decisions on whether and how to hold large outdoor gatherings in the face of future public health crises.

    “These results show that the measures adopted by the NFL were effective in safely opening stadiums,” says study author Anette “Peko” Hosoi, the Neil and Jane Pappalardo Professor of Mechanical Engineering at MIT. “If case counts start to rise again, we know what to do: mask people, put them outside, and distance them from each other.”

The study’s co-authors are members of MIT’s Institute for Data, Systems, and Society (IDSS), and include Bernardo García Bulle, Dennis Shen, and Devavrat Shah, the Andrew and Erna Viterbi Professor in the Department of Electrical Engineering and Computer Science (EECS).

    Preseason patterns

    Last year a group led by the University of Southern Mississippi compared Covid-19 case counts in the counties of NFL stadiums that allowed fans in, versus those that did not. Their analysis showed that stadiums that opened to large numbers of fans led to “tangible increases” in the local county’s number of Covid-19 cases.

    But there are a number of factors in addition to a stadium’s opening that can affect case counts, including local policies, mandates, and attitudes. As the MIT team writes, “it is not at all obvious that one can attribute the differences in case spikes to the stadiums given the enormous number of confounding factors.”

    To truly isolate the effects of a stadium’s opening, one could imagine tracking Covid cases in a county with an open stadium through the 2020 season, then turning back the clock, closing the stadium, then tracking that same county’s Covid cases through the same season, all things being equal.

    “That’s the perfect experiment, with the exception that you would need a time machine,” Hosoi says.

    As it turns out, the next best thing is synthetic control — a statistical method that is used to determine the effect of an “intervention” (such as the opening of a stadium) compared with the exact same scenario without that intervention.

In synthetic control, researchers use a weighted combination of groups to construct a “synthetic” version of an actual scenario. In this case, the actual scenario is a county such as Dallas that hosts an open stadium. A synthetic version would be a county that looks similar to Dallas, only without a stadium. In the context of this study, a county that “looks” like Dallas has a similar preseason pattern of Covid-19 cases.

    To construct a synthetic Dallas, the researchers looked for surrounding counties without stadiums, that had similar Covid-19 trajectories leading up to the 2020 football season. They combined these counties in a way that best fit Dallas’ actual case trajectory. They then used data from the combined counties to calculate the number of Covid cases for this synthetic Dallas through the season, and compared these counts to the real Dallas.
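As a rough illustration of that step (a simplified sketch, not the study's code), the donor-county weights can be found by a constrained least-squares fit to the preseason trajectories, then applied to the donors' in-season data to produce the counterfactual. All data below are made up.

```python
# Simplified synthetic-control sketch: nonnegative weights summing to one,
# fit on preseason case counts, then used to build the counterfactual.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
pre_days, post_days, n_donors = 60, 120, 8

donors_pre = rng.poisson(50, size=(pre_days, n_donors)).astype(float)
true_w = np.array([0.4, 0.3, 0.3, 0, 0, 0, 0, 0])
stadium_pre = donors_pre @ true_w + rng.normal(0, 3, pre_days)

def fit_error(w):
    return np.sum((stadium_pre - donors_pre @ w) ** 2)

res = minimize(fit_error, np.full(n_donors, 1 / n_donors),
               bounds=[(0, 1)] * n_donors,
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1})
weights = res.x

# Counterfactual ("synthetic") in-season trajectory, to be compared with the
# stadium county's actual case counts.
donors_post = rng.poisson(60, size=(post_days, n_donors)).astype(float)
synthetic_post = donors_post @ weights
```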

    The team carried out this analysis for every “stadium county.” They determined a county to be a stadium county if more than 10 percent of a stadium’s fans came from that county, which the researchers estimated based on attendance data provided by the NFL.

    “Go outside”

    Of the stadiums included in the study, 13 were closed through the regular season, while 16 opened with reduced capacity and multiple pandemic requirements in place, such as required masking, distanced seating, mobile ticketing, and enhanced cleaning protocols.

    The researchers found the trajectory of infections in all stadium counties mirrored that of synthetic counties, showing that the number of infections would have been the same if the stadiums had remained closed. In other words, they found no evidence that NFL stadium openings led to any increase in local Covid case counts.

To check that their method wasn’t missing any case spikes, they tested it on a known superspreader: the Sturgis Motorcycle Rally, which was held in August of 2020. The analysis successfully picked up an increase in cases in Meade County, South Dakota, the rally’s host county, compared to a synthetic counterpart in the two weeks following the rally.

    Surprisingly, the researchers found that several stadium counties’ case counts dipped slightly compared to their synthetic counterparts. In these counties — including Hamilton, Ohio, home of the Cincinnati Bengals — it appeared that opening the stadium to fans was tied to a dip in Covid-19 infections. Hosoi has a guess as to why:

    “These are football communities with dedicated fans. Rather than stay home alone, those fans may have gone to a sports bar or hosted indoor football gatherings if the stadium had not opened,” Hosoi proposes. “Opening the stadium under those circumstances would have been beneficial to the community because it makes people go outside.”

    The team’s analysis also revealed another connection: Counties with similar Covid trajectories also shared similar politics. To illustrate this point, the team mapped the county-wide temporal trajectories of Covid case counts in Ohio in 2020 and found them to be a strong predictor of the state’s 2020 electoral map.

    “That is not a coincidence,” Hosoi notes. “It tells us that local political leanings determined the temporal trajectory of the pandemic.”

    The team plans to apply their analysis to see how other factors may have influenced the pandemic.

“Covid is a different beast [today],” she says. “Omicron is more transmissive, and more of the population is vaccinated. It’s possible we’d find something different if we ran this analysis on the upcoming season, and I think we probably should try.”

    Physics and the machine-learning “black box”

    Machine-learning algorithms are often referred to as a “black box.” Once data are put into an algorithm, it’s not always known exactly how the algorithm arrives at its prediction. This can be particularly frustrating when things go wrong. A new mechanical engineering (MechE) course at MIT teaches students how to tackle the “black box” problem, through a combination of data science and physics-based engineering.

    In class 2.C161 (Physical Systems Modeling and Design Using Machine Learning), Professor George Barbastathis demonstrates how mechanical engineers can use their unique knowledge of physical systems to keep algorithms in check and develop more accurate predictions.

“I wanted to take 2.C161 because machine-learning models are usually a ‘black box,’ but this class taught us how to construct a system model that is informed by physics so we can peek inside,” explains Crystal Owens, a mechanical engineering graduate student who took the course in spring 2021.

    As chair of the Committee on the Strategic Integration of Data Science into Mechanical Engineering, Barbastathis has had many conversations with mechanical engineering students, researchers, and faculty to better understand the challenges and successes they’ve had using machine learning in their work.

    “One comment we heard frequently was that these colleagues can see the value of data science methods for problems they are facing in their mechanical engineering-centric research; yet they are lacking the tools to make the most out of it,” says Barbastathis. “Mechanical, civil, electrical, and other types of engineers want a fundamental understanding of data principles without having to convert themselves to being full-time data scientists or AI researchers.”

    Additionally, as mechanical engineering students move on from MIT to their careers, many will need to manage data scientists on their teams someday. Barbastathis hopes to set these students up for success with class 2.C161.

Bridging MechE and the MIT Schwarzman College of Computing

Class 2.C161 is part of the MIT Schwarzman College of Computing “Computing Core.” The goal of these classes is to connect data science and physics-based engineering disciplines, like mechanical engineering. Students take the course alongside 6.C402 (Modeling with Machine Learning: from Algorithms to Applications), taught by professors of electrical engineering and computer science Regina Barzilay and Tommi Jaakkola.

    The two classes are taught concurrently during the semester, exposing students to both fundamentals in machine learning and domain-specific applications in mechanical engineering.

    In 2.C161, Barbastathis highlights how complementary physics-based engineering and data science are. Physical laws present a number of ambiguities and unknowns, ranging from temperature and humidity to electromagnetic forces. Data science can be used to predict these physical phenomena. Meanwhile, having an understanding of physical systems helps ensure the resulting output of an algorithm is accurate and explainable.

“What’s needed is a deeper combined understanding of the associated physical phenomena and the principles of data science, machine learning in particular, to close the gap,” adds Barbastathis. “By combining data with physical principles, the new revolution in physics-based engineering is relatively immune to the ‘black box’ problem facing other types of machine learning.”

    Equipped with a working knowledge of machine-learning topics covered in class 6.C402 and a deeper understanding of how to pair data science with physics, students are charged with developing a final project that solves for an actual physical system.

    Developing solutions for real-world physical systems

    For their final project, students in 2.C161 are asked to identify a real-world problem that requires data science to address the ambiguity inherent in physical systems. After obtaining all relevant data, students are asked to select a machine-learning method, implement their chosen solution, and present and critique the results.

    Topics this past semester ranged from weather forecasting to the flow of gas in combustion engines, with two student teams drawing inspiration from the ongoing Covid-19 pandemic.

    Owens and her teammates, fellow graduate students Arun Krishnadas and Joshua David John Rathinaraj, set out to develop a model for the Covid-19 vaccine rollout.

    “We developed a method of combining a neural network with a susceptible-infected-recovered (SIR) epidemiological model to create a physics-informed prediction system for the spread of Covid-19 after vaccinations started,” explains Owens.

    The team accounted for various unknowns including population mobility, weather, and political climate. This combined approach resulted in a prediction of Covid-19’s spread during the vaccine rollout that was more reliable than using either the SIR model or a neural network alone.
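A much-simplified sketch of this kind of physics-informed coupling is shown below: an SIR model whose transmission rate is supplied by a small learned function of time-varying covariates. The students' actual architecture and training procedure are not described in the article, so the covariates, the stand-in training target, and all constants here are assumptions.

```python
# Simplified sketch: SIR dynamics with a learned, time-varying transmission
# rate beta(t). Everything below is illustrative, not the students' model.
import numpy as np
from scipy.integrate import solve_ivp
from sklearn.neural_network import MLPRegressor

days = np.arange(180)
covariates = np.column_stack([np.sin(days / 30.0), np.cos(days / 60.0)])

# Stand-in for a trained network: fit to a made-up beta(t) so the example runs
# end to end; in practice it would be trained against reported case data.
beta_net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
beta_net.fit(covariates, 0.25 + 0.05 * np.sin(days / 20.0))

def sir_rhs(t, y, gamma=0.1):
    s, i, r = y
    idx = min(int(t), len(days) - 1)
    beta = float(beta_net.predict(covariates[idx:idx + 1])[0])
    return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

# Integrate the SIR dynamics forward with the learned transmission rate.
sol = solve_ivp(sir_rhs, (0, float(days[-1])), [0.99, 0.01, 0.0], t_eval=days)
infected_fraction = sol.y[1]
```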

Another team, including graduate student Yiwen Hu, developed a model to predict mutation rates in Covid-19, a topic that became all too pertinent as the Delta variant began its global spread.

    “We used machine learning to predict the time-series-based mutation rate of Covid-19, and then incorporated that as an independent parameter into the prediction of pandemic dynamics to see if it could help us better predict the trend of the Covid-19 pandemic,” says Hu.

    Hu, who had previously conducted research into how vibrations on coronavirus protein spikes affect infection rates, hopes to apply the physics-based machine-learning approaches he learned in 2.C161 to his research on de novo protein design.

    Whatever the physical system students addressed in their final projects, Barbastathis was careful to stress one unifying goal: the need to assess ethical implications in data science. While more traditional computing methods like face or voice recognition have proven to be rife with ethical issues, there is an opportunity to combine physical systems with machine learning in a fair, ethical way.

    “We must ensure that collection and use of data are carried out equitably and inclusively, respecting the diversity in our society and avoiding well-known problems that computer scientists in the past have run into,” says Barbastathis.

Barbastathis hopes that by encouraging mechanical engineering students to be both ethics-literate and well-versed in data science, they can move on to develop reliable, ethically sound solutions and predictions for physical-based engineering challenges.

    Design’s new frontier

In the 1960s, the advent of computer-aided design (CAD) sparked a revolution in design. For his PhD thesis at MIT in 1963, Ivan Sutherland developed Sketchpad, a game-changing software program that enabled users to draw, move, and resize shapes on a computer. Over the course of the next few decades, CAD software reshaped how everything from consumer products to buildings and airplanes was designed.

    “CAD was part of the first wave in computing in design. The ability of researchers and practitioners to represent and model designs using computers was a major breakthrough and still is one of the biggest outcomes of design research, in my opinion,” says Maria Yang, Gail E. Kendall Professor and director of MIT’s Ideation Lab.

    Innovations in 3D printing during the 1980s and 1990s expanded CAD’s capabilities beyond traditional injection molding and casting methods, providing designers even more flexibility. Designers could sketch, ideate, and develop prototypes or models faster and more efficiently. Meanwhile, with the push of a button, software like that developed by Professor Emeritus David Gossard of MIT’s CAD Lab could solve equations simultaneously to produce a new geometry on the fly.

    In recent years, mechanical engineers have expanded the computing tools they use to ideate, design, and prototype. More sophisticated algorithms and the explosion of machine learning and artificial intelligence technologies have sparked a second revolution in design engineering.

    Researchers and faculty at MIT’s Department of Mechanical Engineering are utilizing these technologies to re-imagine how the products, systems, and infrastructures we use are designed. These researchers are at the forefront of the new frontier in design.

    Computational design

    Faez Ahmed wants to reinvent the wheel, or at least the bicycle wheel. He and his team at MIT’s Design Computation & Digital Engineering Lab (DeCoDE) use an artificial intelligence-driven design method that can generate entirely novel and improved designs for a range of products — including the traditional bicycle. They create advanced computational methods to blend human-driven design with simulation-based design.

    “The focus of our DeCoDE lab is computational design. We are looking at how we can create machine learning and AI algorithms to help us discover new designs that are optimized based on specific performance parameters,” says Ahmed, an assistant professor of mechanical engineering at MIT.

    For their work using AI-driven design for bicycles, Ahmed and his collaborator Professor Daniel Frey wanted to make it easier to design customizable bicycles, and by extension, encourage more people to use bicycles over transportation methods that emit greenhouse gases.

    To start, the group gathered a dataset of 4,500 bicycle designs. Using this massive dataset, they tested the limits of what machine learning could do. First, they developed algorithms to group bicycles that looked similar together and explore the design space. They then created machine learning models that could successfully predict what components are key in identifying a bicycle style, such as a road bike versus a mountain bike.

    Once the algorithms were good enough at identifying bicycle designs and parts, the team proposed novel machine learning tools that could use this data to create a unique and creative design for a bicycle based on certain performance parameters and rider dimensions.

    Ahmed used a generative adversarial network — or GAN — as the basis of this model. GAN models utilize neural networks that can create new designs based on vast amounts of data. However, using GAN models alone would result in homogeneous designs that lack novelty and can’t be assessed in terms of performance. To address these issues in design problems, Ahmed has developed a new method which he calls “PaDGAN,” performance augmented diverse GAN.

    “When we apply this type of model, what we see is that we can get large improvements in the diversity, quality, as well as novelty of the designs,” Ahmed explains.
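To make the idea concrete, the toy loss below augments a standard GAN generator objective with a performance reward (from an assumed surrogate model) and a pairwise-distance diversity reward. It is a simplification for intuition only; the published PaDGAN formulation uses a determinantal point process kernel rather than these ad hoc terms, and the weights and surrogate model here are assumptions.

```python
# Toy illustration of a performance- and diversity-augmented generator loss.
import torch
import torch.nn.functional as F

def generator_loss(d_fake_logits, fake_designs, perf_model,
                   w_perf=0.1, w_div=0.1):
    # Standard non-saturating GAN term: make the discriminator call fakes real.
    adv = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))
    # Reward designs that a surrogate performance model scores highly.
    perf = perf_model(fake_designs).mean()
    # Reward batches whose designs are spread out (average pairwise distance).
    div = torch.pdist(fake_designs).mean()
    return adv - w_perf * perf - w_div * div
```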

    Using this approach, Ahmed’s team developed an open-source computational design tool for bicycles freely available on their lab website. They hope to further develop a set of generalizable tools that can be used across industries and products.

    Longer term, Ahmed has his sights set on loftier goals. He hopes the computational design tools he develops could lead to “design democratization,” putting more power in the hands of the end user.

    “With these algorithms, you can have more individualization where the algorithm assists a customer in understanding their needs and helps them create a product that satisfies their exact requirements,” he adds.

    Using algorithms to democratize the design process is a goal shared by Stefanie Mueller, an associate professor in electrical engineering and computer science and mechanical engineering.

    Personal fabrication

    Platforms like Instagram give users the freedom to instantly edit their photographs or videos using filters. In one click, users can alter the palette, tone, and brightness of their content by applying filters that range from bold colors to sepia-toned or black-and-white. Mueller, X-Window Consortium Career Development Professor, wants to bring this concept of the Instagram filter to the physical world.

    “We want to explore how digital capabilities can be applied to tangible objects. Our goal is to bring reprogrammable appearance to the physical world,” explains Mueller, director of the HCI Engineering Group based out of MIT’s Computer Science and Artificial Intelligence Laboratory.

Mueller’s team utilizes a combination of smart materials, optics, and computation to advance personal fabrication technologies that would allow end users to alter the design and appearance of the products they own. They tested this concept in a project they dubbed “PhotoChromeleon.”

First, a mix of photochromic cyan, magenta, and yellow dyes is airbrushed onto an object — in this instance, a 3D sculpture of a chameleon. Using software they developed, the team sketches the exact color pattern they want to achieve on the object itself. An ultraviolet light shines on the object to activate the dyes.

    To actually create the physical pattern on the object, Mueller has developed an optimization algorithm to use alongside a normal office projector outfitted with red, green, and blue LED lights. These lights shine on specific pixels on the object for a given period of time to physically change the makeup of the photochromic pigments.

    “This fancy algorithm tells us exactly how long we have to shine the red, green, and blue light on every single pixel of an object to get the exact pattern we’ve programmed in our software,” says Mueller.
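A toy version of that per-pixel computation might look like the following. It assumes a linear model in which each second of red, green, or blue projector light bleaches each photochromic dye at a fixed rate, and solves for nonnegative exposure times; the real color model is more involved, and every number below is invented.

```python
# Toy per-pixel exposure solver for a linear bleaching model (illustrative).
import numpy as np
from scipy.optimize import nnls

# bleach_rates[i, j]: reduction of dye i (cyan, magenta, yellow) per second of
# light channel j (red, green, blue).
bleach_rates = np.array([[0.8, 0.1, 0.1],
                         [0.1, 0.7, 0.2],
                         [0.1, 0.2, 0.9]])

start_dyes = np.array([1.0, 1.0, 1.0])    # fully saturated after UV activation
target_dyes = np.array([0.2, 0.9, 0.5])   # desired dye levels for this pixel

# Solve bleach_rates @ t ≈ (start - target) with t >= 0 (seconds per channel).
exposure_times, residual = nnls(bleach_rates, start_dyes - target_dyes)
```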

    Giving this freedom to the end user enables limitless possibilities. Mueller’s team has applied this technology to iPhone cases, shoes, and even cars. In the case of shoes, Mueller envisions a shoebox embedded with UV and LED light projectors. Users could put their shoes in the box overnight and the next day have a pair of shoes in a completely new pattern.

    Mueller wants to expand her personal fabrication methods to the clothes we wear. Rather than utilize the light projection technique developed in the PhotoChromeleon project, her team is exploring the possibility of weaving LEDs directly into clothing fibers, allowing people to change their shirt’s appearance as they wear it. These personal fabrication technologies could completely alter consumer habits.

    “It’s very interesting for me to think about how these computational techniques will change product design on a high level,” adds Mueller. “In the future, a consumer could buy a blank iPhone case and update the design on a weekly or daily basis.”

    Computational fluid dynamics and participatory design

    Another team of mechanical engineers, including Sili Deng, the Brit (1961) & Alex (1949) d’Arbeloff Career Development Professor, are developing a different kind of design tool that could have a large impact on individuals in low- and middle-income countries across the world.

    As Deng walked down the hallway of Building 1 on MIT’s campus, a monitor playing a video caught her eye. The video featured work done by mechanical engineers and MIT D-Lab on developing cleaner burning briquettes for cookstoves in Uganda. Deng immediately knew she wanted to get involved.

    “As a combustion scientist, I’ve always wanted to work on such a tangible real-world problem, but the field of combustion tends to focus more heavily on the academic side of things,” explains Deng.

    After reaching out to colleagues in MIT D-Lab, Deng joined a collaborative effort to develop a new cookstove design tool for the 3 billion people across the world who burn solid fuels to cook and heat their homes. These stoves often emit soot and carbon monoxide, leading not only to millions of deaths each year, but also worsening the world’s greenhouse gas emission problem.

    The team is taking a three-pronged approach to developing this solution, using a combination of participatory design, physical modeling, and experimental validation to create a tool that will lead to the production of high-performing, low-cost energy products.

    Deng and her team in the Deng Energy and Nanotechnology Group use physics-based modeling for the combustion and emission process in cookstoves.

    “My team is focused on computational fluid dynamics. We use computational and numerical studies to understand the flow field where the fuel is burned and releases heat,” says Deng.

    These flow mechanics are crucial to understanding how to minimize heat loss and make cookstoves more efficient, as well as learning how dangerous pollutants are formed and released in the process.

    Using computational methods, Deng’s team performs three-dimensional simulations of the complex chemistry and transport coupling at play in the combustion and emission processes. They then use these simulations to build a combustion model for how fuel is burned and a pollution model that predicts carbon monoxide emissions.

Deng’s models are used by a group led by Daniel Sweeney in MIT D-Lab to carry out experimental validation on prototype stoves. Finally, Professor Maria Yang uses participatory design methods to integrate user feedback, ensuring the design tool can actually be used by people across the world.

    The end goal for this collaborative team is to not only provide local manufacturers with a prototype they could produce themselves, but to also provide them with a tool that can tweak the design based on local needs and available materials.

    Deng sees wide-ranging applications for the computational fluid dynamics her team is developing.

    “We see an opportunity to use physics-based modeling, augmented with a machine learning approach, to come up with chemical models for practical fuels that help us better understand combustion. Therefore, we can design new methods to minimize carbon emissions,” she adds.

    While Deng is utilizing simulations and machine learning at the molecular level to improve designs, others are taking a more macro approach.

    Designing intelligent systems

    When it comes to intelligent design, Navid Azizan thinks big. He hopes to help create future intelligent systems that are capable of making decisions autonomously by using the enormous amounts of data emerging from the physical world. From smart robots and autonomous vehicles to smart power grids and smart cities, Azizan focuses on the analysis, design, and control of intelligent systems.

    Achieving such massive feats takes a truly interdisciplinary approach that draws upon various fields such as machine learning, dynamical systems, control, optimization, statistics, and network science, among others.

    “Developing intelligent systems is a multifaceted problem, and it really requires a confluence of disciplines,” says Azizan, assistant professor of mechanical engineering with a dual appointment in MIT’s Institute for Data, Systems, and Society (IDSS). “To create such systems, we need to go beyond standard approaches to machine learning, such as those commonly used in computer vision, and devise algorithms that can enable safe, efficient, real-time decision-making for physical systems.”

    For robot control to work in the complex dynamic environments that arise in the real world, real-time adaptation is key. If, for example, an autonomous vehicle is going to drive in icy conditions or a drone is operating in windy conditions, they need to be able to adapt to their new environment quickly.

    To address this challenge, Azizan and his collaborators at MIT and Stanford University have developed a new algorithm that combines adaptive control, a powerful methodology from control theory, with meta learning, a new machine learning paradigm.

    “This ‘control-oriented’ learning approach outperforms the existing ‘regression-oriented’ methods, which are mostly focused on just fitting the data, by a wide margin,” says Azizan.

    Another critical aspect of deploying machine learning algorithms in physical systems that Azizan and his team hope to address is safety. Deep neural networks are a crucial part of autonomous systems. They are used for interpreting complex visual inputs and making data-driven predictions of future behavior in real time. However, Azizan urges caution.

    “These deep neural networks are only as good as their training data, and their predictions can often be untrustworthy in scenarios not covered by their training data,” he says. Making decisions based on such untrustworthy predictions could lead to fatal accidents in autonomous vehicles or other safety-critical systems.

    To avoid these potentially catastrophic events, Azizan proposes that it is imperative to equip neural networks with a measure of their uncertainty. When the uncertainty is high, they can then be switched to a “safe policy.”

In pursuit of this goal, Azizan and his collaborators have developed a new algorithm known as SCOD — Sketching Curvature for Out-of-Distribution Detection. This framework could be embedded within any deep neural network to equip it with a measure of its uncertainty.

    “This algorithm is model-agnostic and can be applied to neural networks used in various kinds of autonomous systems, whether it’s drones, vehicles, or robots,” says Azizan.
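The gating idea can be sketched independently of SCOD's internals. The snippet below uses ensemble disagreement as a stand-in uncertainty measure (an assumption for the demo; SCOD instead estimates uncertainty from a single trained network's curvature) and falls back to a conservative action when that measure is high.

```python
# Sketch of uncertainty-gated decision-making with an ensemble stand-in.
import numpy as np

def predict_with_fallback(models, x, safe_action, threshold=0.5):
    preds = np.array([m(x) for m in models])
    uncertainty = preds.std(axis=0).mean()   # disagreement across the ensemble
    if uncertainty > threshold:
        return safe_action                   # e.g., slow down or hand off control
    return preds.mean(axis=0)                # trust the learned prediction

# Toy usage with two "models" that are simple functions of the input.
models = [lambda x: 2.0 * x, lambda x: 2.1 * x]
action = predict_with_fallback(models, np.array([1.0, 2.0]),
                               safe_action=np.zeros(2))
```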

    Azizan hopes to continue working on algorithms for even larger-scale systems. He and his team are designing efficient algorithms to better control supply and demand in smart energy grids. According to Azizan, even if we create the most efficient solar panels and batteries, we can never achieve a sustainable grid powered by renewable resources without the right control mechanisms.

    Mechanical engineers like Ahmed, Mueller, Deng, and Azizan serve as the key to realizing the next revolution of computing in design.

    “MechE is in a unique position at the intersection of the computational and physical worlds,” Azizan says. “Mechanical engineers build a bridge between theoretical, algorithmic tools and real, physical world applications.”

Sophisticated computational tools, coupled with the ground truth mechanical engineers have in the physical world, could unlock limitless possibilities for design engineering, well beyond what could have been imagined in those early days of CAD.

    3 Questions: Peko Hosoi on the data-driven reasoning behind MIT’s Covid-19 policies for the fall

    As students, faculty, and staff prepare for a full return to the MIT campus in the weeks ahead, procedures for entering buildings, navigating classrooms and labs, and interacting with friends and colleagues will likely take some getting used to.

The Institute recently reinstated its policies for indoor masking and has also continued to require regular testing for people who live, work, or study on campus — procedures that apply to both vaccinated and unvaccinated individuals. Vaccination is required for all students, faculty, and staff on campus unless a medical or religious exemption is granted.

    These and other policies adopted by MIT to control the spread of Covid-19 have been informed by modeling efforts from a volunteer group of MIT faculty, students, and postdocs. The collaboration, dubbed Isolat, was co-founded by Anette “Peko” Hosoi, the Neil and Jane Pappalardo Professor of Mechanical Engineering and associate dean in the School of Engineering.

    The group, which is organized through MIT’s Institute for Data, Systems, and Society (IDSS), has run numerous models to show how measures such as mask wearing, testing, ventilation, and quarantining could affect Covid-19’s spread. These models have helped to shape MIT’s Covid-19 policies throughout the pandemic, including its procedures for returning to campus this fall.

    Hosoi spoke with MIT News about the data-backed reasoning behind some of these procedures, including indoor masking and regular testing, and how a “generous community” will help MIT safely weather the virus and its variants.

    Q: Take us through how you have been modeling Covid-19 and its variants, in regard to helping MIT shape its Covid policies. What’s the approach you’ve taken, and why?

    A: The approach we’re taking uses a simple counting exercise developed in IDSS to estimate the balance of testing, masking, and vaccination that is required to keep the virus in check. The underlying objective is to find infected people faster, on average, than they can infect others, which is captured in a simple algebraic expression. Our objective can be accomplished either by speeding up the rate of finding infected people (i.e. increasing testing frequency) or slowing down the rate of infection (i.e. increasing masking and vaccination) or by a combination of both. To give you a sense of the numbers, balances for different levels of testing are shown in the chart below for a vaccine efficacy of 67 percent and a contagious period of 18 days (which are the CDC’s latest parameters for the Delta variant).
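One plausible, simplified form of that counting argument (an illustrative reconstruction, not necessarily the IDSS group's exact expression): if everyone is tested every \(\tau\) days, an infection is detected after roughly \(\tau/2\) days on average, during which it produces about \(R_0(1-\epsilon v)\,(\tau/2)/T\) new cases, where \(T\) is the contagious period, \(\epsilon\) the vaccine efficacy, and \(v\) the vaccinated fraction. The virus is kept in check when that number stays below one:

\[ R_0\,(1 - \epsilon v)\,\frac{\tau}{2T} < 1 . \]

Faster testing (smaller \(\tau\)) or more masking and vaccination (a smaller effective \(R_0(1-\epsilon v)\)) both move the balance in the right direction.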

The vertical axis shows the now-famous reproduction number R0, i.e. the average number of people that one infected person will infect throughout the course of their illness. These R0 values are averages for the population; in specific circumstances the spread could be higher.

    Each blue line represents a different testing frequency: Below the line, the virus is controlled; above the line, it spreads. For example, the dotted blue line shows the boundary if we rely solely on vaccination with no testing. In that case, even if everyone is vaccinated, we can only control up to an R0 of about 3.  Unfortunately, the CDC places R0 of the Delta variant somewhere between 5 and 9, so vaccination alone is insufficient to control the spread. (As an aside, this also means that given the efficacy estimates for the current vaccines, herd immunity is not possible.)

    Next consider the dashed blue line, which represents the stability boundary if we test everyone once per week. If our vaccination rate is greater than about 90 percent, testing one time per week can control even the CDC’s most pessimistic estimate for the Delta variant’s R0.

    Q: In returning to campus over the next few weeks, indoor masking and regular testing are required of every MIT community member, even those who are vaccinated. What in your modeling has shown that each of these policies is necessary?

    A: Given that the chart above shows that vaccination and weekly testing are sufficient to control the virus, one should certainly ask “Why have we reinstated indoor masking?” The answer is related to the fact that, as a university, our population turns over once a year; every September we bring in a few thousand new people. Those people are coming from all over the world, and some of them may not have had the opportunity to get vaccinated yet. The good news is that MIT Medical has vaccines and will be administering them to any unvaccinated students as soon as they arrive; the bad news is that, as we all know, it takes three to five weeks for resistance to build up, depending on the vaccine. This means that we should think of August and September as a transition period during which the vaccination rates may fluctuate as new people arrive. 

    The other revelation that has informed our policies for September is the recent report from the CDC that infected vaccinated people carry roughly the same viral load as unvaccinated infected people. This suggests that vaccinated people — although they are highly unlikely to get seriously ill — are a consequential part of the transmission chain and can pass the virus along to others. So, in order to avoid giving the virus to people who are not yet fully vaccinated during the transition period, we all need to exercise a little extra care to give the newly vaccinated time for their immune systems to ramp up. 

    Q: As the fall progresses, what signs are you looking for that might shift decisions on masking and testing on campus?

    A: Eventually we will have to shift responsibility toward individuals rather than institutions, and allow people to make decisions about masks and testing based on their own risk tolerance. The success of the vaccines in suppressing severe illness will enable us to shift to a position in which our objective is not necessarily to control the spread of the virus, but rather to reduce the risk of serious outcomes to an acceptable level. There are many people who believe we need to make this adjustment and wean ourselves off pandemic living. They are right; we cannot continue like this forever. However, we have not played all our cards yet, and, in my opinion, we need to carefully consider what’s left in our hand before we abdicate institutional responsibility.

    The final ace we have to play is vaccinating kids. It is important to remember that we have many people in our community with kids who are too young to be vaccinated and, understandably, those parents do not want to bring Covid home to their children. Furthermore, our campus is not just a workplace; it is also home to thousands of people, some of whom have children living in our residences or attending an MIT childcare center. Given that context, and the high probability that a vaccine will be approved for children in the near future, it is my belief that our community has the empathy and fortitude to try to keep the virus in check until parents have the option to protect their children with vaccines. 

    Bearing in mind that children constitute an unprotected portion of our population, let me return to the original question and speculate on the fate of masks and testing in the fall. Regarding testing, the analysis suggests that we cannot give that up entirely if we would like to control the spread of the virus. Second, control of the virus is not the only benefit we get from testing. It also gives us situational awareness, serves as an early warning beacon, and provides information that individual members of the community can use as they make decisions about their own risk budget. Personally, I’ve been testing for a year now and I find it easy and reassuring. Honestly, it’s nice to know that I’m Covid-free before I see friends (outside!) or go home to my family.

    Regarding masks, there is always uncertainty around whether a new variant will arise or whether vaccine efficacy will fade, but, given the current parameters and our analysis, my hope is that we will be in a position to provide some relief on the mask mandate once the incoming members of our population have been fully vaccinated. I also suspect that whenever the mask mandate is lifted, masks are not likely to go away. There are certainly situations in which I will continue to wear a mask regardless of the mandate, and many in our community will continue to feel safer wearing masks even when they are not required.

    I believe that we are a generous community and that we will be willing to take precautions to help keep each other healthy. The students who were on campus last year did an outstanding job, and they have given me a tremendous amount of faith that we can be considerate and good to one another even in extremely trying times.
