More stories

  • To improve solar and other clean energy tech, look beyond hardware

    To continue reducing the costs of solar energy and other clean energy technologies, scientists and engineers will likely need to focus, at least in part, on improving technology features that are not based on hardware, according to MIT researchers. They describe this finding and the mechanisms behind it today in Nature Energy.

    While the cost of installing a solar energy system has dropped by more than 99 percent since 1980, this new analysis shows that “soft technology” features, such as the codified permitting practices, supply chain management techniques, and system design processes that go into deploying a solar energy plant, contributed only 10 to 15 percent of total cost declines. Improvements to hardware features were responsible for the lion’s share.

    But because soft technology is increasingly dominating the total costs of installing solar energy systems, this trend threatens to slow future cost savings and hamper the global transition to clean energy, says the study’s senior author, Jessika Trancik, a professor in MIT’s Institute for Data, Systems, and Society (IDSS).

    Trancik’s co-authors include lead author Magdalena M. Klemun, a former IDSS graduate student and postdoc who is now an assistant professor at the Hong Kong University of Science and Technology; Goksin Kavlak, a former IDSS graduate student and postdoc who is now an associate at the Brattle Group; and James McNerney, a former IDSS postdoc and now senior research fellow at the Harvard Kennedy School.

    The team created a quantitative model to analyze the cost evolution of solar energy systems, which captures the contributions of both hardware technology features and soft technology features.

    The framework shows that soft technology hasn’t improved much over time — and that soft technology features contributed even less to overall cost declines than previously estimated.

    Their findings indicate that to reverse this trend and accelerate cost declines, engineers could look at making solar energy systems less reliant on soft technology to begin with, or they could tackle the problem directly by improving inefficient deployment processes.  

    “Really understanding where the efficiencies and inefficiencies are, and how to address those inefficiencies, is critical in supporting the clean energy transition. We are making huge investments of public dollars into this, and soft technology is going to be absolutely essential to making those funds count,” says Trancik.

    “However,” Klemun adds, “we haven’t been thinking about soft technology design as systematically as we have for hardware. That needs to change.”

    The hard truth about soft costs

    Researchers have observed that the so-called “soft costs” of building a solar power plant — the costs of designing and installing the plant — are becoming a much larger share of total costs. In fact, the share of soft costs now typically ranges from 35 to 64 percent.

    “We wanted to take a closer look at where these soft costs were coming from and why they weren’t coming down over time as quickly as the hardware costs,” Trancik says.

    In the past, scientists have modeled the change in solar energy costs by dividing total costs into additive components — hardware components and nonhardware components — and then tracking how these components changed over time.

    “But if you really want to understand where those rates of change are coming from, you need to go one level deeper to look at the technology features. Then things split out differently,” Trancik says.

    The researchers developed a quantitative approach that models the change in solar energy costs over time by assigning contributions to the individual technology features, including both hardware features and soft technology features.

    For instance, their framework would capture how much of the decline in system installation costs — a soft cost — is due to standardized practices of certified installers — a soft technology feature. It would also capture how that same soft cost is affected by increased photovoltaic module efficiency — a hardware technology feature.
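
    As a rough illustration of this kind of feature-level accounting (a toy sketch with made-up numbers, not the authors’ actual model), the snippet below attributes the change in a single soft cost, installation labor, to hypothetical features it depends on, using log-differences. Because the toy cost is multiplicative in its features, the per-feature shares sum to 100 percent.

    ```python
    # Toy feature-level cost decomposition (illustrative numbers, not the paper's
    # model): attribute the change in one soft cost -- installation labor -- to
    # the features it depends on, using log-differences.
    import math

    features_1980 = {"wage": 20.0, "hours_per_kw": 40.0, "module_efficiency": 0.09}
    features_2017 = {"wage": 30.0, "hours_per_kw": 25.0, "module_efficiency": 0.18}

    def install_cost(f):
        # Higher module efficiency (a hardware feature) means fewer panels, and
        # fewer labor hours, per kilowatt installed.
        return f["wage"] * f["hours_per_kw"] / (f["module_efficiency"] / 0.09)

    total = math.log(install_cost(features_2017) / install_cost(features_1980))

    for name in features_1980:
        varied = dict(features_1980, **{name: features_2017[name]})   # change one feature
        share = math.log(install_cost(varied) / install_cost(features_1980)) / total
        print(f"{name:18s} accounts for {share:+.0%} of the change in installation cost")
    ```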

    With this approach, the researchers saw that improvements in hardware had the greatest impacts on driving down soft costs in solar energy systems. For example, the efficiency of photovoltaic modules doubled between 1980 and 2017, reducing overall system costs by 17 percent. But about 40 percent of that overall decline could be attributed to reductions in soft costs tied to improved module efficiency.

    The framework shows that, while hardware technology features tend to improve many cost components, soft technology features affect only a few.

    “You can see this structural difference even before you collect data on how the technologies have changed over time. That’s why mapping out a technology’s network of cost dependencies is a useful first step to identify levers of change, for solar PV and for other technologies as well,” Klemun notes.  

    Static soft technology

    The researchers used their model to study several countries, since soft costs can vary widely around the world. For instance, solar energy soft costs in Germany are about 50 percent less than those in the U.S.

    The fact that hardware technology improvements are often shared globally led to dramatic declines in costs over the past few decades across locations, the analysis showed. Soft technology innovations typically aren’t shared across borders. Moreover, the team found that countries with better soft technology performance 20 years ago still have better performance today, while those with worse performance didn’t see much improvement.

    This country-by-country difference could be driven by regulation and permitting processes, cultural factors, or by market dynamics such as how firms interact with each other, Trancik says.

    “But not all soft technology variables are ones that you would want to change in a cost-reducing direction, like lower wages. So, there are other considerations, beyond just bringing the cost of the technology down, that we need to think about when interpreting these results,” she says.

    Their analysis points to two strategies for reducing soft costs. For one, scientists could focus on developing hardware improvements that make soft costs more dependent on hardware technology variables and less on soft technology variables, such as by creating simpler, more standardized equipment that could reduce on-site installation time.

    Or researchers could directly target soft technology features without changing hardware, perhaps by creating more efficient workflows for system installation or automated permitting platforms.

    “In practice, engineers will often pursue both approaches, but separating the two in a formal model makes it easier to target innovation efforts by leveraging specific relationships between technology characteristics and costs,” Klemun says.

    “Often, when we think about information processing, we are leaving out processes that still happen in a very low-tech way through people communicating with one another. But it is just as important to think about that as a technology as it is to design fancy software,” Trancik notes.

    In the future, she and her collaborators want to apply their quantitative model to study the soft costs related to other technologies, such as electric vehicle charging and nuclear fission. They are also interested in better understanding the limits of soft technology improvement, and how one could design better soft technology from the outset.

    This research is funded by the U.S. Department of Energy Solar Energy Technologies Office.

  • How machine learning models can amplify inequities in medical diagnosis and treatment

    Prior to receiving a PhD in computer science from MIT in 2017, Marzyeh Ghassemi had already begun to wonder whether the use of AI techniques might enhance the biases that already existed in health care. She was one of the early researchers to take up this issue, and she’s been exploring it ever since. In a new paper, Ghassemi, now an assistant professor in MIT’s Department of Electrical Engineering and Computer Science (EECS), and three collaborators based at the Computer Science and Artificial Intelligence Laboratory, have probed the roots of the disparities that can arise in machine learning, often causing models that perform well overall to falter when it comes to subgroups for which relatively few data have been collected and utilized in the training process. The paper — written by two MIT PhD students, Yuzhe Yang and Haoran Zhang, EECS computer scientist Dina Katabi (the Thuan and Nicole Pham Professor), and Ghassemi — was presented last month at the 40th International Conference on Machine Learning in Honolulu, Hawaii.

    In their analysis, the researchers focused on “subpopulation shifts” — differences in the way machine learning models perform for one subgroup as compared to another. “We want the models to be fair and work equally well for all groups, but instead we consistently observe the presence of shifts among different groups that can lead to inferior medical diagnosis and treatment,” says Yang, who is a lead author of the paper along with Zhang. The main point of their inquiry is to determine the kinds of subpopulation shifts that can occur and to uncover the mechanisms behind them so that, ultimately, more equitable models can be developed.

    The new paper “significantly advances our understanding” of the subpopulation shift phenomenon, claims Stanford University computer scientist Sanmi Koyejo. “This research contributes valuable insights for future advancements in machine learning models’ performance on underrepresented subgroups.”

    Camels and cattle

    The MIT group has identified four principal types of shifts — spurious correlations, attribute imbalance, class imbalance, and attribute generalization — which, according to Yang, “have never been put together into a coherent and unified framework. We’ve come up with a single equation that shows you where biases can come from.”

    Biases can, in fact, stem from what the researchers call the class, or from the attribute, or both. To pick a simple example, suppose the task assigned to the machine learning model is to sort images of objects — animals in this case — into two classes: cows and camels. Attributes are descriptors that don’t specifically relate to the class itself. It might turn out, for instance, that all the images used in the analysis show cows standing on grass and camels on sand — grass and sand serving as the attributes here. Given the data available to it, the machine could reach an erroneous conclusion — namely that cows can only be found on grass, not on sand, with the opposite being true for camels. Such a finding would be incorrect, however, giving rise to a spurious correlation, which, Yang explains, is a “special case” among subpopulation shifts — “one in which you have a bias in both the class and the attribute.”

    In a medical setting, one could rely on machine learning models to determine whether a person has pneumonia or not based on an examination of X-ray images. There would be two classes in this situation, one consisting of people who have the lung ailment, another for those who are infection-free. A relatively straightforward case would involve just two attributes: the people getting X-rayed are either female or male. If, in this particular dataset, there were 100 males diagnosed with pneumonia for every one female diagnosed with pneumonia, that could lead to an attribute imbalance, and the model would likely do a better job of correctly detecting pneumonia for a man than for a woman. Similarly, having 1,000 times more healthy (pneumonia-free) subjects than sick ones would lead to a class imbalance, with the model biased toward healthy cases. Attribute generalization is the last shift highlighted in the new study. If your sample contained 100 male patients with pneumonia and zero female subjects with the same illness, you still would like the model to be able to generalize and make predictions about female subjects even though there are no samples in the training data for females with pneumonia.

    The team then took 20 advanced algorithms, designed to carry out classification tasks, and tested them on a dozen datasets to see how they performed across different population groups. They reached some unexpected conclusions: By improving the “classifier,” which is the last layer of the neural network, they were able to reduce the occurrence of spurious correlations and class imbalance, but the other shifts were unaffected. Improvements to the “encoder,” one of the uppermost layers in the neural network, could reduce the problem of attribute imbalance. “However, no matter what we did to the encoder or classifier, we did not see any improvements in terms of attribute generalization,” Yang says, “and we don’t yet know how to address that.”

    Precisely accurate

    There is also the question of assessing how well your model actually works in terms of evenhandedness among different population groups. The metric normally used, called worst-group accuracy or WGA, is based on the assumption that if you can improve the accuracy — of, say, medical diagnosis — for the group that has the worst model performance, you would have improved the model as a whole. “The WGA is considered the gold standard in subpopulation evaluation,” the authors contend, but they made a surprising discovery: boosting worst-group accuracy results in a decrease in what they call “worst-case precision.” In medical decision-making of all sorts, one needs both accuracy — which speaks to the validity of the findings — and precision, which relates to the reliability of the methodology. “Precision and accuracy are both very important metrics in classification tasks, and that is especially true in medical diagnostics,” Yang explains. “You should never trade precision for accuracy. You always need to balance the two.”
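
    To make the two metrics concrete, here is a minimal sketch with toy data (not the paper’s code): it computes accuracy and precision separately for each subgroup and then reports the worst group under each metric, the quantities involved in the trade-off described above.

    ```python
    # Toy illustration of subgroup evaluation: per-group accuracy and precision,
    # then the worst group under each metric. Data and group labels are made up.
    import numpy as np

    y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0, 0, 0])   # 1 = has pneumonia
    y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 1, 0, 0])   # model predictions
    group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])   # subgroup label (e.g., sex)

    accuracy  = lambda t, p: np.mean(t == p)
    precision = lambda t, p: np.sum((p == 1) & (t == 1)) / max(np.sum(p == 1), 1)

    def per_group(metric):
        return {g: metric(y_true[group == g], y_pred[group == g])
                for g in np.unique(group)}

    print("worst-group accuracy :", min(per_group(accuracy).values()))
    print("worst-case precision :", min(per_group(precision).values()))
    ```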

    The MIT scientists are putting their theories into practice. In a study they’re conducting with a medical center, they’re looking at public datasets for tens of thousands of patients and hundreds of thousands of chest X-rays, trying to see whether it’s possible for machine learning models to work in an unbiased manner for all populations. That’s still far from the case, even though more awareness has been drawn to this problem, Yang says. “We are finding many disparities across different ages, gender, ethnicity, and intersectional groups.”

    He and his colleagues agree on the eventual goal, which is to achieve fairness in health care among all populations. But before we can reach that point, they maintain, we still need a better understanding of the sources of unfairness and how they permeate our current system. Reforming the system as a whole will not be easy, they acknowledge. In fact, the title of the paper they introduced at the Honolulu conference, “Change is Hard,” gives some indication of the challenges that they and like-minded researchers face.

  • The tenured engineers of 2023

    In 2023, MIT granted tenure to nine faculty members across the School of Engineering. This year’s tenured engineers hold appointments in the departments of Biological Engineering, Civil and Environmental Engineering, Electrical Engineering and Computer Science (which reports jointly to the School of Engineering and MIT Schwarzman College of Computing), Materials Science and Engineering, and Mechanical Engineering, as well as the Institute for Medical Engineering and Science (IMES).

    “I am truly inspired by this remarkable group of talented faculty members,” says Anantha Chandrakasan, dean of the School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science. “The work they are doing, both in the lab and in the classroom, has made a tremendous impact at MIT and in the wider world. Their important research has applications in a diverse range of fields and industries. I am thrilled to congratulate them on the milestone of receiving tenure.”

    This year’s newly tenured engineering faculty include:

    Michael Birnbaum, Class of 1956 Career Development Professor, associate professor of biological engineering, and faculty member at the Koch Institute for Integrative Cancer Research at MIT, works on understanding and manipulating immune recognition in cancer and infections. By using a variety of techniques to study the antigen recognition of T cells, he and his team aim to develop the next generation of immunotherapies.  
    Tamara Broderick, associate professor of electrical engineering and computer science and member of the MIT Laboratory for Information and Decision Systems (LIDS) and the MIT Institute for Data, Systems, and Society (IDSS), works to provide fast and reliable quantification of uncertainty and robustness in modern data analysis procedures. Broderick and her research group develop data analysis tools with applications in fields including genetics, economics, and assistive technology.
    Tal Cohen, associate professor of civil and environmental engineering and mechanical engineering, uses nonlinear solid mechanics to understand how materials behave under extreme conditions. By studying material instabilities, extreme dynamic loading conditions, growth, and chemical coupling, Cohen and her team combine theoretical models and experiments to shape our understanding of the observed phenomena and apply those insights in the design and characterization of material systems. 
    Betar Gallant, Class of 1922 Career Development Professor and associate professor of mechanical engineering, develops advanced materials and chemistries for next-generation lithium-ion and lithium primary batteries and electrochemical carbon dioxide mitigation technologies. Her group’s work could lead to higher-energy and more sustainable batteries for electric vehicles, longer-lasting implantable medical devices, and new methods of carbon capture and conversion. 
    Rafael Jaramillo, Thomas Lord Career Development Professor and associate professor of materials science and engineering, studies the synthesis, properties, and applications of electronic materials, particularly chalcogenide compound semiconductors. His work has applications in microelectronics, integrated photonics, telecommunications, and photovoltaics. 
    Benedetto Marelli, associate professor of civil and environmental engineering, conducts research on the synthesis, assembly, and nanomanufacturing of structural biopolymers. He and his research team develop biomaterials for applications in agriculture, food security, and food safety. 
    Ellen Roche, Latham Family Career Development Professor, an associate professor of mechanical engineering, and a core faculty member of IMES, designs and develops implantable, biomimetic therapeutic devices and soft robotics that mechanically assist and repair tissue, deliver therapies, and enable enhanced preclinical testing. Her devices have a wide range of applications in human health, including cardiovascular and respiratory disease.
    Serguei Saavedra, associate professor of civil and environmental engineering, uses systems thinking, synthesis, and mathematical modeling to study the persistence of ecological systems under changing environments. His theoretical research is used to develop hypotheses and corroborate predictions of how ecological systems respond to climate change. 
    Justin Solomon, associate professor of electrical engineering and computer science and member of the MIT Computer Science and Artificial Intelligence Laboratory and MIT Center for Computational Science and Engineering, works at the intersection of geometry, large-scale optimization, computer graphics, and machine learning. His research has diverse applications in machine learning, computer graphics, and geometric data processing.

  • A simpler method for learning to control a robot

    Researchers from MIT and Stanford University have devised a new machine-learning approach that could be used to control a robot, such as a drone or autonomous vehicle, more effectively and efficiently in dynamic environments where conditions can change rapidly.

    This technique could help an autonomous vehicle learn to compensate for slippery road conditions to avoid going into a skid, allow a robotic free-flyer to tow different objects in space, or enable a drone to closely follow a downhill skier despite being buffeted by strong winds.

    The researchers’ approach incorporates certain structure from control theory into the process for learning a model, in a way that leads to an effective method of controlling complex dynamics, such as those caused by impacts of wind on the trajectory of a flying vehicle. One way to think about this structure is as a hint that can help guide how to control a system.

    “The focus of our work is to learn intrinsic structure in the dynamics of the system that can be leveraged to design more effective, stabilizing controllers,” says Navid Azizan, the Esther and Harold E. Edgerton Assistant Professor in the MIT Department of Mechanical Engineering and the Institute for Data, Systems, and Society (IDSS), and a member of the Laboratory for Information and Decision Systems (LIDS). “By jointly learning the system’s dynamics and these unique control-oriented structures from data, we’re able to naturally create controllers that function much more effectively in the real world.”

    Using this structure in a learned model, the researchers’ technique immediately extracts an effective controller from the model, as opposed to other machine-learning methods that require a controller to be derived or learned separately with additional steps. With this structure, their approach is also able to learn an effective controller using fewer data than other approaches. This could help their learning-based control system achieve better performance faster in rapidly changing environments.

    “This work tries to strike a balance between identifying structure in your system and just learning a model from data,” says lead author Spencer M. Richards, a graduate student at Stanford University. “Our approach is inspired by how roboticists use physics to derive simpler models for robots. Physical analysis of these models often yields a useful structure for the purposes of control — one that you might miss if you just tried to naively fit a model to data. Instead, we try to identify similarly useful structure from data that indicates how to implement your control logic.”

    Additional authors of the paper are Jean-Jacques Slotine, professor of mechanical engineering and of brain and cognitive sciences at MIT, and Marco Pavone, associate professor of aeronautics and astronautics at Stanford. The research will be presented at the International Conference on Machine Learning (ICML).

    Learning a controller

    Determining the best way to control a robot to accomplish a given task can be a difficult problem, even when researchers know how to model everything about the system.

    A controller is the logic that enables a drone to follow a desired trajectory, for example. This controller would tell the drone how to adjust its rotor forces to compensate for the effect of winds that can knock it off a stable path to reach its goal.

    This drone is a dynamical system — a physical system that evolves over time. In this case, its position and velocity change as it flies through the environment. If such a system is simple enough, engineers can derive a controller by hand. 

    Modeling a system by hand intrinsically captures a certain structure based on the physics of the system. For instance, if a robot were modeled manually using differential equations, these would capture the relationship between velocity, acceleration, and force. Acceleration is the rate of change in velocity over time, which is determined by the robot’s mass and the forces applied to it.

    But often the system is too complex to be exactly modeled by hand. Aerodynamic effects, like the way swirling wind pushes a flying vehicle, are notoriously difficult to derive manually, Richards explains. Researchers would instead take measurements of the drone’s position, velocity, and rotor speeds over time, and use machine learning to fit a model of this dynamical system to the data. But these approaches typically don’t learn a control-based structure. This structure is useful in determining how to best set the rotor speeds to direct the motion of the drone over time.

    Once they have modeled the dynamical system, many existing approaches also use data to learn a separate controller for the system.

    “Other approaches that try to learn dynamics and a controller from data as separate entities are a bit detached philosophically from the way we normally do it for simpler systems. Our approach is more reminiscent of deriving models by hand from physics and linking that to control,” Richards says.

    Identifying structure

    The team from MIT and Stanford developed a technique that uses machine learning to learn the dynamics model, but in such a way that the model has some prescribed structure that is useful for controlling the system.

    With this structure, they can extract a controller directly from the dynamics model, rather than using data to learn an entirely separate model for the controller.

    “We found that beyond learning the dynamics, it’s also essential to learn the control-oriented structure that supports effective controller design. Our approach of learning state-dependent coefficient factorizations of the dynamics has outperformed the baselines in terms of data efficiency and tracking capability, proving to be successful in efficiently and effectively controlling the system’s trajectory,” Azizan says. 
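
    The idea can be sketched on a one-dimensional toy system (a general illustration of state-dependent coefficient factorization, not the authors’ implementation): if the learned dynamics are written as x_dot = A(x)x + B(x)u, a stabilizing feedback can be read off the factorization pointwise, here by solving a scalar Riccati equation at each state.

    ```python
    # Toy sketch: dynamics in state-dependent coefficient form x_dot = A(x)x + B(x)u,
    # with a feedback gain computed pointwise from the factorization (scalar LQR).
    # A and B below stand in for functions that would be fitted to data.
    import numpy as np

    def A(x):                       # hypothetical learned drift coefficient
        return 1.0 + 0.5 * np.sin(x)

    def B(x):                       # hypothetical learned input gain
        return 1.0

    def lqr_gain(a, b, q=1.0, r=0.1):
        # Scalar algebraic Riccati equation: 2*a*p - (b*p)**2 / r + q = 0
        p = r * (a + np.sqrt(a**2 + q * b**2 / r)) / b**2
        return b * p / r            # feedback gain k, so the control is u = -k * x

    x, dt = 3.0, 0.01
    for _ in range(500):            # 5 seconds of closed-loop Euler simulation
        u = -lqr_gain(A(x), B(x)) * x
        x = x + dt * (A(x) * x + B(x) * u)
    print(f"state after 5 s: {x:.6f}")   # decays toward zero despite unstable A(x)
    ```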

    When they tested this approach, their controller closely followed desired trajectories, outpacing all the baseline methods. The controller extracted from their learned model nearly matched the performance of a ground-truth controller, which is built using the exact dynamics of the system.

    “By making simpler assumptions, we got something that actually worked better than other complicated baseline approaches,” Richards adds.

    The researchers also found that their method was data-efficient, which means it achieved high performance even with little data. For instance, it could effectively model a highly dynamic rotor-driven vehicle using only 100 data points. Methods that used multiple learned components saw their performance drop much faster with smaller datasets.

    This efficiency could make their technique especially useful in situations where a drone or robot needs to learn quickly in rapidly changing conditions.

    Plus, their approach is general and could be applied to many types of dynamical systems, from robotic arms to free-flying spacecraft operating in low-gravity environments.

    In the future, the researchers are interested in developing models that are more physically interpretable, and that would be able to identify very specific information about a dynamical system, Richards says. This could lead to better-performing controllers.

    “Despite its ubiquity and importance, nonlinear feedback control remains an art, making it especially suitable for data-driven and learning-based methods. This paper makes a significant contribution to this area by proposing a method that jointly learns system dynamics, a controller, and control-oriented structure,” says Nikolai Matni, an assistant professor in the Department of Electrical and Systems Engineering at the University of Pennsylvania, who was not involved with this work. “What I found particularly exciting and compelling was the integration of these components into a joint learning algorithm, such that control-oriented structure acts as an inductive bias in the learning process. The result is a data-efficient learning process that outputs dynamic models that enjoy intrinsic structure that enables effective, stable, and robust control. While the technical contributions of the paper are excellent themselves, it is this conceptual contribution that I view as most exciting and significant.”

    This research is supported, in part, by the NASA University Leadership Initiative and the Natural Sciences and Engineering Research Council of Canada.

  • A faster way to teach a robot

    Imagine purchasing a robot to perform household tasks. This robot was built and trained in a factory on a certain set of tasks and has never seen the items in your home. When you ask it to pick up a mug from your kitchen table, it might not recognize your mug (perhaps because this mug is painted with an unusual image, say, of MIT’s mascot, Tim the Beaver). So, the robot fails.

    “Right now, the way we train these robots, when they fail, we don’t really know why. So you would just throw up your hands and say, ‘OK, I guess we have to start over.’ A critical component that is missing from this system is enabling the robot to demonstrate why it is failing so the user can give it feedback,” says Andi Peng, an electrical engineering and computer science (EECS) graduate student at MIT.

    Peng and her collaborators at MIT, New York University, and the University of California at Berkeley created a framework that enables humans to quickly teach a robot what they want it to do, with a minimal amount of effort.

    When a robot fails, the system uses an algorithm to generate counterfactual explanations that describe what needed to change for the robot to succeed. For instance, maybe the robot would have been able to pick up the mug if the mug were a certain color. It shows these counterfactuals to the human and asks for feedback on why the robot failed. Then the system utilizes this feedback and the counterfactual explanations to generate new data it uses to fine-tune the robot.

    Fine-tuning involves tweaking a machine-learning model that has already been trained to perform one task, so it can perform a second, similar task.

    The researchers tested this technique in simulations and found that it could teach a robot more efficiently than other methods. The robots trained with this framework performed better, while the training process consumed less of a human’s time.

    This framework could help robots learn faster in new environments without requiring a user to have technical knowledge. In the long run, this could be a step toward enabling general-purpose robots to efficiently perform daily tasks for the elderly or individuals with disabilities in a variety of settings.

    Peng, the lead author, is joined by co-authors Aviv Netanyahu, an EECS graduate student; Mark Ho, an assistant professor at the Stevens Institute of Technology; Tianmin Shu, an MIT postdoc; Andreea Bobu, a graduate student at UC Berkeley; and senior authors Julie Shah, an MIT professor of aeronautics and astronautics and the director of the Interactive Robotics Group in the Computer Science and Artificial Intelligence Laboratory (CSAIL), and Pulkit Agrawal, a professor in CSAIL. The research will be presented at the International Conference on Machine Learning.

    On-the-job training

    Robots often fail due to distribution shift — the robot is presented with objects and spaces it did not see during training, and it doesn’t understand what to do in this new environment.

    One way to retrain a robot for a specific task is imitation learning. The user could demonstrate the correct task to teach the robot what to do. If a user tries to teach a robot to pick up a mug, but demonstrates with a white mug, the robot could learn that all mugs are white. It may then fail to pick up a red, blue, or “Tim-the-Beaver-brown” mug.

    Training a robot to recognize that a mug is a mug, regardless of its color, could take thousands of demonstrations.

    “I don’t want to have to demonstrate with 30,000 mugs. I want to demonstrate with just one mug. But then I need to teach the robot so it recognizes that it can pick up a mug of any color,” Peng says.

    To accomplish this, the researchers’ system determines what specific object the user cares about (a mug) and what elements aren’t important for the task (perhaps the color of the mug doesn’t matter). It uses this information to generate new, synthetic data by changing these “unimportant” visual concepts. This process is known as data augmentation.

    The framework has three steps. First, it shows the task that caused the robot to fail. Then it collects a demonstration from the user of the desired actions and generates counterfactuals by searching over all features in the space that show what needed to change for the robot to succeed.

    The system shows these counterfactuals to the user and asks for feedback to determine which visual concepts do not impact the desired action. Then it uses this human feedback to generate many new augmented demonstrations.

    In this way, the user could demonstrate picking up one mug, but the system would produce demonstrations showing the desired action with thousands of different mugs by altering the color. It uses these data to fine-tune the robot.
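
    A stripped-down sketch of that augmentation step might look like the snippet below (the data structures and the fine-tuning call are hypothetical placeholders, not the framework’s actual interface): one demonstration, plus feedback that color is irrelevant, yields many synthetic demonstrations that differ only in color.

    ```python
    # Toy sketch of counterfactual-driven data augmentation (names and structures
    # are illustrative, not the framework's actual interface).
    import copy
    import random

    demo = {
        "object": {"type": "mug", "color": "white"},
        "actions": ["approach", "grasp", "lift"],
    }
    # User feedback: these visual concepts do not affect the desired action.
    irrelevant = {"color": ["red", "blue", "green", "brown", "black"]}

    def augment(demo, irrelevant, n=1000):
        synthetic = []
        for _ in range(n):
            new_demo = copy.deepcopy(demo)
            for concept, values in irrelevant.items():
                new_demo["object"][concept] = random.choice(values)  # vary what doesn't matter
            synthetic.append(new_demo)
        return synthetic

    augmented = augment(demo, irrelevant)
    print(len(augmented), "synthetic demonstrations, e.g.", augmented[0]["object"])
    # fine_tune(policy, [demo] + augmented)   # hypothetical fine-tuning step
    ```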

    Creating counterfactual explanations and soliciting feedback from the user are critical for the technique to succeed, Peng says.

    From human reasoning to robot reasoning

    Because their work seeks to put the human in the training loop, the researchers tested their technique with human users. They first conducted a study in which they asked people if counterfactual explanations helped them identify elements that could be changed without affecting the task.

    “It was so clear right off the bat. Humans are so good at this type of counterfactual reasoning. And this counterfactual step is what allows human reasoning to be translated into robot reasoning in a way that makes sense,” she says.

    Then they applied their framework to three simulations where robots were tasked with: navigating to a goal object, picking up a key and unlocking a door, and picking up a desired object then placing it on a tabletop. In each instance, their method enabled the robot to learn faster than with other techniques, while requiring fewer demonstrations from users.

    Moving forward, the researchers hope to test this framework on real robots. They also want to focus on reducing the time it takes the system to create new data using generative machine-learning models.

    “We want robots to do what humans do, and we want them to do it in a semantically meaningful way. Humans tend to operate in this abstract space, where they don’t think about every single property in an image. At the end of the day, this is really about enabling a robot to learn a good, human-like representation at an abstract level,” Peng says.

    This research is supported, in part, by a National Science Foundation Graduate Research Fellowship, Open Philanthropy, an Apple AI/ML Fellowship, Hyundai Motor Corporation, the MIT-IBM Watson AI Lab, and the National Science Foundation Institute for Artificial Intelligence and Fundamental Interactions.

  • A new way to look at data privacy

    Imagine that a team of scientists has developed a machine-learning model that can predict whether a patient has cancer from lung scan images. They want to share this model with hospitals around the world so clinicians can start using it in diagnosis.

    But there’s a problem. To teach their model how to predict cancer, they showed it millions of real lung scan images, a process called training. Those sensitive data, which are now encoded into the inner workings of the model, could potentially be extracted by a malicious agent. The scientists can prevent this by adding noise, or more generic randomness, to the model that makes it harder for an adversary to guess the original data. However, perturbation reduces a model’s accuracy, so the less noise one can add, the better.

    MIT researchers have developed a technique that enables the user to potentially add the smallest amount of noise possible, while still ensuring the sensitive data are protected.

    The researchers created a new privacy metric, which they call Probably Approximately Correct (PAC) Privacy, and built a framework based on this metric that can automatically determine the minimal amount of noise that needs to be added. Moreover, this framework does not need knowledge of the inner workings of a model or its training process, which makes it easier to use for different types of models and applications.

    In several cases, the researchers show that the amount of noise required to protect sensitive data from adversaries is far less with PAC Privacy than with other approaches. This could help engineers create machine-learning models that provably hide training data, while maintaining accuracy in real-world settings.

    “PAC Privacy exploits the uncertainty or entropy of the sensitive data in a meaningful way,  and this allows us to add, in many cases, an order of magnitude less noise. This framework allows us to understand the characteristics of arbitrary data processing and privatize it automatically without artificial modifications. While we are in the early days and we are doing simple examples, we are excited about the promise of this technique,” says Srini Devadas, the Edwin Sibley Webster Professor of Electrical Engineering and co-author of a new paper on PAC Privacy.

    Devadas wrote the paper with lead author Hanshen Xiao, an electrical engineering and computer science graduate student. The research will be presented at the International Cryptography Conference (Crypto 2023).

    Defining privacy

    A fundamental question in data privacy is: How much sensitive data could an adversary recover from a machine-learning model with noise added to it?

    Differential Privacy, one popular privacy definition, says privacy is achieved if an adversary who observes the released model cannot infer whether an arbitrary individual’s data was used in the training process. But provably preventing an adversary from distinguishing data usage often requires large amounts of noise to obscure it. This noise reduces the model’s accuracy.
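
    For contrast, the classic way to satisfy differential privacy is to calibrate noise to worst-case sensitivity, that is, how much any one record could move the released output, regardless of the actual data. A minimal sketch of that standard Laplace mechanism for releasing a mean (a well-known baseline, not part of the new paper):

    ```python
    # Standard Laplace mechanism: noise scaled to worst-case sensitivity / epsilon.
    import numpy as np

    rng = np.random.default_rng(0)
    values = rng.uniform(0, 1, size=1_000)      # each record assumed bounded in [0, 1]

    epsilon = 0.5                               # privacy budget
    sensitivity = 1.0 / len(values)             # max effect of one record on the mean
    dp_mean = values.mean() + rng.laplace(scale=sensitivity / epsilon)
    print(f"true mean {values.mean():.4f}, differentially private mean {dp_mean:.4f}")
    ```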

    PAC Privacy looks at the problem a bit differently. It characterizes how hard it would be for an adversary to reconstruct any part of randomly sampled or generated sensitive data after noise has been added, rather than only focusing on the distinguishability problem.

    For instance, if the sensitive data are images of human faces, differential privacy would focus on whether the adversary can tell if someone’s face was in the dataset. PAC Privacy, on the other hand, could look at whether an adversary could extract a silhouette — an approximation — that someone could recognize as a particular individual’s face.

    Once they established the definition of PAC Privacy, the researchers created an algorithm that automatically tells the user how much noise to add to a model to prevent an adversary from confidently reconstructing a close approximation of the sensitive data. This algorithm guarantees privacy even if the adversary has infinite computing power, Xiao says.

    To find the optimal amount of noise, the PAC Privacy algorithm relies on the uncertainty, or entropy, in the original data from the viewpoint of the adversary.

    This automatic technique takes samples randomly from a data distribution or a large data pool and runs the user’s machine-learning training algorithm on that subsampled data to produce an output learned model. It does this many times on different subsamplings and compares the variance across all outputs. This variance determines how much noise one must add — a smaller variance means less noise is needed.
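
    In skeleton form (a simplified illustration, not the paper’s exact calibration procedure), the loop looks like this: retrain on random subsamples, measure how much the released output moves, and size the added noise to that spread.

    ```python
    # Simplified illustration of the subsample-and-compare idea (not the paper's
    # exact noise calibration): low variance across subsampled outputs means
    # little noise needs to be added before release.
    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.normal(loc=5.0, scale=2.0, size=10_000)   # stand-in for sensitive records

    def train_and_release(sample):
        return sample.mean()      # stand-in for training a model and releasing an output

    outputs = np.array([
        train_and_release(rng.choice(data, size=1_000, replace=False))
        for _ in range(200)
    ])
    sigma = outputs.std()         # spread of the output across subsamples

    released = train_and_release(data) + rng.normal(scale=sigma)
    print(f"output std across subsamples: {sigma:.4f}")
    print(f"released (noised) output: {released:.4f}")
    ```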

    Algorithm advantages

    Different from other privacy approaches, the PAC Privacy algorithm does not need knowledge of the inner workings of a model, or the training process.

    When implementing PAC Privacy, a user can specify their desired level of confidence at the outset. For instance, perhaps the user wants a guarantee that an adversary will not be more than 1 percent confident that they have successfully reconstructed the sensitive data to within 5 percent of its actual value. The PAC Privacy algorithm automatically tells the user the optimal amount of noise that needs to be added to the output model before it is shared publicly, in order to achieve those goals.

    “The noise is optimal, in the sense that if you add less than we tell you, all bets could be off. But the effect of adding noise to neural network parameters is complicated, and we are making no promises on the utility drop the model may experience with the added noise,” Xiao says.

    This points to one limitation of PAC Privacy — the technique does not tell the user how much accuracy the model will lose once the noise is added. PAC Privacy also involves repeatedly training a machine-learning model on many subsamplings of data, so it can be computationally expensive.  

    To improve PAC Privacy, one approach is to modify a user’s machine-learning training process so it is more stable, meaning that the output model it produces does not change very much when the input data is subsampled from a data pool.  This stability would create smaller variances between subsample outputs, so not only would the PAC Privacy algorithm need to be run fewer times to identify the optimal amount of noise, but it would also need to add less noise.

    An added benefit of stabler models is that they often have less generalization error, which means they can make more accurate predictions on previously unseen data, a win-win situation between machine learning and privacy, Devadas adds.

    “In the next few years, we would love to look a little deeper into this relationship between stability and privacy, and the relationship between privacy and generalization error. We are knocking on a door here, but it is not clear yet where the door leads,” he says.

    “Obfuscating the usage of an individual’s data in a model is paramount to protecting their privacy. However, to do so can come at the cost of the data’s, and therefore the model’s, utility,” says Jeremy Goodsitt, senior machine learning engineer at Capital One, who was not involved with this research. “PAC provides an empirical, black-box solution, which can reduce the added noise compared to current practices while maintaining equivalent privacy guarantees. In addition, its empirical approach broadens its reach to more data-consuming applications.”

    This research is funded, in part, by DSTA Singapore, Cisco Systems, Capital One, and a MathWorks Fellowship.

  • Statistics, operations research, and better algorithms

    In this day and age, many companies and institutions are not just data-driven, but data-intensive. Insurers, health providers, government agencies, and social media platforms are all heavily dependent on data-rich models and algorithms to identify the characteristics of the people who use them, and to nudge their behavior in various ways.

    That doesn’t mean organizations are always using optimal models, however. Determining efficient algorithms is a research area of its own — and one where Rahul Mazumder happens to be a leading expert.

    Mazumder, an associate professor in the MIT Sloan School of Management and an affiliate of the Operations Research Center, works both to expand the techniques of model-building and to refine models that apply to particular problems. His work pertains to a wealth of areas, including statistics and operations research, with applications in finance, health care, advertising, online recommendations, and more.

    “There is engineering involved, there is science involved, there is implementation involved, there is theory involved, it’s at the junction of various disciplines,” says Mazumder, who is also affiliated with the Center for Statistics and Data Science and the MIT-IBM Watson AI Lab.

    There is also a considerable amount of practical-minded judgment, logic, and common-sense decision-making at play, in order to bring the right techniques to bear on any individual task.

    “Statistics is about having data coming from a physical system, or computers, or humans, and you want to make sense of the data,” Mazumder says. “And you make sense of it by building models because that gives some pattern to a dataset. But of course, there is a lot of subjectivity in that. So, there is subjectivity in statistics, but also mathematical rigor.”

    Over roughly the last decade, Mazumder, often working with co-authors, has published about 40 peer-reviewed papers, won multiple academic awards, collaborated with major companies about their work, and helped advise graduate students. For his research and teaching, Mazumder was granted tenure by MIT last year.

    From deep roots to new tools

    Mazumder grew up in Kolkata, India, where his father was a professor at the Indian Statistical Institute and his mother was a schoolteacher. Mazumder received his undergraduate and master’s degrees from the Indian Statistical Institute as well, although without really focusing on the same areas as his father, whose work was in fluid mechanics.

    For his doctoral work, Mazumder attended Stanford University, where he earned his PhD in 2012. After a year as a postdoc at MIT’s Operations Research Center, he joined the faculty at Columbia University, then moved to MIT in 2015.

    While Mazumder’s work has many facets, his research portfolio does have notable central achievements. Mazumder has helped combine ideas from two branches of optimization to facilitate addressing computational problems in statistics. One of these branches, discrete optimization, uses discrete variables — integers — to find the best candidate among a finite set of options. This can relate to operational efficiency: What is the shortest route someone might take while making a designated set of stops? Convex optimization, on the other hand, encompasses an array of algorithms that can obtain the best solution for what Mazumder calls “nicely behaved” mathematical functions. They are typically applied to optimize continuous decisions in financial portfolio allocation and health care outcomes, among other things.

    In some recent papers, such as “Fast best subset selection: Coordinate descent and local combinatorial optimization algorithms,” co-authored with Hussein Hazimeh and published in Operations Research in 2020, and in “Sparse regression at scale: branch-and-bound rooted in first-order optimization,” co-authored with Hazimeh and A. Saab and published in Mathematical Programming in 2022, Mazumder has found ways to combine ideas from the two branches.
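
    To give a flavor of the problem class these papers address, the sketch below solves a small best-subset (L0-constrained) regression with iterative hard thresholding, a simple stand-in for the coordinate-descent and branch-and-bound algorithms developed in the cited work.

    ```python
    # Toy best-subset (L0-constrained) regression solved by iterative hard
    # thresholding -- a simple stand-in for the methods in the cited papers.
    import numpy as np

    rng = np.random.default_rng(1)
    n, p, k = 200, 50, 5
    X = rng.normal(size=(n, p))
    beta_true = np.zeros(p)
    beta_true[:k] = 3.0                      # only the first k features matter
    y = X @ beta_true + 0.1 * rng.normal(size=n)

    def iht(X, y, k, n_iter=200):
        beta = np.zeros(X.shape[1])
        step = 1.0 / np.linalg.norm(X, 2) ** 2          # safe gradient step size
        for _ in range(n_iter):
            beta = beta + step * X.T @ (y - X @ beta)   # gradient step on least squares
            small = np.argsort(np.abs(beta))[:-k]       # all but the k largest coefficients
            beta[small] = 0.0                           # enforce the sparsity constraint
        return beta

    beta_hat = iht(X, y, k)
    print("selected features:", np.flatnonzero(beta_hat))   # expect [0 1 2 3 4]
    ```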

    “The tools and techniques we are using are new for the class of statistical problems because we are combining different developments in convex optimization and exploring that within discrete optimization,” Mazumder says.

    As new as these tools are, however, Mazumder likes working on techniques that “have old roots,” as he puts it. The two types of optimization methods were considered less separate in the 1950s or 1960s, he says, then grew apart.

    “I like to go back and see how things developed,” Mazumder says. “If I look back in history at [older] papers, it’s actually very fascinating. One thing was developed, another was developed, another was developed kind of independently, and after a while you see connections across them. If I go back, I see some parallels. And that actually helps in my thought process.”

    Predictions and parsimony

    Mazumder’s work is often aimed at simplifying the model or algorithm being applied to a problem. In some instances, bigger models would require enormous amounts of processing power, so simpler methods can provide equally good results while using fewer resources. In other cases — including at the finance and tech firms Mazumder has sometimes collaborated with — simpler models may work better because they have fewer moving parts.

    “There is a notion of parsimony involved,” Mazumder says. Genomic studies aim to find particularly influential genes; similarly, tech giants may benefit from simpler models of consumer behavior, not more complex ones, when they are recommending a movie to you.

    Very often, Mazumder says, modeling “is a very large-scale prediction problem. But we don’t think all the features or attributes are going to be important. A small collection is going to be important. Why? Because if you think about movies, there are not really 20,000 different movies; there are genres of movies. If you look at individual users, there are hundreds of millions of users, but really they are grouped together into cliques. Can you capture the parsimony in a model?”

    One part of his career that does not lend itself to parsimony, Mazumder feels, is crediting others. In conversation he emphasizes how grateful he is to his mentors in academia, and how much of his work is developed in concert with collaborators and, in particular, his students at MIT. 

    “I really, really like working with my students,” Mazumder says. “I perceive my students as my colleagues. Some of these problems, I thought they could not be solved, but then we just made it work. Of course, no method is perfect. But the fact we can use ideas from different areas in optimization with very deep roots, to address problems of core statistics and machine learning interest, is very exciting.”

    Teaching and doing research at MIT, Mazumder says, allows him to push forward on difficult problems — while also being pushed along by the interest and work of others around him.

    “MIT is a very vibrant community,” Mazumder says. “The thing I find really fascinating is, people here are very driven. They want to make a change in whatever area they are working in. And I also feel motivated to do this.”

  • Why big changes early in life can help later on

    Imagine moving from state to state while growing up in the U.S., transferring between high schools, and eventually attending college out of state. The first two events might seem disruptive, and the third involves departing a local community. And yet, these things may be exactly what helps some people thrive later in life.

    That’s one implication of a newly published study about social networks co-authored by an MIT professor, which finds that so-called long ties — connections between people who otherwise lack any mutual contacts — are highly associated with greater economic success in life. Those long ties are fostered partly by turning points such as moving between states, and switching schools.

    The study, based on a large quantity of Facebook data, both illuminates how productive social networks are structured and identifies specific life events that significantly shape people’s networks.

    “People who have more long ties [on Facebook], and who have stronger long ties, have better economic indicators,” says Dean Eckles, an MIT professor and co-author of a new paper detailing the study’s findings.

    “Our hope is that the study provides better evidence of this really strong relationship, at the scale of the entire U.S,” Eckles says. “There hasn’t really been this sort of investigation into those types of disruptive life events.”

    The paper, “Long ties, disruptive life events, and economic prosperity,” appears in open-access form in Proceedings of the National Academy of Sciences. The authors are Eaman Jahani PhD ’21, a postdoc and lecturer at the University of California at Berkeley, who received his doctorate from MIT’s Institute for Data, Systems, and Society, and the Statistics and Data Science Center; Samuel P. Fraiberger, a data scientist at the World Bank; Michael Bailey, an economist and research scientist manager at Meta Platforms (which operates Facebook); and Eckles, an associate professor of marketing at MIT Sloan School of Management. Jahani, who worked at Meta when the study was conducted, performed the initial research, and the aggregate data analysis protected the privacy of individuals in compliance with regulations.

    On the move

    In recent decades, scholars have often analyzed social networks while building on a 1973 study by Stanford University’s Mark Granovetter, “The Strength of Weak Ties,” one of the 10 most-cited social science papers of all time. In it, Granovetter postulated that a network’s “weak ties” — the people you know less well — are vital. Your best friends may have networks quite similar to your own, but your “weak ties” provide additional connections useful for employment, and more. Granovetter also edited this current paper for PNAS.

    To conduct the study, the scholars mapped all reciprocal interactions among U.S.-based Facebook accounts from December 2020 to June 2021, to build a data-rich picture of social networks in action. The researchers maintain a distinction between “long” and “short” ties; in this definition, long ties have no other mutual connections at all, while short ties have some.
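
    On a toy graph, the distinction is easy to compute (illustrative only; the study itself ran on aggregate Facebook data): an edge is a long tie when its two endpoints share no mutual contacts, and a short tie otherwise.

    ```python
    # Tiny sketch of the paper's tie definition on a made-up graph.
    import networkx as nx

    G = nx.Graph([
        ("ana", "ben"), ("ben", "caro"), ("caro", "ana"),   # a tight triangle
        ("ana", "dev"),                                     # dev knows nobody else here
    ])

    def long_ties(G, node):
        # Neighbors with whom the node shares zero mutual contacts.
        return [nbr for nbr in G[node]
                if len(list(nx.common_neighbors(G, node, nbr))) == 0]

    for node in G:
        print(node, "long ties:", long_ties(G, node))
    # ana-dev is a long tie; ana-ben and ana-caro are short ties (shared contact).
    ```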

    Ultimately the scholars found that, when assessing everyone who has lived in the same state since 2012, those who had previously moved among U.S. states had 13 percent more long ties on Facebook than those who had not. Similarly, people who had switched high schools had 10 percent more long ties than people who had not.

    Facebook does not have income data for its users, so the scholars used a series of proxy measures to evaluate financial success. People with more long ties tend to live in higher-income areas, have more internet-connected devices, use more expensive mobile phones, and make more donations to charitable causes, compared to people with fewer long ties.

    Additionally, the research evaluates whether or not moving among states, or switching schools, is itself what causes people to have more long ties. After all, it could be the case that families who move more often have qualities that lead family members to be more proactive about forging ties with people.

    To examine this, the research team analyzed a subgroup of Facebook users who had switched high schools only when their first high school closed — meaning it was not their choice to change. Those people had 6 percent more long ties than those who had attended the same high schools but not been forced to switch; given this common pool of school attendees forced into divergent circumstances, the evidence suggests that making the school change itself “shapes the proclivity to connect with different communities,” as the scholars write in the paper. 

    “It’s a plausibly random nudge,” Eckles says, “and we find the people who were exposed to these high school closures end up with more long ties. I think that is one of the compelling elements pointing toward a causal story here.”

    Three types of events, same trend

    As the scholars acknowledge in the paper, there are some limitations to the study. Because it focuses on Facebook interactions, the research does not account for offline activities that may sustain social networks. It is also likely that economic success itself shapes people’s social networks, and not just that networks help shape success. Some people may have opportunities to maintain long ties, through professional work or travel, that others do not.

    On the other hand, the study does uncover long-term social network ties that had not been evaluated before, and, as the authors write, “having three different types of events — involving different processes by which people are selected into the disruption — pointing to the same conclusions makes for a more robust and notable pattern.”

    Other scholars in the field believe the study is a notable piece of research. In a commentary on the paper also published in PNAS, Michael Macy, a sociology professor at Cornell University, writes that “the authors demonstrate the importance of contributing to cumulative knowledge by confirming hypotheses derived from foundational theory while at the same time elaborating on what was previously known by digging deeper into the underlying causal mechanisms. In short, the paper is must reading not only for area specialists but for social scientists across the disciplines.”

    For his part, Eckles emphasizes that the researchers are releasing anonymized data from the study, so that other scholars can build on it, and develop additional insights about social network structure, while complying with all privacy regulations.

    “We’ve released [that] data and made it public, and we’re really happy to be doing that,” Eckles says. “We want to make as much of this as possible open to others. That’s one of the things that I’m hoping is part of the broader impact of the paper.”

    Jahani worked as a contractor at Meta Platforms, which operates Facebook, while conducting the research. Eckles has received past funding from Meta, as well as conference sponsorship, and previously worked there before joining MIT.