More stories

  • MIT Center for Real Estate launches the Asia Real Estate Initiative

    To appreciate the explosive urbanization taking place in Asia, consider this: every 40 days, a city the size of Boston is built in Asia. Of the $24.7 trillion in real estate investment opportunities predicted by 2030 in emerging cities, $17.8 trillion (72 percent) will be in Asia. While this growth is exciting to the real estate industry, it brings with it attendant social and environmental issues.

    To promote a sustainable and innovative approach to this growth, leadership at the MIT Center for Real Estate (MIT CRE) recently established the Asia Real Estate Initiative (AREI), which aims to become a platform for industry leaders, entrepreneurs, and the academic community to find solutions to the practical concerns of real estate development across the region.

    “Behind the creation of this initiative is the understanding that Asia is a living lab for the study of future global urban development,” says Hashim Sarkis, dean of the MIT School of Architecture and Planning.

    An investment in cities of the future

    One of the areas in AREI’s scope of focus is connecting sustainability and technology in real estate.

    “We believe the real estate sector should work cooperatively with the energy, science, and technology sectors to solve climate challenges,” says Richard Lester, the Institute’s associate provost for international activities. “AREI will engage academics and industry leaders, nongovernment organizations, and civic leaders globally and in Asia, to advance knowledge sharing and research.”

    In its effort to understand how trends and new technologies will impact the future of real estate, AREI has received initial support from a prominent alumnus of MIT CRE who wishes to remain anonymous. The gift will support a cohort of researchers working on innovative technologies applicable to advancing real estate sustainability goals, with a special focus on the global and Asia markets. The call for applications is already under way, with AREI seeking to collaborate with scholars who have backgrounds in economics, finance, urban planning, technology, engineering, and other disciplines.

    “The research on real estate sustainability and technology could transform this industry and help invent the global real estate of the future,” says Professor Siqi Zheng, faculty director of MIT CRE and AREI faculty chair. “The pairing of real estate and technology often leads to innovative and differentiated real estate development strategies such as buildings that are green, smart, and healthy.”

    The initiative arrives at a key time to make a significant impact and cement a leadership role in real estate development across Asia. With nearly 40 years of pioneering research in the field, MIT CRE is positioned to help the industry increase its efficiency and social responsibility. Zheng, an established scholar with expertise on urban growth in fast-urbanizing regions, is the former president of the Asia Real Estate Society and sits on the board of the American Real Estate and Urban Economics Association. Her research has been supported by international institutions including the World Bank, the Asian Development Bank, and the Lincoln Institute of Land Policy.

    “The researchers in AREI are now working on three interrelated themes: the future of real estate and live-work-play dynamics; connecting sustainability and technology in real estate; and innovations in real estate finance and business,” says Zheng.

    The first theme has already yielded a book — “Toward Urban Economic Vibrancy: Patterns and Practices in Asia’s New Cities” — recently published by SA+P Press.

    Engaging thought leaders and global stakeholders

    AREI also plans to collaborate with counterparts in Asia to contribute to research, education, and industry dialogue to meet the challenges of sustainable city-making across the continent and identify areas for innovation. Traditionally, real estate has been a very local business with a lengthy value chain, according to Zhengzhen Tan, director of AREI. Most developers focused their career on one particular product type in one particular regional market. AREI is working to change that dynamic.

    “We want to create a cross-border dialogue within Asia and among Asia, North America, and European leaders to exchange knowledge and practices,” says Tan. “The real estate industry’s learning costs are very high compared to other sectors. Collective learning will reduce the cost of failure and have a significant impact on these global issues.”

    The 2021 United Nations Climate Change Conference in Glasgow shed additional light on environmental commitments being made by governments in Asia. With real estate representing 40 percent of global greenhouse gas emissions, the Asian real estate market is undergoing an urgent transformation to deliver on these commitments.

    “One of the most pressing calls is to get to net-zero emissions for real estate development and operation,” says Tan. “Real estate investors and developers are making short- and long-term choices that are locking in environmental footprints for the ‘decisive decade.’ We hope to inspire developers and investors to think differently and get out of their comfort zone.”

  • 3 Questions: Fotini Christia on racial equity and data science

    Fotini Christia is the Ford International Professor in the Social Sciences in the Department of Political Science, associate director of the Institute for Data, Systems, and Society (IDSS), and director of the Sociotechnical Systems Research Center (SSRC). Her research interests include issues of conflict and cooperation in the Muslim world, and she has conducted fieldwork in Afghanistan, Bosnia, Iran, the Palestinian Territories, Syria, and Yemen. She has co-organized the IDSS Research Initiative on Combatting Systemic Racism (ICSR), which works to bridge the social sciences, data science, and computation by bringing researchers from these disciplines together to address systemic racism across housing, health care, policing, education, employment, and other sectors of society.

    Q: What is the IDSS/ICSR approach to systemic racism research?

    A: The Research Initiative on Combatting Systemic Racism (ICSR) aims to seed and coordinate cross-disciplinary research to identify and overcome racially discriminatory processes and outcomes across a range of U.S. institutions and policy domains.

    Building off the extensive social science literature on systemic racism, the focus of this research initiative is to use big data to develop and harness computational tools that can help effect structural and normative change toward racial equity.

    The initiative aims to create a visible presence at MIT for cutting-edge computational research with a racial equity lens across societal domains, one that will attract and train students and scholars.

    The steering committee for this research initiative is composed of underrepresented minority faculty members from across MIT’s five schools and the MIT Schwarzman College of Computing. Members will serve as close advisors to the initiative as well as share the findings of our work beyond MIT’s campus. MIT Chancellor Melissa Nobles heads this committee.

    Q: What role can data science play in helping to effect change toward racial equity?

    A: Existing work has shown racial discrimination in the job market, in the criminal justice system, as well as in education, health care, and access to housing, among other places. It has also underlined how algorithms could further entrench such bias — be it in training data or in the people who build them. Data science tools can help not only identify, but also propose fixes for, racially inequitable outcomes that result from implicit or explicit biases in governing institutional practices in the public and private sectors, and more recently from the use of AI and algorithmic methods in decision-making.

    To that effect, this initiative will produce research that explores and collects the relevant big data across domains, paying close attention to the ways such data are collected, and will focus on improving and developing data-driven computational tools to address racial disparities in structures and institutions that have reproduced racially discriminatory outcomes in American society.

    The strong correlation between race, class, educational attainment, and various attitudes and behaviors in the American context can make it extremely difficult to rule out the influence of confounding factors. Thus, a key motivation for our research initiative is to highlight the importance of causal analysis using computational methods, and focus on understanding the opportunities of big data and algorithmic decision-making to address racial inequities and promote racial justice — beyond de-biasing algorithms. The intent is to also codify methodologies on equity-informed research practices and produce tools that are clear on the quantifiable expected social costs and benefits, as well as on the downstream effects on systemic racism more broadly.

    Q: What are some ways that the ICSR might conduct or follow up on research seeking real-world impact or policy change?

    A: This type of research has ethical and societal considerations at its core, especially as they pertain to historically disadvantaged groups in the U.S., and will be coordinated with and communicated to local stakeholders to drive relevant policy decisions. This initiative intends to establish connections to URM [underrepresented minority] researchers and students at underrepresented universities and to directly collaborate with them on these research efforts. To that effect, we are leveraging existing programs such as the MIT Summer Research Program (MSRP).

    To ensure that our research targets the right problems, brings a racial equity lens, and is positioned to effect policy change, we will also connect with community organizations in minority neighborhoods, which often bear the brunt of the direct and indirect effects of systemic racism, as well as with local government offices that work to address inequity in service provision in these communities. Our intent is to directly engage IDSS students with these organizations to help develop and test algorithmic tools for racial equity.

  • Injecting fairness into machine-learning models

    If a machine-learning model is trained using an unbalanced dataset, such as one that contains far more images of people with lighter skin than people with darker skin, there is a serious risk that the model’s predictions will be unfair when it is deployed in the real world.

    But this is only one part of the problem. MIT researchers have found that machine-learning models that are popular for image recognition tasks actually encode bias when trained on unbalanced data. This bias within the model is impossible to fix later on, even with state-of-the-art fairness-boosting techniques, and even when retraining the model with a balanced dataset.      

    So, the researchers came up with a technique to introduce fairness directly into the model’s internal representation itself. This enables the model to produce fair outputs even if it is trained on unfair data, which is especially important because there are very few well-balanced datasets for machine learning.

    The solution they developed not only leads to models that make more balanced predictions, but also improves their performance on downstream tasks like facial recognition and animal species classification.

    “In machine learning, it is common to blame the data for bias in models. But we don’t always have balanced data. So, we need to come up with methods that actually fix the problem with imbalanced data,” says lead author Natalie Dullerud, a graduate student in the Healthy ML Group of the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT.

    Dullerud’s co-authors include Kimia Hamidieh, a graduate student in the Healthy ML Group; Karsten Roth, a former visiting researcher who is now a graduate student at the University of Tubingen; Nicolas Papernot, an assistant professor in the University of Toronto’s Department of Electrical Engineering and Computer Science; and senior author Marzyeh Ghassemi, an assistant professor and head of the Healthy ML Group. The research will be presented at the International Conference on Learning Representations.

    Defining fairness

    The machine-learning technique the researchers studied is known as deep metric learning, which is a broad form of representation learning. In deep metric learning, a neural network learns the similarity between objects by mapping similar photos close together and dissimilar photos far apart. During training, this neural network maps images in an “embedding space” where a similarity metric between photos corresponds to the distance between them.

    For example, if a deep metric learning model is being used to classify bird species, it will map photos of goldfinches together in one part of the embedding space and cardinals together in another part of the embedding space. Once trained, the model can effectively measure the similarity of new images it hasn’t seen before. It would learn to cluster images of an unseen bird species close together, but farther from cardinals or goldfinches within the embedding space.
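
    As a concrete illustration of the setup (a minimal sketch using a standard triplet loss, not the authors' exact architecture or training recipe), an encoder maps each image to a point in the embedding space, and training pulls images of the same class together while pushing different classes apart:

        import torch
        import torch.nn as nn

        class Encoder(nn.Module):
            """Maps an image to a unit-length vector in the embedding space."""
            def __init__(self, embed_dim=64):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                    nn.Linear(32, embed_dim),
                )
            def forward(self, x):
                return nn.functional.normalize(self.net(x), dim=1)

        encoder = Encoder()
        loss_fn = nn.TripletMarginLoss(margin=0.2)   # anchor-positive distance should beat anchor-negative distance
        optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

        # Dummy batch: anchor and positive share a class; negative comes from another class.
        anchor, positive, negative = (torch.randn(8, 3, 64, 64) for _ in range(3))
        loss = loss_fn(encoder(anchor), encoder(positive), encoder(negative))
        loss.backward()
        optimizer.step()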

    The similarity metrics the model learns are very robust, which is why deep metric learning is so often employed for facial recognition, Dullerud says. But she and her colleagues wondered how to determine if a similarity metric is biased.

    “We know that data reflect the biases of processes in society. This means we have to shift our focus to designing methods that are better suited to reality,” says Ghassemi.

    The researchers defined two ways that a similarity metric can be unfair. First, using the example of facial recognition, the metric is unfair if it is more likely to embed individuals with darker-skinned faces closer to each other, even if they are not the same person, than it would if those images were of people with lighter-skinned faces. Second, it is unfair if the features it learns for measuring similarity are better for the majority group than for the minority group.

    The researchers ran a number of experiments on models with unfair similarity metrics and were unable to overcome the bias the model had learned in its embedding space.

    “This is quite scary because it is a very common practice for companies to release these embedding models and then people fine-tune them for some downstream classification task. But no matter what you do downstream, you simply can’t fix the fairness problems that were induced in the embedding space,” Dullerud says.

    Even if a user retrains the model on a balanced dataset for the downstream task, which is the best-case scenario for fixing the fairness problem, there are still performance gaps of at least 20 percent, she says.

    The only way to solve this problem is to ensure the embedding space is fair to begin with.

    Learning separate metrics

    The researchers’ solution, called Partial Attribute Decorrelation (PARADE), involves training the model to learn a separate similarity metric for a sensitive attribute, like skin tone, and then decorrelating the skin tone similarity metric from the targeted similarity metric. If the model is learning the similarity metrics of different human faces, it will learn to map similar faces close together and dissimilar faces far apart using features other than skin tone.

    Any number of sensitive attributes can be decorrelated from the targeted similarity metric in this way. And because the similarity metric for the sensitive attribute is learned in a separate embedding space, it is discarded after training so only the targeted similarity metric remains in the model.
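
    A hedged sketch of the decorrelation idea (illustrative only; the published PARADE method is more involved): learn one embedding for the target similarity metric and a separate one for the sensitive attribute, then penalize correlation between the two sets of pairwise similarities so the target metric carries as little attribute information as possible:

        import torch

        def pairwise_similarities(z):
            """Cosine similarities for every pair of embeddings in a batch."""
            z = torch.nn.functional.normalize(z, dim=1)
            sim = z @ z.T
            iu = torch.triu_indices(z.size(0), z.size(0), offset=1)
            return sim[iu[0], iu[1]]

        def decorrelation_penalty(target_z, attribute_z):
            """Squared correlation between the target and attribute similarity structures."""
            s_t = pairwise_similarities(target_z)
            s_a = pairwise_similarities(attribute_z)
            s_t, s_a = s_t - s_t.mean(), s_a - s_a.mean()
            corr = (s_t * s_a).sum() / (s_t.norm() * s_a.norm() + 1e-8)
            return corr ** 2

        # Schematic total loss: a metric loss on the target embedding, a metric loss on the
        # attribute embedding, and a penalty pushing their similarity structures apart.
        # loss = target_metric_loss + attribute_metric_loss + lam * decorrelation_penalty(target_z, attribute_z)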

    Their method is applicable to many situations because the user can control the amount of decorrelation between similarity metrics. For instance, if the model will be diagnosing breast cancer from mammogram images, a clinician likely wants some information about biological sex to remain in the final embedding space because it is much more likely that women will have breast cancer than men, Dullerud explains.

    They tested their method on two tasks, facial recognition and classifying bird species, and found that it reduced performance gaps caused by bias, both in the embedding space and in the downstream task, regardless of the dataset they used.

    Moving forward, Dullerud is interested in studying how to force a deep metric learning model to learn good features in the first place.

    “How do you properly audit fairness? That is an open question right now. How can you tell that a model is going to be fair, or that it is only going to be fair in certain situations, and what are those situations? Those are questions I am really interested in moving forward,” she says.

  • Using artificial intelligence to find anomalies hiding in massive datasets

    Identifying a malfunction in the nation’s power grid can be like trying to find a needle in an enormous haystack. Hundreds of thousands of interrelated sensors spread across the U.S. capture data on electric current, voltage, and other critical information in real time, often taking multiple recordings per second.

    Researchers at the MIT-IBM Watson AI Lab have devised a computationally efficient method that can automatically pinpoint anomalies in those data streams in real time. They demonstrated that their artificial intelligence method, which learns to model the interconnectedness of the power grid, is much better at detecting these glitches than some other popular techniques.

    Because the machine-learning model they developed does not require annotated data on power grid anomalies for training, it would be easier to apply in real-world situations where high-quality, labeled datasets are often hard to come by. The model is also flexible and can be applied to other situations where a vast number of interconnected sensors collect and report data, like traffic monitoring systems. It could, for example, identify traffic bottlenecks or reveal how traffic jams cascade.

    “In the case of a power grid, people have tried to capture the data using statistics and then define detection rules with domain knowledge to say that, for example, if the voltage surges by a certain percentage, then the grid operator should be alerted. Such rule-based systems, even empowered by statistical data analysis, require a lot of labor and expertise. We show that we can automate this process and also learn patterns from the data using advanced machine-learning techniques,” says senior author Jie Chen, a research staff member and manager of the MIT-IBM Watson AI Lab.

    The co-author is Enyan Dai, an MIT-IBM Watson AI Lab intern and graduate student at the Pennsylvania State University. This research will be presented at the International Conference on Learning Representations.

    Probing probabilities

    The researchers began by defining an anomaly as an event that has a low probability of occurring, like a sudden spike in voltage. They treat the power grid data as a probability distribution, so if they can estimate the probability densities, they can identify the low-density values in the dataset. Those data points which are least likely to occur correspond to anomalies.
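
    The core idea can be sketched in a few lines (a stand-in illustration: a kernel density estimate takes the place of the learned model described below, and the readings and threshold are synthetic):

        import numpy as np
        from sklearn.neighbors import KernelDensity

        rng = np.random.default_rng(0)
        normal_readings = rng.normal(loc=120.0, scale=2.0, size=(1000, 1))  # e.g., nominal voltage readings
        readings = np.vstack([normal_readings, [[135.0]], [[98.0]]])        # two injected spikes

        # Estimate the probability density of "normal" behavior, then flag the
        # lowest-density readings as anomalies.
        kde = KernelDensity(bandwidth=1.0).fit(normal_readings)
        log_density = kde.score_samples(readings)
        threshold = np.quantile(log_density, 0.01)
        anomalies = readings[log_density < threshold]
        print(anomalies.ravel())   # includes the injected spikes among the lowest-density points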

    Estimating those probabilities is no easy task, especially since each sample captures multiple time series, and each time series is a set of multidimensional data points recorded over time. Plus, the sensors that capture all that data are dependent on one another, meaning they are connected in a certain configuration and one sensor can sometimes impact others.

    To learn the complex conditional probability distribution of the data, the researchers used a special type of deep-learning model called a normalizing flow, which is particularly effective at estimating the probability density of a sample.

    They augmented that normalizing flow model using a type of graph, known as a Bayesian network, which can learn the complex, causal relationship structure between different sensors. This graph structure enables the researchers to see patterns in the data and estimate anomalies more accurately, Chen explains.

    “The sensors are interacting with each other, and they have causal relationships and depend on each other. So, we have to be able to inject this dependency information into the way that we compute the probabilities,” he says.

    This Bayesian network factorizes, or breaks down, the joint probability of the multiple time series data into less complex, conditional probabilities that are much easier to parameterize, learn, and evaluate. This allows the researchers to estimate the likelihood of observing certain sensor readings, and to identify those readings that have a low probability of occurring, meaning they are anomalies.
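
    In symbols, the factorization takes the standard Bayesian-network form, where pa(x_i) denotes the parents of sensor variable x_i in the learned graph (the threshold τ below is an illustrative detail, not a value from the paper):

        p(x_1, \ldots, x_n) \;=\; \prod_{i=1}^{n} p\bigl(x_i \mid \mathrm{pa}(x_i)\bigr),
        \qquad \text{flag } x \text{ as anomalous when } p(x) < \tau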

    Their method is especially powerful because this complex graph structure does not need to be defined in advance — the model can learn the graph on its own, in an unsupervised manner.

    A powerful technique

    They tested this framework by seeing how well it could identify anomalies in power grid data, traffic data, and water system data. The datasets they used for testing contained anomalies that had been identified by humans, so the researchers were able to compare the anomalies their model identified with real glitches in each system.

    Their model outperformed all the baselines by detecting a higher percentage of true anomalies in each dataset.

    “For the baselines, a lot of them don’t incorporate graph structure. That perfectly corroborates our hypothesis. Figuring out the dependency relationships between the different nodes in the graph is definitely helping us,” Chen says.

    Their methodology is also flexible. Armed with a large, unlabeled dataset, they can tune the model to make effective anomaly predictions in other situations, like traffic patterns.

    Once the model is deployed, it would continue to learn from a steady stream of new sensor data, adapting to possible drift of the data distribution and maintaining accuracy over time, says Chen.

    Though this particular project is close to its end, he looks forward to applying the lessons he learned to other areas of deep-learning research, particularly on graphs.

    Chen and his colleagues could use this approach to develop models that map other complex, conditional relationships. They also want to explore how they can efficiently learn these models when the graphs become enormous, perhaps with millions or billions of interconnected nodes. And rather than finding anomalies, they could also use this approach to improve the accuracy of forecasts based on datasets or streamline other classification techniques.

    This work was funded by the MIT-IBM Watson AI Lab and the U.S. Department of Energy.

  • Deep-learning technique predicts clinical treatment outcomes

    When it comes to treatment strategies for critically ill patients, clinicians want to be able to consider all their options and the timing of administration, and make the optimal decision for their patients. While clinician experience and study have helped them to be successful in this effort, not all patients are the same, and treatment decisions at this crucial time could mean the difference between patient improvement and quick deterioration. Therefore, it would be helpful for doctors to be able to take a patient’s previously known health status and received treatments and use that information to predict the patient’s health outcome under different treatment scenarios, in order to pick the best path.

    Now, a deep-learning technique, called G-Net, from researchers at MIT and IBM provides a window into causal counterfactual prediction, affording physicians the opportunity to explore how a patient might fare under different treatment plans. The foundation of G-Net is the g-computation algorithm, a causal inference method that estimates the effect of dynamic exposures in the presence of measured confounding variables — ones that may influence both treatments and outcomes. Unlike previous implementations of the g-computation framework, which have used linear modeling approaches, G-Net uses recurrent neural networks (RNN), which have node connections that allow them to better model temporal sequences with complex and nonlinear dynamics, like those found in the physiological and clinical time series data. In this way, physicians can develop alternative plans based on patient history and test them before making a decision.
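
    For readers who want the underlying estimand, here is a standard discrete-time statement of the g-formula, the quantity that g-computation (and hence G-Net) targets; the notation is the textbook form rather than anything taken from the paper. The expected outcome Y under a treatment strategy \bar{a}, given time-varying covariates \bar{L}, is

        E\bigl[Y_{\bar{a}}\bigr] \;=\; \sum_{\bar{l}} E\bigl[Y \mid \bar{A}=\bar{a},\, \bar{L}=\bar{l}\bigr] \prod_{k=0}^{K} p\bigl(l_k \mid \bar{l}_{k-1},\, \bar{a}_{k-1}\bigr)

    G-Net replaces the linear models traditionally used for these conditional densities with recurrent networks and, as described below, approximates the expectation by simulating many patient trajectories.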

    “Our ultimate goal is to develop a machine learning technique that would allow doctors to explore various ‘What if’ scenarios and treatment options,” says Li-wei Lehman, MIT research scientist in the MIT Institute for Medical Engineering and Science and an MIT-IBM Watson AI Lab project lead. “A lot of work has been done in terms of deep learning for counterfactual prediction but [it’s] been focusing on a point exposure setting,” or a static, time-varying treatment strategy, which doesn’t allow for adjustment of treatments as patient history changes. However, her team’s new prediction approach provides for treatment plan flexibility and chances for treatment alteration over time as patient covariate history and past treatments change. “G-Net is the first deep-learning approach based on g-computation that can predict both the population-level and individual-level treatment effects under dynamic and time varying treatment strategies.”

    The research, which was recently published in the Proceedings of Machine Learning Research, was co-authored by Rui Li MEng ’20, Stephanie Hu MEng ’21, former MIT postdoc Mingyu Lu MD, graduate student Yuria Utsumi, IBM research staff member Prithwish Chakraborty, IBM Research director of Hybrid Cloud Services Daby Sow, IBM data scientist Piyush Madan, IBM research scientist Mohamed Ghalwash, and IBM research scientist Zach Shahn.

    Tracking disease progression

    To build, validate, and test G-Net’s predictive abilities, the researchers considered the circulatory system in septic patients in the ICU. During critical care, doctors need to make trade-offs and judgement calls, such as ensuring the organs are receiving adequate blood supply without overworking the heart. For this, they could give intravenous fluids to patients to increase blood pressure; however, too much can cause edema. Alternatively, physicians can administer vasopressors, which act to contract blood vessels and raise blood pressure.

    In order to mimic this and demonstrate G-Net’s proof-of-concept, the team used CVSim, a mechanistic model of a human cardiovascular system that’s governed by 28 input variables characterizing the system’s current state, such as arterial pressure, central venous pressure, total blood volume, and total peripheral resistance, and modified it to simulate various disease processes (e.g., sepsis or blood loss) and effects of interventions (e.g., fluids and vasopressors). The researchers used CVSim to generate observational patient data for training and for “ground truth” comparison against counterfactual prediction. In their G-Net architecture, the researchers ran two RNNs to handle and predict variables that are continuous, meaning they can take on a range of values, like blood pressure, and categorical variables, which have discrete values, like the presence or absence of pulmonary edema. The researchers simulated the health trajectories of thousands of “patients” exhibiting symptoms under one treatment regime, let’s say A, for 66 timesteps, and used them to train and validate their model.

    Testing G-Net’s prediction capability, the team generated two counterfactual datasets. Each contained roughly 1,000 known patient health trajectories, which were created from CVSim using the same “patient” condition as the starting point under treatment A. Then at timestep 33, treatment changed to plan B or C, depending on the dataset. The team then generated 100 prediction trajectories for each of these 1,000 patients, whose treatment and medical history were known up until timestep 33, when a new treatment was administered. In these cases, the predictions agreed well with the “ground-truth” observations for individual patients and averaged population-level trajectories.
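
    In schematic terms, that procedure looks roughly like the following (a hedged sketch: predict_next and the treatment rule are placeholders standing in for G-Net's trained RNNs and the studied treatment plans, and the numbers are arbitrary):

        import numpy as np

        rng = np.random.default_rng(0)

        def predict_next(history, treatment):
            # Stand-in for G-Net's recurrent networks: sample the next covariate vector
            # given the history so far and the treatment just administered.
            return history[-1] + 0.1 * treatment + rng.normal(scale=0.05, size=history[-1].shape)

        def strategy_B(covariates):
            # Placeholder dynamic treatment rule: the dose depends on the current covariates.
            return 1.0 if covariates[0] < 0.0 else 0.5

        def counterfactual_trajectory(history, strategy, horizon=33, n_draws=100):
            draws = []
            for _ in range(n_draws):                      # Monte Carlo draws per patient
                traj = list(history)
                for _ in range(horizon):
                    a = strategy(traj[-1])                # treatment follows the counterfactual plan
                    traj.append(predict_next(traj, a))
                draws.append(traj[len(history):])
            return np.mean(draws, axis=0)                 # averaged predicted trajectory

        observed_history = [rng.normal(size=4) for _ in range(33)]   # covariates up to the switch point
        predicted_under_B = counterfactual_trajectory(observed_history, strategy_B)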

    A cut above the rest

    Since the g-computation framework is flexible, the researchers wanted to examine G-Net’s prediction using different nonlinear models — in this case, long short-term memory (LSTM) models, which are a type of RNN that can learn from previous data patterns or sequences — against the more classical linear models and a multilayer perceptron model (MLP), a type of neural network that can make predictions using a nonlinear approach. Following a similar setup as before, the team found that the error between the known and predicted cases was smallest in the LSTM models compared to the others. Since G-Net is able to model the temporal patterns of the patient’s ICU history and past treatment, whereas a linear model and MLP cannot, it was better able to predict the patient’s outcome.

    The team also compared G-Net’s prediction in a static, time-varying treatment setting against two state-of-the-art deep-learning based counterfactual prediction approaches, a recurrent marginal structural network (rMSN) and a counterfactual recurrent neural network (CRN), as well as a linear model and an MLP. For this, they investigated a model for tumor growth under no treatment, radiation, chemotherapy, and both radiation and chemotherapy scenarios. “Imagine a scenario where there’s a patient with cancer, and an example of a static regime would be if you only give a fixed dosage of chemotherapy, radiation, or any kind of drug, and wait until the end of your trajectory,” comments Lu. For these investigations, the researchers generated simulated observational data using tumor volume as the primary influence dictating treatment plans and demonstrated that G-Net outperformed the other models. One potential reason could be because g-computation is known to be more statistically efficient than rMSN and CRN, when models are correctly specified.

    While G-Net has done well with simulated data, more needs to be done before it can be applied to real patients. Since neural networks can be thought of as “black boxes” for prediction results, the researchers are beginning to investigate the uncertainty in the model to help ensure safety. In contrast to these approaches that recommend an “optimal” treatment plan without any clinician involvement, “as a decision support tool, I believe that G-Net would be more interpretable, since the clinicians would input treatment strategies themselves,” says Lehman, and “G-Net will allow them to be able to explore different hypotheses.” Further, the team has moved on to using real data from ICU patients with sepsis, bringing it one step closer to implementation in hospitals.

    “I think it is pretty important and exciting for real-world applications,” says Hu. “It’d be helpful to have some way to predict whether or not a treatment might work or what the effects might be — a quicker iteration process for developing these hypotheses for what to try, before actually trying to implement them in a years-long, potentially very involved and very invasive type of clinical trial.”

    This research was funded by the MIT-IBM Watson AI Lab.

  • Can machine-learning models overcome biased datasets?

    Artificial intelligence systems may be able to complete tasks quickly, but that doesn’t mean they always do so fairly. If the datasets used to train machine-learning models contain biased data, it is likely the system could exhibit that same bias when it makes decisions in practice.

    For instance, if a dataset contains mostly images of white men, then a facial-recognition model trained with these data may be less accurate for women or people with different skin tones.

    A group of researchers at MIT, in collaboration with researchers at Harvard University and Fujitsu Ltd., sought to understand when and how a machine-learning model is capable of overcoming this kind of dataset bias. They used an approach from neuroscience to study how training data affects whether an artificial neural network can learn to recognize objects it has not seen before. A neural network is a machine-learning model that mimics the human brain in the way it contains layers of interconnected nodes, or “neurons,” that process data.

    The new results show that diversity in training data has a major influence on whether a neural network is able to overcome bias, but at the same time dataset diversity can degrade the network’s performance. They also show that how a neural network is trained, and the specific types of neurons that emerge during the training process, can play a major role in whether it is able to overcome a biased dataset.

    “A neural network can overcome dataset bias, which is encouraging. But the main takeaway here is that we need to take into account data diversity. We need to stop thinking that if you just collect a ton of raw data, that is going to get you somewhere. We need to be very careful about how we design datasets in the first place,” says Xavier Boix, a research scientist in the Department of Brain and Cognitive Sciences (BCS) and the Center for Brains, Minds, and Machines (CBMM), and senior author of the paper.  

    Co-authors include former MIT graduate students Timothy Henry, Jamell Dozier, Helen Ho, Nishchal Bhandari, and Spandan Madan, a corresponding author who is currently pursuing a PhD at Harvard; Tomotake Sasaki, a former visiting scientist now a senior researcher at Fujitsu Research; Frédo Durand, a professor of electrical engineering and computer science at MIT and a member of the Computer Science and Artificial Intelligence Laboratory; and Hanspeter Pfister, the An Wang Professor of Computer Science at the Harvard School of Engineering and Applied Sciences. The research appears today in Nature Machine Intelligence.

    Thinking like a neuroscientist

    Boix and his colleagues approached the problem of dataset bias by thinking like neuroscientists. In neuroscience, Boix explains, it is common to use controlled datasets in experiments, meaning a dataset in which the researchers know as much as possible about the information it contains.

    The team built datasets that contained images of different objects in varied poses, and carefully controlled the combinations so some datasets had more diversity than others. In this case, a dataset had less diversity if it contained more images that showed objects from only one viewpoint. A more diverse dataset had more images showing objects from multiple viewpoints. Each dataset contained the same number of images.

    The researchers used these carefully constructed datasets to train a neural network for image classification, and then studied how well it was able to identify objects from viewpoints the network did not see during training (known as an out-of-distribution combination). 
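
    A minimal sketch of how such a controlled split might be constructed (the category and viewpoint names and the diversity knob are illustrative, not the paper's actual datasets): training sees only some viewpoints for each object category, and the held-out category-viewpoint combinations become the out-of-distribution test set:

        categories = ["car", "chair", "lamp", "plane"]
        viewpoints = ["front", "side", "top", "back"]

        def controlled_split(diversity):
            """diversity = number of viewpoints each category appears from in training."""
            train, test = [], []
            for i, cat in enumerate(categories):
                seen = {viewpoints[(i + k) % len(viewpoints)] for k in range(diversity)}
                for view in viewpoints:
                    (train if view in seen else test).append((cat, view))
            return train, test

        low_div_train, low_div_test = controlled_split(diversity=1)    # less diverse dataset
        high_div_train, high_div_test = controlled_split(diversity=3)  # more diverse dataset
        print(len(low_div_train), len(low_div_test))                   # 4 training combinations, 12 held out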

    For example, if researchers are training a model to classify cars in images, they want the model to learn what different cars look like. But if every Ford Thunderbird in the training dataset is shown from the front, when the trained model is given an image of a Ford Thunderbird shot from the side, it may misclassify it, even if it was trained on millions of car photos.

    The researchers found that if the dataset is more diverse — if more images show objects from different viewpoints — the network is better able to generalize to new images or viewpoints. Data diversity is key to overcoming bias, Boix says.

    “But it is not like more data diversity is always better; there is a tension here. When the neural network gets better at recognizing new things it hasn’t seen, then it will become harder for it to recognize things it has already seen,” he says.

    Testing training methods

    The researchers also studied methods for training the neural network.

    In machine learning, it is common to train a network to perform multiple tasks at the same time. The idea is that if a relationship exists between the tasks, the network will learn to perform each one better if it learns them together.

    But the researchers found the opposite to be true — a model trained separately for each task was able to overcome bias far better than a model trained for both tasks together.

    “The results were really striking. In fact, the first time we did this experiment, we thought it was a bug. It took us several weeks to realize it was a real result because it was so unexpected,” he says.

    They dove deeper inside the neural networks to understand why this occurs.

    They found that neuron specialization seems to play a major role. When the neural network is trained to recognize objects in images, it appears that two types of neurons emerge — one that specializes in recognizing the object category and another that specializes in recognizing the viewpoint.

    When the network is trained to perform tasks separately, those specialized neurons are more prominent, Boix explains. But if a network is trained to do both tasks simultaneously, some neurons become diluted and don’t specialize for one task. These unspecialized neurons are more likely to get confused, he says.
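
    One hedged way to picture that analysis (an illustration, not the paper's actual measure): score each unit by how much its average activation varies across object categories relative to how much it varies across viewpoints:

        import numpy as np

        def specialization_score(activations, category_labels, viewpoint_labels):
            """activations: (num_images, num_neurons). Returns one score per neuron:
            near 1 = category-selective, near 0 = viewpoint-selective."""
            cat_means = np.stack([activations[category_labels == c].mean(axis=0)
                                  for c in np.unique(category_labels)])
            view_means = np.stack([activations[viewpoint_labels == v].mean(axis=0)
                                   for v in np.unique(viewpoint_labels)])
            cat_var = cat_means.var(axis=0)     # per-neuron spread across categories
            view_var = view_means.var(axis=0)   # per-neuron spread across viewpoints
            return cat_var / (cat_var + view_var + 1e-8)

        # Toy usage with random data in place of real network activations.
        rng = np.random.default_rng(0)
        scores = specialization_score(rng.random((200, 32)),
                                      rng.integers(0, 4, 200),
                                      rng.integers(0, 4, 200))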

    “But the next question now is, how did these neurons get there? You train the neural network and they emerge from the learning process. No one told the network to include these types of neurons in its architecture. That is the fascinating thing,” he says.

    That is one area the researchers hope to explore with future work. They want to see if they can force a neural network to develop neurons with this specialization. They also want to apply their approach to more complex tasks, such as objects with complicated textures or varied illuminations.

    Boix is encouraged that a neural network can learn to overcome bias, and he is hopeful their work can inspire others to be more thoughtful about the datasets they are using in AI applications.

    This work was supported, in part, by the National Science Foundation, a Google Faculty Research Award, the Toyota Research Institute, the Center for Brains, Minds, and Machines, Fujitsu Research, and the MIT-Sensetime Alliance on Artificial Intelligence.

  • Research aims to mitigate chemical and biological airborne threats

    When the air harbors harmful matter, such as a virus or toxic chemical, it’s not always easy to promptly detect this danger. Whether spread maliciously or accidentally, how fast and how far could hazardous plumes travel through a city? What could emergency managers do in response?

    These were questions that scientists, public health officials, and government agencies probed with an air flow study conducted recently in New York City. At 120 locations across all five boroughs of the city, a team led by MIT Lincoln Laboratory collected safe test particles and gases released earlier in subway stations and on streets, tracking their journeys. The exercise measured how far the materials traveled and what their concentrations were when detected.

    The results are expected to improve air dispersion models, and in turn, help emergency planners improve response protocols if a real chemical or biological event were to take place. 

    The study was performed under the Department of Homeland Security (DHS) Science and Technology Directorate’s (S&T) Urban Threat Dispersion Project. The project is largely driven by Lincoln Laboratory’s Counter–Weapons of Mass Destruction (CWMD) Systems Group to improve homeland defenses against airborne threats. This exercise followed a similar, though much smaller, study in 2016 that focused mainly on the subway system within Manhattan.

    “The idea was to look at how particles and gases move through urban environments, starting with a focus on subways,” says Mandeep Virdi, a researcher in the CWMD Systems Group who helped lead both studies.

    The particles and gases used in the study are safe to disperse. The particulates are primarily composed of maltodextrin sugar, and have been used in prior public safety exercises. To enable researchers to track them, the particles are modified with small amounts of synthetic DNA that acts as a unique “barcode.” This barcode corresponds to the location from which the particle was released and the day of release. When these particles are later collected and analyzed, researchers can know exactly where they came from.

    The laboratory’s team led the process of releasing the particles and collecting the particle samples for analysis. A small sprayer is used to aerosolize the particles into the air. As the particles flow throughout the city, some get trapped in filters set up at the many dispersed collection sites. 

    To make processes more efficient for this large study, the team built special filter heads that rotated through multiple filters, saving time spent revisiting a collection site. They also developed a system using NFC (near-field communication) tags to simplify the cataloging and tracking of samples and equipment through a mobile app. 

    The researchers are still processing the approximately 5,000 samples that were collected over the five-day measurement campaign. The data will feed into existing particle dispersion models to improve simulations. One of these models, from Argonne National Laboratory, focuses on subway environments, and another model from Los Alamos National Laboratory simulates above-ground city environments, taking into account buildings and urban canyon air flows.

    Together, these models can show how a plume would travel from the subway to the streets, for example. These insights will enable emergency managers in New York City to develop more informed response strategies, as they did following the 2016 subway study.

    “The big question has always been, if there is a release and law enforcement can detect it in time, what do you actually do? Do you shut down the subway system? What can you do to mitigate those effects? Knowing that is the end goal,” Virdi says. 

    A new program, called the Chemical and Biological Defense Testbed, has just kicked off to further investigate those questions. Trina Vian at Lincoln Laboratory is leading this program, also under S&T funding.

    “Now that we’ve learned more about how material transports through the subway system, this test bed is looking at ways that we can mitigate that transport in a low-regret way,” Vian says.

    According to Vian, emergency managers don’t have many options other than to evacuate the area when a biological or chemical sensor is triggered. Yet current sensors tend to have high false-alarm rates, particularly in dirty environments. “You really can’t afford to make that evacuation call in error. Not only do you undermine people’s trust in the system, but also people can become injured, and it may actually be a non-threatening situation.”

    The goal of this test bed is to develop architectures and technologies that could allow for a range of appropriate response activities. For example, the team will be looking at ways through which air flow could be constrained or filtered in place, without disrupting traffic, while responders validate an alarm. They’ll also be testing the performance of new chemical and biological sensor technologies.

    Both Vian and Virdi stress the importance of collaboration for carrying out these large-scale studies, and in tackling the problem of airborne dangers in general. The test bed program is already benefiting by using equipment provided through the CWMD Alliance, a partnership of DHS and the Joint Program Executive Office for Chemical, Biological, Radiological and Nuclear Defense.

    A team of nearly 175 personnel worked together on the air flow exercise, spanning the Metropolitan Transportation Authority, New York City Transit, New York City Police Department, Port Authority of New York and New Jersey, New Jersey Transit, New York City Department of Environmental Protection, the New York City Department of Health and Mental Hygiene, the National Guard Weapons of Mass Destruction Civil Support Teams, the Environmental Protection Agency, and Department of Energy National Laboratories, in addition to S&T and Lincoln Laboratory.

    “It really was all about teamwork,” Virdi reflects. “Programs like this are why I came to Lincoln Laboratory. Seeing how the science is applied in a way that has real actionable results and how appreciative agencies are of what we’re doing has been rewarding. It’s exciting to see your program through, especially one as intense as this.”

  • Probing how proteins pair up inside cells

    Despite its minute size, a single cell contains billions of molecules that bustle around and bind to one another, carrying out vital functions. The human genome encodes about 20,000 proteins, most of which interact with partner proteins to mediate upwards of 400,000 distinct interactions. These partners don’t just latch onto one another haphazardly; they only bind to very specific companions that they must recognize inside the crowded cell. If they create the wrong pairings — or even the right pairings at the wrong place or wrong time — cancer or other diseases can ensue. Scientists are hard at work investigating these protein-protein relationships, in order to understand how they work, and potentially create drugs that disrupt or mimic them to treat disease.

    The average human protein is composed of approximately 400 building blocks called amino acids, which are strung together and folded into a complex 3D structure. Within this long string of building blocks, some proteins contain stretches of four to six amino acids called short linear motifs (SLiMs), which mediate protein-protein interactions. Despite their simplicity and small size, SLiMs and their binding partners facilitate key cellular processes. However, it’s been historically difficult to devise experiments to probe how SLiMs recognize their specific binding partners.

    To address this problem, a group led by Theresa Hwang PhD ’21 designed a screening method to understand how SLiMs selectively bind to certain proteins, and even distinguish between those with similar structures. Using the detailed information they gleaned from studying these interactions, the researchers created their own synthetic molecule capable of binding extremely tightly to a protein called ENAH, which is implicated in cancer metastasis. The team shared their findings in a pair of eLife studies, one published on Dec. 2, 2021, and the other published Jan. 25.

    “The ability to test hundreds of thousands of potential SLiMs for binding provides a powerful tool to explore why proteins prefer specific SLiM partners over others,” says Amy Keating, professor of biology and biological engineering and the senior author on both studies. “As we gain an understanding of the tricks that a protein uses to select its partners, we can apply these in protein design to make our own binders to modulate protein function for research or therapeutic purposes.”

    Most existing screens for SLiMs simply select for short, tight binders, while neglecting SLiMs that don’t grip their partner proteins quite as strongly. To survey SLiMs with a wide range of binding affinities, Keating, Hwang, and their colleagues developed their own screen called MassTitr.

    The researchers also suspected that the amino acids on either side of the SLiM’s core four-to-six amino acid sequence might play an underappreciated role in binding. To test their theory, they used MassTitr to screen the human proteome in longer chunks of 36 amino acids, in order to see which “extended” SLiMs would associate with the protein ENAH.

    ENAH, sometimes referred to as Mena, helps cells to move. This ability to migrate is critical for healthy cells, but cancer cells can co-opt it to spread. Scientists have found that reducing the amount of ENAH decreases the cancer cell’s ability to invade other tissues — suggesting that formulating drugs to disrupt this protein and its interactions could treat cancer.

    Thanks to MassTitr, the team identified 33 SLiM-containing proteins that bound to ENAH — 19 of which are potentially novel binding partners. They also discovered three distinct patterns of amino acids flanking core SLiM sequences that helped the SLiMs bind even tighter to ENAH. Of these extended SLiMs, one found in a protein called PCARE bound to ENAH with the highest known affinity of any SLiM to date.

    Next, the researchers combined a computer program called dTERMen with X-ray crystallography in order to understand how and why PCARE binds to ENAH over ENAH’s two nearly identical sister proteins (VASP and EVL). Hwang and her colleagues saw that the amino acids flanking PCARE’s core SLiM caused ENAH to change shape slightly when the two made contact, allowing the binding sites to latch onto one another. VASP and EVL, by contrast, could not undergo this structural change, so the PCARE SLiM did not bind to either of them as tightly.

    Inspired by this unique interaction, Hwang designed her own protein that bound to ENAH with unprecedented affinity and specificity. “It was exciting that we were able to come up with such a specific binder,” she says. “This work lays the foundation for designing synthetic molecules with the potential to disrupt protein-protein interactions that cause disease — or to help scientists learn more about ENAH and other SLiM-binding proteins.”  

    Ylva Ivarsson, a professor of biochemistry at Uppsala University who was not involved with the study, says that understanding how proteins find their binding partners is a question of fundamental importance to cell function and regulation. The two eLife studies, she explains, show that extended SLiMs play an underappreciated role in determining the affinity and specificity of these binding interactions.

    “The studies shed light on the idea that context matters, and provide a screening strategy for a variety of context-dependent binding interactions,” she says. “Hwang and co-authors have created valuable tools for dissecting the cellular function of proteins and their binding partners. Their approach could even inspire ENAH-specific inhibitors for therapeutic purposes.”

    Hwang’s biggest takeaway from the project is that things are not always as they seem: even short, simple protein segments can play complex roles in the cell. As she puts it: “We should really appreciate SLiMs more.”