More stories

  • Differences in T cells’ functional states determine resistance to cancer therapy

    Non-small cell lung cancer (NSCLC) is the most common type of lung cancer in humans. Some patients with NSCLC receive a therapy called immune checkpoint blockade (ICB) that helps kill cancer cells by reinvigorating a subset of immune cells called T cells, which are “exhausted” and have stopped working. However, only about 35 percent of NSCLC patients respond to ICB therapy. Stefani Spranger’s lab at the MIT Department of Biology explores the mechanisms behind this resistance, with the goal of inspiring new therapies to better treat NSCLC patients. In a new study published on Oct. 29 in Science Immunology, a team led by Spranger lab postdoc Brendan Horton revealed what causes T cells to be non-responsive to ICB — and suggests a possible solution.

    Scientists have long thought that the conditions within a tumor were responsible for determining when T cells stop working and become exhausted after being overstimulated or working for too long to fight a tumor. That’s why physicians prescribe ICB to treat cancer — ICB can invigorate the exhausted T cells within a tumor. However, Horton’s new experiments show that some ICB-resistant T cells stop working before they even enter the tumor. These T cells are not actually exhausted, but rather they become dysfunctional due to changes in gene expression that arise early during the activation of a T cell, which occurs in lymph nodes. Once activated, T cells differentiate into certain functional states, which are distinguishable by their unique gene expression patterns.

    The notion that the dysfunctional state that leads to ICB resistance arises before T cells enter the tumor is quite novel, says Spranger, the Howard S. and Linda B. Stern Career Development Professor, a member of the Koch Institute for Integrative Cancer Research, and the study’s senior author.

    “We show that this state is actually a preset condition, and that the T cells are already non-responsive to therapy before they enter the tumor,” she says. As a result, she explains, ICB therapies that work by reinvigorating exhausted T cells within the tumor are less likely to be effective. This suggests that combining ICB with other forms of immunotherapy that target T cells differently might be a more effective approach to help the immune system combat this subset of lung cancer.

    In order to determine why some tumors are resistant to ICB, Horton and the research team studied T cells in murine models of NSCLC. The researchers sequenced messenger RNA from the responsive and non-responsive T cells in order to identify any differences between the T cells. Supported in part by the Koch Institute Frontier Research Program, they used a technique called Seq-Well, developed in the lab of fellow Koch Institute member J. Christopher Love, the Raymond A. (1921) and Helen E. St. Laurent Professor of Chemical Engineering and a co-author of the study. The technique allows for the rapid gene expression profiling of single cells, which permitted Spranger and Horton to get a very granular look at the gene expression patterns of the T cells they were studying.

    Seq-Well revealed distinct patterns of gene expression between the responsive and non-responsive T cells. These differences, which are determined when the T cells assume their specialized functional states, may be the underlying cause of ICB resistance.

    Now that Horton and his colleagues had a possible explanation for why some T cells did not respond to ICB, they decided to see if they could help the ICB-resistant T cells kill the tumor cells. When analyzing the gene expression patterns of the non-responsive T cells, the researchers had noticed that these T cells had a lower expression of receptors for certain cytokines, small proteins that control immune system activity. To counteract this, the researchers treated lung tumors in murine models with extra cytokines. As a result, the previously non-responsive T cells were then able to fight the tumors — meaning that the cytokine therapy prevented, and potentially even reversed, the dysfunctionality.

    Administering cytokine therapy to human patients is not currently safe, because cytokines can cause serious side effects as well as a reaction called a “cytokine storm,” which can produce severe fevers, inflammation, fatigue, and nausea. However, there are ongoing efforts to figure out how to safely administer cytokines to specific tumors. In the future, Spranger and Horton suspect that cytokine therapy could be used in combination with ICB.

    “This is potentially something that could be translated into a therapeutic that could increase the therapy response rate in non-small cell lung cancer,” Horton says.

    Spranger agrees that this work will help researchers develop more innovative cancer therapies, especially because researchers have historically focused on T cell exhaustion rather than the earlier role that T cell functional states might play in cancer.

    “If T cells are rendered dysfunctional early on, ICB is not going to be effective, and we need to think outside the box,” she says. “There’s more evidence, and other labs are now showing this as well, that the functional state of the T cell actually matters quite substantially in cancer therapies.” To Spranger, this means that cytokine therapy “might be a therapeutic avenue” for NSCLC patients beyond ICB.

    Jeffrey Bluestone, the A.W. and Mary Margaret Clausen Distinguished Professor of Metabolism and Endocrinology at the University of California-San Francisco, who was not involved with the paper, agrees. “The study provides a potential opportunity to ‘rescue’ immunity in the NSCLC non-responder patients with appropriate combination therapies,” he says.

    This research was funded by the Pew-Stewart Scholars for Cancer Research, the Ludwig Center for Molecular Oncology, the Koch Institute Frontier Research Program through the Kathy and Curt Marble Cancer Research Fund, and the National Cancer Institute.

  • MIT welcomes nine MLK Visiting Professors and Scholars for 2021-22

    In its 31st year, the Martin Luther King Jr. (MLK) Visiting Professors and Scholars Program will host nine outstanding scholars from across the Americas. The flagship program honors the life and legacy of Martin Luther King Jr. by increasing the presence and recognizing the contributions of underrepresented minority scholars at MIT. Throughout the year, the cohort will enhance their scholarship through intellectual engagement with the MIT community and enrich the cultural, academic, and professional experience of students.

    The 2021-22 scholars

    Sanford Biggers is an interdisciplinary artist hosted by the Department of Architecture. His work is an interplay of narrative, perspective, and history that speaks to current social, political, and economic happenings while examining their contexts. His diverse practice positions him as a collaborator with the past through explorations of often-overlooked cultural and political narratives from American history. Through collaboration with his faculty host, Brandon Clifford, he will spend the year contributing to projects with Architecture; Art, Culture and Technology; the Transmedia Storytelling initiatives; and community workshops and engagement with local K-12 education.

    Kristen Dorsey is an assistant professor of engineering at Smith College. She will be hosted by the Program in Media Arts and Sciences at the MIT Media Lab. Her research focuses on the fabrication and characterization of microscale sensors and microelectromechanical systems. Dorsey tries to understand “why things go wrong” by investigating device reliability and stability. At MIT, Dorsey is interested in forging collaborations to consider issues of access and equity as they apply to wearable health care devices.

    Omolola “Lola” Eniola-Adefeso is the associate dean for graduate and professional education and associate professor of chemical engineering at the University of Michigan. She will join MIT’s Department of Chemical Engineering (ChemE). Eniola-Adefeso will work with Professor Paula Hammond on developing electrostatically assembled nanoparticle coatings that enable targeting of specific immune cell types. A co-founder and chief scientific officer of Asalyxa Bio, she is interested in the interactions between blood leukocytes and endothelial cells in vessel lumen lining, and how they change during inflammation response. Eniola-Adefeso will also work with the Diversity in Chemical Engineering (DICE) graduate student group in ChemE and the National Organization of Black Chemists and Chemical Engineers.

    Robert Gilliard Jr. is an assistant professor of chemistry at the University of Virginia and will join the MIT chemistry department, working closely with faculty host Christopher Cummins. His research focuses on various aspects of group 15 element chemistry. He was a founding member of the National Organization of Black Chemists and Chemical Engineers UGA section, and he has served as an American Chemical Society (ACS) Bridge Program mentor as well as an ACS Project Seed mentor. Gilliard has also collaborated with the Cleveland Public Library to expose diverse young scholars to STEM fields.

    Valencia Joyner Koomson ’98, MNG ’99 will return for the second semester of her appointment this fall in MIT’s Department of Electrical Engineering and Computer Science. Based at Tufts University, where she is an associate professor in the Department of Electrical and Computer Engineering, Koomson has focused her research on microelectronic systems for cell analysis and biomedical applications. In the past semester, she has served as a judge for the Black Alumni/ae of MIT Research Slam and worked closely with faculty host Professor Akintunde Akinwande.

    Luis Gilberto Murillo-Urrutia will continue his appointment in MIT’s Environmental Solutions Initiative. He has 30 years of experience in public policy design, implementation, and advocacy, most notably in the areas of sustainable regional development, environmental protection and management of natural resources, social inclusion, and peace building. At MIT, he has continued his research on environmental justice, with a focus on carbon policy and its impacts on Afro-descendant communities in Colombia.

    Sonya T. Smith was the first female professor of mechanical engineering at Howard University. She will join the Department of Aeronautics and Astronautics at MIT. Her research involves computational fluid dynamics and thermal management of electronics for air and space vehicles. She is looking forward to serving as a mentor to underrepresented students across MIT and fostering new research collaborations with her home lab at Howard.

    Lawrence Udeigwe is an associate professor of mathematics at Manhattan College and will join MIT’s Department of Brain and Cognitive Sciences. He plans to co-teach a graduate seminar course with Professor James DiCarlo to explore practical and philosophical questions regarding the use of simulations to build theories in neuroscience. Udeigwe also leads the Lorens Chuno group; as a singer-songwriter, his work tackles intersectionality issues faced by contemporary Africans.

    S. Craig Watkins is an internationally recognized expert in media and a professor at the University of Texas at Austin. He will join MIT’s Institute for Data, Systems, and Society to assist in researching the role of big data in enabling deep structural changes with regard to systemic racism. He will continue to expand on his work as founding director of the Institute for Media Innovation at the University of Texas at Austin, exploring the intersections of critical AI studies, critical race studies, and design. He will also work with MIT’s Center for Advanced Virtuality to develop computational systems that support social perspective-taking.

    Community engagement

    Throughout the 2021-22 academic year, MLK professors and scholars will be presenting their research at a monthly speaker series. Events will be held in an in-person/Zoom hybrid environment. All members of the MIT community are encouraged to attend and hear directly from this year’s cohort of outstanding scholars. To hear more about upcoming events, subscribe to their mailing list.

    On Sept. 15, all are invited to join the Institute Community and Equity Office in welcoming the scholars to campus by attending a welcome luncheon.

  • Using adversarial attacks to refine molecular energy predictions

    Neural networks (NNs) are increasingly being used to predict new materials, the rate and yield of chemical reactions, and drug-target interactions, among other applications. For these tasks, they are orders of magnitude faster than traditional methods such as quantum mechanical simulations.

    The price for this agility, however, is reliability. Because machine learning models only interpolate, they may fail when used outside the domain of training data.

    But the part that worried Rafael Gómez-Bombarelli, the Jeffrey Cheah Career Development Professor in the MIT Department of Materials Science and Engineering, and graduate students Daniel Schwalbe-Koda and Aik Rui Tan was that establishing the limits of these machine learning (ML) models is tedious and labor-intensive. 

    This is particularly true for predicting ‘‘potential energy surfaces” (PES), or the map of a molecule’s energy in all its configurations. These surfaces encode the complexities of a molecule into flatlands, valleys, peaks, troughs, and ravines. The most stable configurations of a system are usually in the deep pits — quantum mechanical chasms from which atoms and molecules typically do not escape. 

    In a recent Nature Communications paper, the research team presented a way to demarcate the “safe zone” of a neural network by using “adversarial attacks.” Adversarial attacks have been studied for other classes of problems, such as image classification, but this is the first time that they are being used to sample molecular geometries in a PES. 

    “People have been using uncertainty for active learning for years in ML potentials. The key difference is that they need to run the full ML simulation and evaluate if the NN was reliable, and if it wasn’t, acquire more data, retrain and re-simulate. Meaning that it takes a long time to nail down the right model, and one has to run the ML simulation many times,” explains Gómez-Bombarelli.

    The Gómez-Bombarelli lab at MIT works on a synergistic synthesis of first-principles simulation and machine learning that greatly speeds up this process. The expensive first-principles simulations are run for only a small fraction of the candidate molecules, and those data are fed into a neural network that learns to predict the same properties for the remaining molecules. They have successfully demonstrated these methods for a growing class of novel materials that includes catalysts for producing hydrogen from water, cheaper polymer electrolytes for electric vehicles, zeolites for molecular sieving, magnetic materials, and more.

    The challenge, however, is that these neural networks are only as smart as the data they are trained on.  Considering the PES map, 99 percent of the data may fall into one pit, totally missing valleys that are of more interest. 

    Such wrong predictions can have disastrous consequences — think of a self-driving car that fails to identify a person crossing the street.

    One way to estimate the uncertainty of a model is to run the same data through multiple versions of it.

    For this project, the researchers had multiple neural networks predict the potential energy surface from the same data. Where the network is fairly sure of the prediction, the variation between the outputs of different networks is minimal and the surfaces largely converge. When the network is uncertain, the predictions of different models vary widely, producing a range of outputs, any of which could be the correct surface. 

    The spread in the predictions of a “committee of neural networks” is the “uncertainty” at that point. A good model should not just give the best prediction, but also indicate the uncertainty about each prediction. It’s as if the neural network says, “this property for material A will have a value of X, and I’m highly confident about it.”
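    The committee idea can be sketched with a toy model. In this minimal NumPy example (an illustrative stand-in, not the paper's actual networks, energy surfaces, or hyperparameters), several small tanh networks are trained on the same one-dimensional "energy surface" data from different random initializations; where the data constrain them they agree, and far outside the training range they diverge.

```python
import numpy as np

def pes(x):
    """Toy 1-D "potential energy surface": a double well."""
    return x**4 - x**2

rng = np.random.default_rng(0)
x_train = rng.uniform(-1.2, 0.2, size=(64, 1))  # data covers only one well
y_train = pes(x_train)

def init_net(seed, hidden=16):
    r = np.random.default_rng(seed)
    return {"W1": r.normal(0, 1.0, (1, hidden)), "b1": np.zeros(hidden),
            "W2": r.normal(0, 0.3, (hidden, 1)), "b2": np.zeros(1)}

def forward(p, x):
    h = np.tanh(x @ p["W1"] + p["b1"])
    return h @ p["W2"] + p["b2"]

def train(p, x, y, lr=0.1, steps=3000):
    # Plain full-batch gradient descent on mean-squared error.
    for _ in range(steps):
        h = np.tanh(x @ p["W1"] + p["b1"])
        pred = h @ p["W2"] + p["b2"]
        g = 2.0 * (pred - y) / len(x)          # d(MSE)/d(pred)
        gh = (g @ p["W2"].T) * (1.0 - h**2)    # backprop through tanh
        p["W2"] -= lr * h.T @ g
        p["b2"] -= lr * g.sum(0)
        p["W1"] -= lr * x.T @ gh
        p["b1"] -= lr * gh.sum(0)
    return p

# Committee: identical training data, different random initializations.
committee = [train(init_net(s), x_train, y_train) for s in range(5)]

def committee_stats(x):
    preds = np.stack([forward(p, x) for p in committee])
    return preds.mean(axis=0), preds.std(axis=0)   # prediction, uncertainty

_, s_in = committee_stats(np.array([[-0.7]]))      # inside the training domain
_, s_out = committee_stats(np.array([[1.5]]))      # far outside it
print(s_in.item(), s_out.item())  # disagreement should be larger out of domain
```

    The standard deviation across committee members serves as the per-point uncertainty: small where the training data pin the fit down, large where the networks are free to extrapolate differently.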

    This could have been an elegant solution but for the sheer scale of the combinatorial space. “Each simulation (which is ground feed for the neural network) may take from tens to thousands of CPU hours,” explains Schwalbe-Koda. For the results to be meaningful, multiple models must be run over a sufficient number of points in the PES, an extremely time-consuming process. 

    Instead, the new approach only samples data points from regions of low prediction confidence, corresponding to specific geometries of a molecule. These molecules are then stretched or deformed slightly so that the uncertainty of the neural network committee is maximized. Additional data are computed for these molecules through simulations and then added to the initial training pool. 

    The neural networks are trained again, and a new set of uncertainties is calculated. This process is repeated until the uncertainty associated with various points on the surface becomes well-defined and cannot be decreased any further.
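    One iteration of this sample-label-retrain loop can be sketched in a few lines of NumPy. Everything here is a toy stand-in under stated assumptions: a 1-D analytic "energy surface" plays the role of the quantum simulation, a committee of random-feature least-squares regressors replaces the neural-network potentials, and the "deformation" is a finite-difference hill climb on the committee's disagreement, whereas the paper's attack differentiates the uncertainty with respect to atomic coordinates.

```python
import numpy as np

def pes(x):
    """Toy 1-D stand-in for an expensive quantum simulation."""
    return x**4 - x**2

rng = np.random.default_rng(1)
X = rng.uniform(-1.2, 0.2, 64)      # initial training pool covers one region
Y = pes(X)

# Committee: same data, but a different random feature map per member.
def make_member(seed, n_feat=120):
    r = np.random.default_rng(seed)
    return r.normal(0, 3.0, n_feat), r.normal(0, 1.0, n_feat)

members = [make_member(s) for s in range(6)]

def features(x, W, b):
    return np.tanh(np.outer(np.atleast_1d(x), W) + b)

def fit(X, Y):
    # Least-squares weights for each member, all fit to the same pool.
    return [np.linalg.lstsq(features(X, W, b), Y, rcond=None)[0]
            for (W, b) in members]

def committee_std(x, coefs):
    preds = np.stack([features(x, W, b) @ c
                      for (W, b), c in zip(members, coefs)])
    return float(np.std(preds, axis=0)[0])

coefs = fit(X, Y)

# Adversarial step: deform the "geometry" (here a single coordinate) uphill
# in committee disagreement, using a finite-difference gradient.
x = 0.2
for _ in range(50):
    g = (committee_std(x + 1e-4, coefs) - committee_std(x - 1e-4, coefs)) / 2e-4
    x += 0.01 * np.sign(g)          # small, bounded deformation per step
u_before = committee_std(x, coefs)

# "Run the simulation" only at the adversarial point, grow the pool, refit.
X, Y = np.append(X, x), np.append(Y, pes(x))
coefs = fit(X, Y)
u_after = committee_std(x, coefs)
print(u_before, u_after)   # uncertainty at the sampled point should shrink
```

    Repeating this loop until the disagreement stops shrinking mirrors the stopping condition described above, while labeling only the adversarially chosen points rather than a dense grid of geometries.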

    Gómez-Bombarelli explains, “We aspire to have a model that is perfect in the regions we care about (i.e., the ones that the simulation will visit) without having had to run the full ML simulation, by making sure that we make it very good in high-likelihood regions where it isn’t.”

    The paper presents several examples of this approach, including predicting complex supramolecular interactions in zeolites. These materials are cavernous crystals that act as molecular sieves with high shape selectivity. They find applications in catalysis, gas separation, and ion exchange, among others.

    Because performing simulations of large zeolite structures is very costly, the researchers show how their method can provide significant savings in computational simulations. They used more than 15,000 examples to train a neural network to predict the potential energy surfaces for these systems. Despite the large cost required to generate the dataset, the final results are mediocre, with only around 80 percent of the neural network-based simulations being successful. To improve the performance of the model using traditional active learning methods, the researchers calculated an additional 5,000 data points, which improved the performance of the neural network potentials to 92 percent.

    However, when the adversarial approach is used to retrain the neural networks, the authors saw a performance jump to 97 percent using only 500 extra points. That’s a remarkable result, the researchers say, especially considering that each of these extra points takes hundreds of CPU hours. 

    This could be the most realistic method to probe the limits of models that researchers use to predict the behavior of materials and the progress of chemical reactions. More