More stories

  • Differences in T cells’ functional states determine resistance to cancer therapy

    Non-small cell lung cancer (NSCLC) is the most common type of lung cancer in humans. Some patients with NSCLC receive a therapy called immune checkpoint blockade (ICB) that helps kill cancer cells by reinvigorating a subset of immune cells called T cells, which are “exhausted” and have stopped working. However, only about 35 percent of NSCLC patients respond to ICB therapy. Stefani Spranger’s lab at the MIT Department of Biology explores the mechanisms behind this resistance, with the goal of inspiring new therapies to better treat NSCLC patients. In a new study published on Oct. 29 in Science Immunology, a team led by Spranger lab postdoc Brendan Horton revealed what causes T cells to be non-responsive to ICB — and suggests a possible solution.

    Scientists have long thought that the conditions within a tumor were responsible for determining when T cells stop working and become exhausted after being overstimulated or working for too long to fight a tumor. That’s why physicians prescribe ICB to treat cancer — ICB can invigorate the exhausted T cells within a tumor. However, Horton’s new experiments show that some ICB-resistant T cells stop working before they even enter the tumor. These T cells are not actually exhausted, but rather they become dysfunctional due to changes in gene expression that arise early during the activation of a T cell, which occurs in lymph nodes. Once activated, T cells differentiate into certain functional states, which are distinguishable by their unique gene expression patterns.

    The notion that the dysfunctional state that leads to ICB resistance arises before T cells enter the tumor is quite novel, says Spranger, the Howard S. and Linda B. Stern Career Development Professor, a member of the Koch Institute for Integrative Cancer Research, and the study’s senior author.

    “We show that this state is actually a preset condition, and that the T cells are already non-responsive to therapy before they enter the tumor,” she says. As a result, she explains, ICB therapies that work by reinvigorating exhausted T cells within the tumor are less likely to be effective. This suggests that combining ICB with other forms of immunotherapy that target T cells differently might be a more effective approach to help the immune system combat this subset of lung cancer.

    To determine why some tumors are resistant to ICB, Horton and the research team studied T cells in murine models of NSCLC. The researchers sequenced messenger RNA from responsive and non-responsive T cells to identify differences between them. Supported in part by the Koch Institute Frontier Research Program, they used a technique called Seq-Well, developed in the lab of fellow Koch Institute member J. Christopher Love, the Raymond A. (1921) and Helen E. St. Laurent Professor of Chemical Engineering and a co-author of the study. The technique allows rapid gene expression profiling of single cells, which gave Spranger and Horton a very granular look at the gene expression patterns of the T cells they were studying.

    Seq-Well revealed distinct patterns of gene expression between the responsive and non-responsive T cells. These differences, which are determined when the T cells assume their specialized functional states, may be the underlying cause of ICB resistance.
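
    Single-cell comparisons of this kind typically come down to ranking the genes that differ between groups of cells. As a rough illustration of that step (not the study’s actual pipeline; the file name and group labels below are hypothetical), such an analysis is commonly run in Python with the scanpy library:

    ```python
    # Illustrative sketch, not the study's actual Seq-Well analysis:
    # rank genes that differ between responsive and non-responsive T cells.
    import scanpy as sc

    # Hypothetical input: a cell-by-gene matrix with a per-cell "response" label.
    adata = sc.read_h5ad("tcells_seqwell.h5ad")  # placeholder file name

    # Standard preprocessing: scale counts per cell, then log-transform.
    sc.pp.normalize_total(adata, target_sum=1e4)
    sc.pp.log1p(adata)

    # Rank genes distinguishing the two T-cell groups.
    sc.tl.rank_genes_groups(adata, groupby="response", method="wilcoxon")

    # Top differentially expressed genes for the non-responsive group.
    print(sc.get.rank_genes_groups_df(adata, group="non_responsive").head(20))
    ```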

    Now that Horton and his colleagues had a possible explanation for why some T cells did not respond to ICB, they decided to see if they could help the ICB-resistant T cells kill the tumor cells. When analyzing the gene expression patterns of the non-responsive T cells, the researchers had noticed that these T cells had a lower expression of receptors for certain cytokines, small proteins that control immune system activity. To counteract this, the researchers treated lung tumors in murine models with extra cytokines. As a result, the previously non-responsive T cells were then able to fight the tumors — meaning that the cytokine therapy prevented, and potentially even reversed, the dysfunctionality.

    Administering cytokine therapy to human patients is not currently safe, because cytokines can cause serious side effects as well as a reaction called a “cytokine storm,” which can produce severe fevers, inflammation, fatigue, and nausea. However, there are ongoing efforts to figure out how to safely administer cytokines to specific tumors. In the future, Spranger and Horton suspect that cytokine therapy could be used in combination with ICB.

    “This is potentially something that could be translated into a therapeutic that could increase the therapy response rate in non-small cell lung cancer,” Horton says.

    Spranger agrees that this work will help researchers develop more innovative cancer therapies, especially because researchers have historically focused on T cell exhaustion rather than the earlier role that T cell functional states might play in cancer.

    “If T cells are rendered dysfunctional early on, ICB is not going to be effective, and we need to think outside the box,” she says. “There’s more evidence, and other labs are now showing this as well, that the functional state of the T cell actually matters quite substantially in cancer therapies.” To Spranger, this means that cytokine therapy “might be a therapeutic avenue” for NSCLC patients beyond ICB.

    Jeffrey Bluestone, the A.W. and Mary Margaret Clausen Distinguished Professor of Metabolism and Endocrinology at the University of California, San Francisco, who was not involved with the paper, agrees. “The study provides a potential opportunity to ‘rescue’ immunity in the NSCLC non-responder patients with appropriate combination therapies,” he says.

    This research was funded by the Pew-Stewart Scholars for Cancer Research, the Ludwig Center for Molecular Oncology, the Koch Institute Frontier Research Program through the Kathy and Curt Marble Cancer Research Fund, and the National Cancer Institute.

  • Taming the data deluge

    An oncoming tsunami of data threatens to overwhelm huge data-rich research projects in areas ranging from the tiny neutrino to exploding supernovas, as well as the mysteries deep within the brain.

    When LIGO picks up a gravitational-wave signal from a distant collision of black holes or neutron stars, a clock starts ticking to capture the earliest possible light that may accompany it: time is of the essence in this race. Data collected from electrical sensors monitoring brain activity are outpacing computing capacity. Information from the Large Hadron Collider (LHC)’s smashed particle beams will soon exceed 1 petabit per second.

    To tackle this approaching data bottleneck in real time, a team of researchers from nine institutions, led by the University of Washington and including MIT, has received $15 million in funding to establish the Accelerated AI Algorithms for Data-Driven Discovery (A3D3) Institute. From MIT, the research team includes Philip Harris, assistant professor of physics, who will serve as deputy director of the A3D3 Institute; Song Han, assistant professor of electrical engineering and computer science, who will serve as a co-PI of A3D3; and Erik Katsavounidis, senior research scientist with the MIT Kavli Institute for Astrophysics and Space Research.

    Infused with this five-year Harnessing the Data Revolution Big Idea grant, and jointly funded by the Office of Advanced Cyberinfrastructure, A3D3 will focus on three data-rich fields: multi-messenger astrophysics, high-energy particle physics, and brain imaging neuroscience. By pairing AI algorithms with new processor types, A3D3 seeks to speed up AI algorithms for solving fundamental problems in collider physics, neutrino physics, astronomy, gravitational-wave physics, computer science, and neuroscience.

    “I am very excited about the new Institute’s opportunities for research in nuclear and particle physics,” says Laboratory for Nuclear Science Director Boleslaw Wyslouch. “Modern particle detectors produce an enormous amount of data, and we are looking for extraordinarily rare signatures. The application of extremely fast processors to sift through these mountains of data will make a huge difference in what we will measure and discover.”

    The seeds of A3D3 were planted in 2017, when Harris and his colleagues at Fermilab and CERN decided to integrate real-time AI algorithms to process the incredible rates of data at the LHC. Through email correspondence with Han, Harris’ team built a compiler, HLS4ML, that could run an AI algorithm in nanoseconds.

    “Before the development of HLS4ML, the fastest processing that we knew of was roughly a millisecond per AI inference, maybe a little faster,” says Harris. “We realized all the AI algorithms were designed to solve much slower problems, such as image and voice recognition. To get to nanosecond inference timescales, we recognized we could make smaller algorithms and rely on custom implementations with Field Programmable Gate Array (FPGA) processors in an approach that was largely different from what others were doing.”
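
    HLS4ML is available as open-source software. The snippet below is a minimal sketch of its Keras-to-FPGA workflow; the tiny model, FPGA part number, and output directory are placeholders, and exact signatures vary across hls4ml versions:

    ```python
    # Minimal sketch of the open-source hls4ml flow: convert a small Keras
    # model into an FPGA-ready HLS project. Placeholders throughout.
    import hls4ml
    from tensorflow import keras

    # A deliberately tiny network: nanosecond latency requires small models.
    model = keras.Sequential([
        keras.layers.Dense(32, activation="relu", input_shape=(16,)),
        keras.layers.Dense(5, activation="softmax"),
    ])

    # Generate a baseline conversion config from the Keras model.
    config = hls4ml.utils.config_from_keras_model(model, granularity="model")

    # Convert to an HLS project targeting a specific FPGA part.
    hls_model = hls4ml.converters.convert_from_keras_model(
        model,
        hls_config=config,
        output_dir="hls4ml_prj",      # placeholder output directory
        part="xcu250-figd2104-2L-e",  # placeholder Xilinx part number
    )

    # Compile a bit-accurate C simulation; predict() then mimics the firmware.
    hls_model.compile()
    # y = hls_model.predict(x)  # numpy in, numpy out
    ```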

    A few months later, Harris presented their research at a physics faculty meeting, where Katsavounidis became intrigued. Over coffee in Building 7, they discussed combining Harris’s FPGA work with Katsavounidis’s use of machine learning for finding gravitational waves. FPGAs and other new processor types, such as graphics processing units (GPUs), accelerate AI algorithms to more quickly analyze huge amounts of data.

    “I had worked with the first FPGAs that were out in the market in the early ’90s and have witnessed first-hand how they revolutionized front-end electronics and data acquisition in big high-energy physics experiments I was working on back then,” recalls Katsavounidis. “The ability to have them crunch gravitational-wave data has been in the back of my mind since joining LIGO over 20 years ago.”

    Two years ago they received their first grant, and the University of Washington’s Shih-Chieh Hsu joined in. The team initiated the Fast Machine Lab, published about 40 papers on the subject, built the group to about 50 researchers, and “launched a whole industry of how to explore a region of AI that has not been explored in the past,” says Harris. “We basically started this without any funding. We’ve been getting small grants for various projects over the years. A3D3 represents our first large grant to support this effort.”  

    “What makes A3D3 so special and suited to MIT is its exploration of a technical frontier, where AI is implemented not in high-level software, but rather in lower-level firmware, reconfiguring individual gates to address the scientific question at hand,” says Rob Simcoe, director of MIT Kavli Institute for Astrophysics and Space Research and the Francis Friedman Professor of Physics. “We are in an era where experiments generate torrents of data. The acceleration gained from tailoring reprogrammable, bespoke computers at the processor level can advance real-time analysis of these data to new levels of speed and sophistication.”

    The huge data from the Large Hadron Collider

    With data rates already exceeding 500 terabits per second, the LHC processes more data than any other scientific instrument on Earth. Its future aggregate data rates will soon exceed 1 petabit per second, the biggest in the world.

    “Through the use of AI, A3D3 aims to perform advanced analyses, such as anomaly detection and particle reconstruction, on all collisions, which happen 40 million times per second,” says Harris.

    The goal is to find within all of this data a way to identify the few collisions out of the 3.2 billion collisions per second that could reveal new forces, explain how dark matter is formed, and complete the picture of how fundamental forces interact with matter. Processing all of this information requires a customized computing system capable of interpreting the collider information within ultra-low latencies.  
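
    A common way to spot rare, unusual events in a torrent of mostly ordinary data is an autoencoder: a network trained to reconstruct typical events will reconstruct anomalous ones poorly. The sketch below is a generic illustration of that idea with synthetic data, not any A3D3 group’s actual model:

    ```python
    # Generic anomaly-detection sketch with synthetic data, not an A3D3 model:
    # an autoencoder trained on ordinary events flags poorly reconstructed ones.
    import numpy as np
    from tensorflow import keras

    n_features = 24  # hypothetical per-event feature vector

    ae = keras.Sequential([
        keras.layers.Dense(16, activation="relu", input_shape=(n_features,)),
        keras.layers.Dense(4, activation="relu"),   # bottleneck
        keras.layers.Dense(16, activation="relu"),
        keras.layers.Dense(n_features),
    ])
    ae.compile(optimizer="adam", loss="mse")

    # Train on "ordinary" events only (synthetic stand-ins here).
    ordinary = np.random.randn(10000, n_features).astype("float32")
    ae.fit(ordinary, ordinary, epochs=5, batch_size=256, verbose=0)

    # Score new events by reconstruction error; the worst are anomaly candidates.
    events = np.random.randn(100, n_features).astype("float32")
    errors = np.mean((ae.predict(events, verbose=0) - events) ** 2, axis=1)
    print("most anomalous events:", np.argsort(errors)[-5:])
    ```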

    “The challenge of running this on all of the hundreds of terabits per second in real time is daunting and requires a complete overhaul of how we design and implement AI algorithms,” says Harris. “With large increases in detector resolution leading to data rates that are even larger, the challenge of finding the one collision, among many, will become even more daunting.”

    The brain and the universe

    Thanks to advances in techniques such as medical imaging and electrical recordings from implanted electrodes, neuroscience is also gathering ever-larger amounts of data on how the brain’s neural networks process stimuli and coordinate motor functions. A3D3 plans to develop and implement high-throughput, low-latency AI algorithms to process, organize, and analyze massive neural datasets in real time, probing brain function to enable new experiments and therapies.

    With Multi-Messenger Astrophysics (MMA), A3D3 aims to quickly identify astronomical events by efficiently processing data from gravitational waves, gamma-ray bursts, and neutrinos picked up by telescopes and detectors. 

    The A3D3 team also includes a multidisciplinary group of 15 other researchers from the project lead, the University of Washington, along with Caltech, Duke University, Purdue University, UC San Diego, the University of Illinois Urbana-Champaign, the University of Minnesota, and the University of Wisconsin-Madison. The institute will include neutrino research at IceCube and DUNE and visible astronomy at the Zwicky Transient Facility, and will organize deep-learning workshops and boot camps to train students and researchers to contribute to the framework and widen the use of fast AI strategies.

    “We have reached a point where detector network growth will be transformative, both in terms of event rates and in terms of astrophysical reach and ultimately, discoveries,” says Katsavounidis. “‘Fast’ and ‘efficient’ is the only way to fight the ‘faint’ and ‘fuzzy’ that is out there in the universe, and the path for getting the most out of our detectors. A3D3 on one hand is going to bring production-scale AI to gravitational-wave physics and multi-messenger astronomy; but on the other hand, we aspire to go beyond our immediate domains and become the go-to place across the country for applications of accelerated AI to data-driven disciplines.”

  • Exploring the human stories behind the data

    Shaking in the back of a police cruiser, handcuffs digging into his wrists, Brian Williams was overwhelmed with fear. He had been pulled over, but before he was asked for his name, license, or registration, a police officer ordered him out of his car and into the back of the cruiser, saying into his radio, “Black male detained.” The officer’s explanation for these actions was: “for your safety and mine.”

    Williams walked away from the experience with two tickets, a pair of bruised wrists, and a desire to do everything in his power to prevent others from experiencing the utter powerlessness he had felt.

    Now an MIT senior majoring in biological engineering and minoring in Black studies, Williams has continued working to empower his community. Through experiences in and out of the classroom, he has leveraged his background in bioengineering to explore interests in public health and social justice, specifically looking at how the medical sector can uplift and support communities of color.

    Williams grew up in a close-knit family and community in Broward County, Florida, where he found comfort in the routine of Sunday church services, playing outside with friends, and cookouts on the weekends. Broward County was home to him — a home he felt deeply invested in and indebted to.

    “It takes a village. The Black community has invested a lot in me, and I have a lot to invest back in it,” he says.

    Williams initially focused on STEM subjects at MIT, but in his sophomore year, his interests in exploring data science and humanities research led him to an Undergraduate Research Opportunities Program (UROP) project in the Department of Political Science. Working with Professor Ariel White, he analyzed information on incarceration and voting rights, studied the behavior patterns of police officers, and screened 911 calls to identify correlations between how people described events and how the police responded to them.

    In the summer before his junior year, Williams also joined MIT’s Civic Data Design Lab, where he worked as a researcher for the Missing Data Project, which uses both journalism and data science to visualize statistics and humanize the people behind the numbers. As the project’s name suggests, there is often much to be learned from seeking out data that aren’t easily available. Using datasets and interviews describing how the pandemic affected Black communities, Williams and a team of researchers created a series called the Color of Covid, which told the stories behind the grim statistics on race and the pandemic.

    The following year, Williams undertook a research-and-development internship with the biopharmaceutical company Amgen in San Francisco, working on protein engineering to combat autoimmune diseases. Because this work was primarily in the lab, focusing on science-based applications, he saw it as an opportunity to ask himself: “Do I want to dedicate my life to this area of bioengineering?” He found the issue of social justice to be more compelling.

    At the same time, Williams was drawn toward tackling problems the local Black community was experiencing related to the pandemic. He found himself thinking deeply about how to educate the public, address disparities in case rates, and, above all, help people.

    Working through Amgen’s Black Employee Resource Group and its Diversity, Inclusion, and Belonging Team, Williams crafted a proposal, which the company adopted, for addressing Covid-19 vaccination misinformation in Black and Brown communities in San Mateo and San Francisco County. He paid special attention to how to frame vaccine hesitancy among members of these communities, understanding that a longstanding history of racism in scientific discovery and medicine led many Black and Brown people to distrust the entire medical industry.

    “Trying to meet people where they are is important,” Williams says.

    This experience reinforced the idea for Williams that he wanted to do everything in his power to uplift the Black community.

    “I think it’s only right that I go out and I shine bright because it’s not easy being Black. You know, you have to work twice as hard to get half as much,” he says.

    As the current political action co-chair of the MIT Black Students’ Union (BSU), Williams also works to inspire change on campus, promoting and participating in events that uplift the BSU. During his Amgen internship, he also organized the MIT Black History Month Takeover Series, which involved organizing eight events from February through the beginning of spring semester. These included promotions through social media and virtual meetings for students and faculty. For his leadership, he received the “We Are Family” award from the BSU executive board.

    Williams prioritizes community in everything he does, whether in the classroom, at a campus event, or spending time outside in local communities of color around Boston.

    “The things that really keep me going are the stories of other people,” says Williams, who is currently applying to a variety of postgraduate programs. After receiving MIT endorsement, he applied to the Rhodes and Marshall Fellowships; he also plans to apply to law school with a joint master’s degree in public health and policy.

    Ultimately, Williams hopes to bring his fight for racial justice to the policy level, looking at how a long, ongoing history of medical racism has led marginalized communities to mistrust current scientific endeavors. He wants to help bring about new legislation to fix old systems which disproportionately harm communities of color. He says he aims to be “an engineer of social solutions, one who reaches deep into their toolbox of social justice, pulling the levers of activism, advocacy, democracy, and legislation to radically change our world — to improve our social institutions at the root and liberate our communities.” While he understands this is a big feat, he sees his ambition as an asset.

    “I’m just another person with huge aspirations, and an understanding that you have to go get it if you want it,” he says. “You feel me? At the end of the day, this is just the beginning of my story. And I’m grateful to everyone in my life that’s helping me write it. Tap in.”

  • Making machine learning more useful to high-stakes decision makers

    The U.S. Centers for Disease Control and Prevention estimates that one in seven children in the United States experienced abuse or neglect in the past year. Child protective services agencies around the nation receive a high number of reports each year (about 4.4 million in 2019) of alleged neglect or abuse. With so many cases, some agencies are implementing machine learning models to help child welfare specialists screen cases and determine which to recommend for further investigation.

    But these models don’t do any good if the humans they are intended to help don’t understand or trust their outputs.

    Researchers at MIT and elsewhere launched a research project to identify and tackle machine learning usability challenges in child welfare screening. In collaboration with a child welfare department in Colorado, the researchers studied how call screeners assess cases, with and without the help of machine learning predictions. Based on feedback from the call screeners, they designed a visual analytics tool that uses bar graphs to show how specific factors of a case contribute to the predicted risk that a child will be removed from their home within two years.

    The researchers found that screeners are more interested in seeing how each factor, like the child’s age, influences a prediction, rather than understanding the computational basis of how the model works. Their results also show that even a simple model can cause confusion if its features are not described with straightforward language.

    These findings could be applied to other high-risk fields where humans use machine learning models to help them make decisions, but lack data science experience, says Kalyan Veeramachaneni, principal research scientist in the Laboratory for Information and Decision Systems (LIDS) and senior author of the paper.

    “Researchers who study explainable AI, they often try to dig deeper into the model itself to explain what the model did. But a big takeaway from this project is that these domain experts don’t necessarily want to learn what machine learning actually does. They are more interested in understanding why the model is making a different prediction than what their intuition is saying, or what factors it is using to make this prediction. They want information that helps them reconcile their agreements or disagreements with the model, or confirms their intuition,” he says.

    Co-authors include electrical engineering and computer science PhD student Alexandra Zytek, who is the lead author; postdoc Dongyu Liu; and Rhema Vaithianathan, professor of economics and director of the Center for Social Data Analytics at the Auckland University of Technology and professor of social data analytics at the University of Queensland. The research will be presented later this month at the IEEE Visualization Conference.

    Real-world research

    The researchers began the study more than two years ago by identifying seven factors that make a machine learning model less usable, including lack of trust in where predictions come from and disagreements between user opinions and the model’s output.

    With these factors in mind, Zytek and Liu flew to Colorado in the winter of 2019 to learn firsthand from call screeners in a child welfare department. This department is implementing a machine learning system developed by Vaithianathan that generates a risk score for each report, predicting the likelihood the child will be removed from their home. That risk score is based on more than 100 demographic and historic factors, such as the parents’ ages and past court involvements.

    “As you can imagine, just getting a number between one and 20 and being told to integrate this into your workflow can be a bit challenging,” Zytek says.

    They observed how teams of screeners process cases in about 10 minutes and spend most of that time discussing the risk factors associated with the case. That inspired the researchers to develop a case-specific details interface, which shows how each factor influenced the overall risk score using color-coded, horizontal bar graphs that indicate the magnitude of the contribution in a positive or negative direction.
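
    For a linear scoring model, such per-factor contributions have a simple closed form: each feature contributes its weight times the case’s deviation from the average on that feature. The snippet below is an illustrative mock-up of that kind of display, not the researchers’ actual tool; the features, weights, and values are invented:

    ```python
    # Illustrative mock-up, not the actual screening tool: per-factor
    # contributions for a linear risk model, shown as horizontal bars.
    # For a linear model, feature i shifts this case's score away from the
    # average score by coef[i] * (x[i] - mean[i]).
    import numpy as np
    import matplotlib.pyplot as plt

    features = ["child age", "past referrals", "court involvement", "parent age"]
    coef = np.array([-0.30, 0.80, 0.60, -0.10])   # invented model weights
    x = np.array([4.0, 3.0, 1.0, 29.0])           # invented current case
    mean = np.array([8.0, 1.2, 0.4, 33.0])        # invented dataset averages

    contrib = coef * (x - mean)  # signed contribution of each factor

    order = np.argsort(contrib)
    colors = ["tab:red" if c > 0 else "tab:blue" for c in contrib[order]]
    plt.barh(np.array(features)[order], contrib[order], color=colors)
    plt.axvline(0, color="black", linewidth=0.8)
    plt.xlabel("Contribution to predicted risk (red raises, blue lowers)")
    plt.tight_layout()
    plt.show()
    ```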

    Based on observations and detailed interviews, the researchers built four additional interfaces that provide explanations of the model, including one that compares a current case to past cases with similar risk scores. Then they ran a series of user studies.

    The studies revealed that more than 90 percent of the screeners found the case-specific details interface to be useful, and it generally increased their trust in the model’s predictions. On the other hand, the screeners did not like the case comparison interface. While the researchers thought this interface would increase trust in the model, screeners were concerned it could lead to decisions based on past cases rather than the current report.   

    “The most interesting result to me was that, the features we showed them — the information that the model uses — had to be really interpretable to start. The model uses more than 100 different features in order to make its prediction, and a lot of those were a bit confusing,” Zytek says.

    Keeping the screeners in the loop throughout the iterative process helped the researchers make decisions about what elements to include in the machine learning explanation tool, called Sibyl.

    As they refined the Sibyl interfaces, the researchers were careful to consider how providing explanations could contribute to some cognitive biases, and even undermine screeners’ trust in the model.

    For instance, since explanations are based on averages in a database of child abuse and neglect cases, having three past abuse referrals may actually decrease the risk score of a child, since averages in this database may be far higher. A screener may see that explanation and decide not to trust the model, even though it is working correctly, Zytek explains. And because humans tend to put more emphasis on recent information, the order in which the factors are listed could also influence decisions.

    Improving interpretability

    Based on feedback from call screeners, the researchers are working to tweak the explanation model so the features that it uses are easier to explain.

    Moving forward, they plan to enhance the interfaces they’ve created based on additional feedback and then run a quantitative user study to track the effects on decision making with real cases. Once those evaluations are complete, they can prepare to deploy Sibyl, Zytek says.

    “It was especially valuable to be able to work so actively with these screeners. We got to really understand the problems they faced. While we saw some reservations on their part, what we saw more of was excitement about how useful these explanations were in certain cases. That was really rewarding,” she says.

    This work is supported, in part, by the National Science Foundation.

  • One autonomous taxi, please

    If you don’t get seasick, an autonomous boat might be the right mode of transportation for you. 

    Scientists from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Senseable City Laboratory, together with Amsterdam Institute for Advanced Metropolitan Solutions (AMS Institute) in the Netherlands, have now created the final project in their self-navigating trilogy: a full-scale, fully autonomous robotic boat that’s ready to be deployed along the canals of Amsterdam. 

    “Roboat” has come a long way since the team first started prototyping small vessels in the MIT pool in late 2015. Last year, the team released its half-scale model, 2 meters long, which demonstrated promising navigational prowess.

    This year, two full-scale Roboats were launched, proving more than just a proof of concept: these craft can comfortably carry up to five people, collect waste, deliver goods, and provide on-demand infrastructure.

    The boat looks futuristic — it’s a sleek combination of black and gray with two seats that face each other, with orange block letters on the sides bearing the makers’ names. It’s a fully electric boat with a battery the size of a small chest, enabling up to 10 hours of operation and wireless charging capabilities.

    Autonomous Roboats set sail in the Amsterdam canals and can comfortably carry up to five people, collect waste, deliver goods, and provide on-demand infrastructure.

    “We now have higher precision and robustness in the perception, navigation, and control systems, including new functions, such as close-proximity approach mode for latching capabilities, and improved dynamic positioning, so the boat can navigate real-world waters,” says Daniela Rus, MIT professor of electrical engineering and computer science and director of CSAIL. “Roboat’s control system is adaptive to the number of people in the boat.” 

    To swiftly navigate the bustling waters of Amsterdam, Roboat needs a meticulous fusion of proper navigation, perception, and control software. 

    Using GPS, the boat autonomously decides on a safe route from A to B, while continuously scanning the environment to avoid collisions with objects, such as bridges, pillars, and other boats.

    To autonomously determine a free path and avoid crashing into objects, Roboat uses lidar and a number of cameras to enable a 360-degree view. This bundle of sensors is referred to as the “perception kit” and lets Roboat understand its surroundings. When the perception system picks up an unseen object, like a canoe, for example, the algorithm flags the item as “unknown.” When the team later looks at the collected data from the day, the object is manually selected and can be tagged as “canoe.”

    The control algorithms — similar to ones used for self-driving cars — function a little like a coxswain giving orders to rowers, translating a given path into instructions for the “thrusters,” which are the propellers that help the boat move.
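
    As a hypothetical sketch of that translation step (not Roboat’s actual control code), a twin-thruster vessel can follow a waypoint by mixing a cruise command with a turn command proportional to its heading error:

    ```python
    # Hypothetical sketch, not Roboat's controller: steer a twin-thruster
    # vessel toward the next waypoint by differential thrust.
    import math

    def thruster_commands(x, y, heading, wx, wy, k_turn=1.5, cruise=0.6):
        """Return (left, right) thrust in [-1, 1] to approach waypoint (wx, wy)."""
        desired = math.atan2(wy - y, wx - x)             # bearing to waypoint
        error = math.atan2(math.sin(desired - heading),
                           math.cos(desired - heading))  # wrap to [-pi, pi]
        turn = k_turn * error                            # proportional steering
        left = max(-1.0, min(1.0, cruise - turn))
        right = max(-1.0, min(1.0, cruise + turn))
        return left, right

    # Boat at the origin facing east, waypoint to the northeast:
    # the right thruster spins faster, turning the boat left toward it.
    print(thruster_commands(0.0, 0.0, 0.0, 10.0, 10.0))
    ```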

    If you think the boat feels slightly futuristic, its latching mechanism is one of its most impressive feats: small cameras on the boat guide it to the docking station, or other boats, when they detect specific QR codes. “The system allows Roboat to connect to other boats, and to the docking station, to form temporary bridges to alleviate traffic, as well as floating stages and squares, which wasn’t possible with the last iteration,” says Carlo Ratti, professor of the practice in the MIT Department of Urban Studies and Planning (DUSP) and director of the Senseable City Lab. 

    Roboat, by design, is also versatile. The team created a universal “hull” design — that’s the part of the boat that rides both in and on top of the water. While regular boats have unique hulls, designed for specific purposes, Roboat has a universal hull design where the base is the same, but the top decks can be switched out depending on the use case.

    “As Roboat can perform its tasks 24/7, and without a skipper on board, it adds great value for a city. However, for safety reasons it is questionable if reaching level A autonomy is desirable,” says Fabio Duarte, a principal research scientist in DUSP and lead scientist on the project. “Just like a bridge keeper, an onshore operator will monitor Roboat remotely from a control center. One operator can monitor over 50 Roboat units, ensuring smooth operations.”

    The next step for Roboat is to pilot the technology in the public domain. “The historic center of Amsterdam is the perfect place to start, with its capillary network of canals suffering from contemporary challenges, such as mobility and logistics,” says Stephan van Dijk, director of innovation at AMS Institute. 

    Previous iterations of Roboat have been presented at the IEEE International Conference on Robotics and Automation. The boats will be unveiled on Oct. 28 in the waters of Amsterdam. 

    Ratti, Rus, Duarte, and van Dijk worked on the project alongside Andrew Whittle, MIT’s Edmund K. Turner Professor in civil and environmental engineering; Dennis Frenchman, professor in MIT’s Department of Urban Studies and Planning; and Ynse Deinema of AMS Institute. The full team can be found at Roboat’s website. The project is a joint collaboration with AMS Institute. The City of Amsterdam is a project partner.

  • New integrative computational neuroscience center established at MIT’s McGovern Institute

    With the tools of modern neuroscience, researchers can peer into the brain with unprecedented accuracy. Recording devices listen in on the electrical conversations between neurons, picking up the voices of hundreds of cells at a time. Genetic tools allow us to focus on specific types of neurons based on their molecular signatures. Microscopes zoom in to illuminate the brain’s circuitry, capturing thousands of images of elaborately branched dendrites. Functional MRIs detect changes in blood flow to map activity within a person’s brain, generating a complete picture by compiling hundreds of scans.

    This deluge of data provides insights into brain function and dynamics at different levels — molecules, cells, circuits, and behavior — but the insights remain compartmentalized in separate research silos for each level. An innovative new center at MIT’s McGovern Institute for Brain Research aims to leverage them into powerful revelations of the brain’s inner workings.

    The K. Lisa Yang Integrative Computational Neuroscience (ICoN) Center will create advanced mathematical models and computational tools to synthesize the deluge of data across scales and advance our understanding of the brain and mental health.

    The center, funded by a $24 million donation from philanthropist Lisa Yang and led by McGovern Institute Associate Investigator Ila Fiete, will take a collaborative approach to computational neuroscience, integrating cutting-edge modeling techniques and data from MIT labs to explain brain function at every level, from the molecular to the behavioral.

    “Our goal is that sophisticated, truly integrated computational models of the brain will make it possible to identify how ‘control knobs’ such as genes, proteins, chemicals, and environment drive thoughts and behavior, and to make inroads toward urgent unmet needs in understanding and treating brain disorders,” says Fiete, who is also a brain and cognitive sciences professor at MIT.

    “Driven by technologies that generate massive amounts of data, we are entering a new era of translational neuroscience research,” says Yang, whose philanthropic investment in MIT research now exceeds $130 million. “I am confident that the multidisciplinary expertise convened by the ICoN center will revolutionize how we synthesize this data and ultimately understand the brain in health and disease.”

    Connecting the data

    It is impossible to separate the molecules in the brain from their effects on behavior — although those aspects of neuroscience have traditionally been studied independently, by researchers with vastly different expertise. The ICoN Center will eliminate the divides, bringing together neuroscientists and software engineers to deal with all types of data about the brain.

    “The center’s highly collaborative structure, which is essential for unifying multiple levels of understanding, will enable us to recruit talented young scientists eager to revolutionize the field of computational neuroscience,” says Robert Desimone, director of the McGovern Institute. “It is our hope that the ICoN Center’s unique research environment will truly demonstrate a new academic research structure that catalyzes bold, creative research.”

    To foster interdisciplinary collaboration, every postdoc and engineer at the center will work with multiple faculty mentors. In order to attract young scientists and engineers to the field of computational neuroscience, the center will also provide four graduate fellowships to MIT students each year in perpetuity. Interacting closely with three scientific cores, engineers and fellows will develop computational models and technologies for analyzing molecular data, neural circuits, and behavior, such as tools to identify patterns in neural recordings or automate the analysis of human behavior to aid psychiatric diagnoses. These technologies and models will be instrumental in synthesizing data into knowledge and understanding.

    Center priorities

    In its first five years, the ICoN Center will prioritize four areas of investigation: episodic memory and exploration, including functions like navigation and spatial memory; complex or stereotypical behavior, such as the perseverative behaviors associated with autism and obsessive-compulsive disorder; cognition and attention; and sleep. Models of complex behavior will be created in collaboration with clinicians and researchers at Children’s Hospital of Philadelphia.

    The goal, Fiete says, is to model the neuronal interactions that underlie these functions so that researchers can predict what will happen when something changes — when certain neurons become more active or when a genetic mutation is introduced, for example. When paired with experimental data from MIT labs, the center’s models will help explain not just how these circuits work, but also how they are altered by genes, the environment, aging, and disease. These focus areas encompass circuits and behaviors often affected by psychiatric disorders and neurodegeneration, and models will give researchers new opportunities to explore their origins and potential treatment strategies.
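
    A toy version of that predict-the-perturbation workflow, using a generic firing-rate network rather than any of the center’s models, might look like this: simulate a circuit, boost one neuron’s input, and compare the resulting activity:

    ```python
    # Toy illustration, not an ICoN Center model: simulate a small firing-rate
    # network, then re-run it with one neuron's input boosted to see how the
    # perturbation propagates through the circuit.
    import numpy as np

    rng = np.random.default_rng(0)
    n, dt, steps = 50, 0.01, 2000
    W = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, n))  # random coupling matrix

    def simulate(extra_drive=0.0, target=0):
        """Integrate dr/dt = -r + tanh(W r + I), optionally boosting one neuron."""
        r = np.zeros(n)
        I = np.full(n, 0.5)
        I[target] += extra_drive
        for _ in range(steps):
            r += dt * (-r + np.tanh(W @ r + I))
        return r

    baseline = simulate()
    perturbed = simulate(extra_drive=1.0)  # make neuron 0 more active
    shift = np.abs(perturbed - baseline)
    print("neurons most affected downstream:", np.argsort(shift)[-5:])
    ```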

    “Lisa Yang is focused on helping the scientific community realize its goals in translational research,” says Nergis Mavalvala, dean of the School of Science and the Curtis and Kathleen Marble Professor of Astrophysics. “With her generous support, we can accelerate the pace of research by connecting the data to the delivery of tangible results.”

  • Deep learning helps predict traffic crashes before they happen

    Today’s world is one big maze, connected by layers of concrete and asphalt that afford us the luxury of navigation by vehicle. Yet despite many of our road-related advancements — GPS lets us fire fewer neurons thanks to map apps, cameras alert us to potentially costly scrapes and scratches, and electric autonomous cars have lower fuel costs — our safety measures haven’t quite caught up. We still rely on a steady diet of traffic signals, trust, and the steel surrounding us to safely get from point A to point B.

    To get ahead of the uncertainty inherent to crashes, scientists from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Qatar Center for Artificial Intelligence developed a deep learning model that predicts very high-resolution crash risk maps. Fed on a combination of historical crash data, road maps, satellite imagery, and GPS traces, the risk maps describe the expected number of crashes over a period of time in the future, to identify high-risk areas and predict future crashes. 

    Typically, these types of risk maps are captured at much lower resolutions that hover around hundreds of meters, which means glossing over crucial details since the roads become blurred together. The team’s maps, by contrast, use 5×5-meter grid cells, and the higher resolution brings newfound clarity: The scientists found that a highway road, for example, has a higher risk than nearby residential roads, and ramps merging and exiting the highway have an even higher risk than other roads.

    “By capturing the underlying risk distribution that determines the probability of future crashes at all places, and without any historical data, we can find safer routes, enable auto insurance companies to provide customized insurance plans based on driving trajectories of customers, help city planners design safer roads, and even predict future crashes,” says MIT CSAIL PhD student Songtao He, a lead author on a new paper about the research. 

    Even though car crashes are sparse, they cost about 3 percent of the world’s GDP and are the leading cause of death in children and young adults. This sparsity makes inferring maps at such high resolution a tricky task. Crashes at this level are thinly scattered — the average annual odds of a crash in a 5×5 grid cell are about one in 1,000 — and they rarely happen in the same location twice. Previous attempts to predict crash risk have been largely “historical,” as an area would only be considered high-risk if there was a previous nearby crash.

    The team’s approach casts a wider net to capture critical data. It identifies high-risk locations using GPS trajectory patterns, which give information about density, speed, and direction of traffic, and satellite imagery that describes road structures, such as the number of lanes, whether there’s a shoulder, or if there’s a large number of pedestrians. Then, even if a high-risk area has no recorded crashes, it can still be identified as high-risk, based on its traffic patterns and topology alone. 
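
    The paper’s actual architecture is not reproduced here, but the general shape of the problem, rasterized input channels in and a per-cell risk estimate out, fits a small fully convolutional network. Below is a hedged sketch with invented channel counts and layer sizes:

    ```python
    # Hedged sketch, not the authors' architecture: a fully convolutional
    # network mapping rasterized inputs (e.g., satellite RGB plus GPS-derived
    # density, speed, and direction channels) to a per-cell crash-risk map.
    import torch
    import torch.nn as nn

    class RiskMapNet(nn.Module):
        def __init__(self, in_channels=6):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 1, kernel_size=1),  # one risk value per grid cell
            )

        def forward(self, x):
            # Softplus keeps the predicted expected crash count non-negative.
            return nn.functional.softplus(self.net(x))

    model = RiskMapNet()
    tiles = torch.randn(4, 6, 128, 128)  # a batch of 128x128-cell map tiles
    risk = model(tiles)                  # shape: (4, 1, 128, 128)
    print(risk.shape)
    ```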

    To evaluate the model, the scientists used crashes and data from 2017 and 2018, and tested its performance at predicting crashes in 2019 and 2020. Many locations with no recorded crashes in the training data were identified as high-risk and did indeed experience crashes during the follow-up years.

    “Our model can generalize from one city to another by combining multiple clues from seemingly unrelated data sources. This is a step toward general AI, because our model can predict crash maps in uncharted territories,” says Amin Sadeghi, a lead scientist at Qatar Computing Research Institute (QCRI) and an author on the paper. “The model can be used to infer a useful crash map even in the absence of historical crash data, which could translate to positive use for city planning and policymaking by comparing imaginary scenarios.” 

    The dataset covered 7,500 square kilometers from Los Angeles, New York City, Chicago, and Boston. Among the four cities, L.A. was the most unsafe, since it had the highest crash density, followed by New York City, Chicago, and Boston.

    “If people can use the risk map to identify potentially high-risk road segments, they can take action in advance to reduce the risk of trips they take. Apps like Waze and Apple Maps have incident feature tools, but we’re trying to get ahead of the crashes — before they happen,” says He. 

    He and Sadeghi wrote the paper alongside Sanjay Chawla, research director at QCRI, and MIT professors of electrical engineering and computer science Mohammad Alizadeh, Hari Balakrishnan, and Sam Madden. They will present the paper at the 2021 International Conference on Computer Vision.

  • Making data visualizations more accessible

    In the early days of the Covid-19 pandemic, the Centers for Disease Control and Prevention produced a simple chart to illustrate how measures like mask wearing and social distancing could “flatten the curve” and reduce the peak of infections.

    The chart was amplified by news sites and shared on social media platforms, but it often lacked a corresponding text description to make it accessible for blind individuals who use a screen reader to navigate the web, shutting out many of the 253 million people worldwide who have visual disabilities.

    This alternative text is often missing from online charts, and even when it is included, it is frequently uninformative or even incorrect, according to qualitative data gathered by scientists at MIT.

    These researchers conducted a study with blind and sighted readers to determine which text is useful to include in a chart description, which text is not, and why. Ultimately, they found that captions for blind readers should focus on the overall trends and statistics in the chart, not its design elements or higher-level insights.

    They also created a conceptual model that can be used to evaluate a chart description, whether the text was generated automatically by software or manually by a human author. Their work could help journalists, academics, and communicators create descriptions that are more effective for blind individuals and guide researchers as they develop better tools to automatically generate captions.

    “Ninety-nine-point-nine percent of images on Twitter lack any kind of description — and that is not hyperbole, that is the actual statistic,” says Alan Lundgard, a graduate student in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and lead author of the paper. “Having people manually author those descriptions seems to be difficult for a variety of reasons. Perhaps semiautonomous tools could help with that. But it is crucial to do this preliminary participatory design work to figure out what is the target for these tools, so we are not generating content that is either not useful to its intended audience or, in the worst case, erroneous.”

    Lundgard wrote the paper with senior author Arvind Satyanarayan, an assistant professor of computer science who leads the Visualization Group in CSAIL. The research will be presented at the Institute of Electrical and Electronics Engineers Visualization Conference in October.

    Evaluating visualizations

    To develop the conceptual model, the researchers planned to begin by studying graphs featured by popular online publications such as FiveThirtyEight and NYTimes.com, but they ran into a problem — those charts mostly lacked any textual descriptions. So instead, they collected descriptions for these charts from graduate students in an MIT data visualization class and through an online survey, then grouped the captions into four categories.

    Level 1 descriptions focus on the elements of the chart, such as its title, legend, and colors. Level 2 descriptions describe statistical content, like the minimum, maximum, or correlations. Level 3 descriptions cover perceptual interpretations of the data, like complex trends or clusters. Level 4 descriptions include subjective interpretations that go beyond the data and draw on the author’s knowledge.
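
    To make the lower levels concrete, here is an illustrative sketch of template-based level 1 and level 2 captions for a simple data series; the wording templates are invented for illustration, not taken from the paper:

    ```python
    # Illustrative sketch: template captions for the framework's lower levels.
    # Level 1 names chart elements; level 2 reports simple statistics.

    def level1_caption(title, x_label, y_label):
        return (f"A line chart titled '{title}' with {x_label} on the x-axis "
                f"and {y_label} on the y-axis.")

    def level2_caption(x, y, y_label):
        lo, hi = min(y), max(y)
        trend = "rises" if y[-1] > y[0] else "falls"
        return (f"{y_label} ranges from {lo} to {hi}, peaking at "
                f"{x[y.index(hi)]}, and {trend} overall.")

    x = [2016, 2017, 2018, 2019, 2020]
    y = [12, 15, 14, 21, 18]
    print(level1_caption("Annual counts", "year", "count"))
    print(level2_caption(x, y, "Count"))
    ```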

    In a study with blind and sighted readers, the researchers presented visualizations with descriptions at different levels and asked participants to rate how useful they were. While both groups agreed that level 1 content on its own was not very helpful, sighted readers gave level 4 content the highest marks while blind readers ranked that content among the least useful.

    Survey results revealed that a majority of blind readers were emphatic that descriptions should not contain an author’s editorialization, but rather stick to straight facts about the data. On the other hand, most sighted readers preferred a description that told a story about the data.

    “For me, a surprising finding about the lack of utility for the highest-level content is that it ties very closely to feelings about agency and control as a disabled person. In our research, blind readers specifically didn’t want the descriptions to tell them what to think about the data. They want the data to be accessible in a way that allows them to interpret it for themselves, and they want to have the agency to do that interpretation,” Lundgard says.

    A more inclusive future

    This work could have implications as data scientists continue to develop and refine machine learning methods for autogenerating captions and alternative text.

    “We are not able to do it yet, but it is not inconceivable to imagine that in the future we would be able to automate the creation of some of this higher-level content and build models that target level 2 or level 3 in our framework. And now we know what the research questions are. If we want to produce these automated captions, what should those captions say? We are able to be a bit more directed in our future research because we have these four levels,” Satyanarayan says.

    In the future, the four-level framework could also help researchers develop machine learning models that can automatically suggest effective visualizations as part of the data analysis process, or models that can extract the most useful information from a chart.

    This research could also inform future work in Satyanarayan’s group that seeks to make interactive visualizations more accessible for blind readers who use a screen reader to access and interpret the information. 

    “The question of how to ensure that charts and graphs are accessible to screen reader users is both a socially important equity issue and a challenge that can advance the state of the art in AI,” says Meredith Ringel Morris, director and principal scientist of the People + AI Research team at Google Research, who was not involved with this study. “By introducing a framework for conceptualizing natural language descriptions of information graphics that is grounded in end-user needs, this work helps ensure that future AI researchers will focus their efforts on problems aligned with end-users’ values.”

    Morris adds: “Rich natural-language descriptions of data graphics will not only expand access to critical information for people who are blind, but will also benefit a much wider audience as eyes-free interactions via smart speakers, chatbots, and other AI-powered agents become increasingly commonplace.”

    This research was supported by the National Science Foundation.