More stories

  • Study finds the risks of sharing health care data are low

    In recent years, scientists have made great strides in their ability to develop artificial intelligence algorithms that can analyze patient data and come up with new ways to diagnose disease or predict which treatments work best for different patients.

    The success of those algorithms depends on access to patient health data, which has been stripped of personal information that could be used to identify individuals from the dataset. However, the possibility that individuals could be identified through other means has raised concerns among privacy advocates.

    In a new study, a team of researchers led by MIT Principal Research Scientist Leo Anthony Celi has quantified the potential risk of this kind of patient re-identification and found that it is currently extremely low relative to the risk of data breach. In fact, between 2016 and 2021, the period examined in the study, there were no reports of patient re-identification through publicly available health data.

    The findings suggest that the potential risk to patient privacy is greatly outweighed by the gains for patients, who benefit from better diagnosis and treatment, says Celi. He hopes that in the near future, these datasets will become more widely available and include a more diverse group of patients.

    “We agree that there is some risk to patient privacy, but there is also a risk of not sharing data,” he says. “There is harm when data is not shared, and that needs to be factored into the equation.”

    Celi, who is also an instructor at the Harvard T.H. Chan School of Public Health and an attending physician with the Division of Pulmonary, Critical Care and Sleep Medicine at the Beth Israel Deaconess Medical Center, is the senior author of the new study. Kenneth Seastedt, a thoracic surgery fellow at Beth Israel Deaconess Medical Center, is the lead author of the paper, which appears today in PLOS Digital Health.

    Risk-benefit analysis

    Large health record databases created by hospitals and other institutions contain a wealth of information on diseases such as heart disease, cancer, macular degeneration, and Covid-19, which researchers use to try to discover new ways to diagnose and treat disease.

    Celi and others at MIT’s Laboratory for Computational Physiology have created several publicly available databases, including the Medical Information Mart for Intensive Care (MIMIC), which they recently used to develop algorithms that can help doctors make better medical decisions. Many other research groups have also used the data, and others have created similar databases in countries around the world.

    Typically, when patient data is entered into this kind of database, certain types of identifying information are removed, including patients’ names, addresses, and phone numbers. This is intended to prevent patients from being re-identified and having information about their medical conditions made public.
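
    As a concrete illustration of that de-identification step, here is a minimal sketch that drops the direct identifiers the article names (names, addresses, phone numbers) from a hypothetical patient table. The column names and the use of pandas are illustrative assumptions, not details from the study.

    ```python
    import pandas as pd

    # Hypothetical patient table; the column names are illustrative only.
    records = pd.DataFrame({
        "name": ["A. Smith", "B. Jones"],
        "address": ["1 Main St", "2 Elm St"],
        "phone": ["555-0100", "555-0101"],
        "age": [63, 71],
        "diagnosis": ["sepsis", "pneumonia"],
    })

    # Direct identifiers named in the article: names, addresses, phone numbers.
    DIRECT_IDENTIFIERS = ["name", "address", "phone"]

    def deidentify(df: pd.DataFrame) -> pd.DataFrame:
        """Drop direct identifiers before a dataset is shared for research."""
        return df.drop(columns=[c for c in DIRECT_IDENTIFIERS if c in df.columns])

    print(deidentify(records).columns.tolist())  # ['age', 'diagnosis']
    ```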

    However, concerns about privacy have slowed the development of more publicly available databases with this kind of information, Celi says. In the new study, he and his colleagues set out to ask what the actual risk of patient re-identification is. First, they searched PubMed, a database of scientific papers, for any reports of patient re-identification from publicly available health data, but found none.

    To expand the search, the researchers then examined media reports from September 2016 to September 2021, using Media Cloud, an open-source global news database and analysis tool. In a search of more than 10,000 U.S. media publications during that time, they did not find a single instance of patient re-identification from publicly available health data.

    In contrast, they found that during the same time period, health records of nearly 100 million people were stolen through data breaches of information that was supposed to be securely stored.

    “Of course, it’s good to be concerned about patient privacy and the risk of re-identification, but that risk, although it’s not zero, is minuscule compared to the issue of cyber security,” Celi says.

    Better representation

    More widespread sharing of de-identified health data is necessary, Celi says, to help expand the representation of minority groups in the United States, who have traditionally been underrepresented in medical studies. He is also working to encourage the development of more such databases in low- and middle-income countries.

    “We cannot move forward with AI unless we address the biases that lurk in our datasets,” he says. “When we have this debate over privacy, no one hears the voice of the people who are not represented. People are deciding for them that their data need to be protected and should not be shared. But they are the ones whose health is at stake; they’re the ones who would most likely benefit from data-sharing.”

    Instead of asking for patient consent to share data, which he says may exacerbate the exclusion of many people who are now underrepresented in publicly available health data, Celi recommends enhancing the existing safeguards that are in place to protect such datasets. One new strategy that he and his colleagues have begun using is to share the data in a way that it can’t be downloaded, and all queries run on it can be monitored by the administrators of the database. This allows them to flag any user inquiry that seems like it might not be for legitimate research purposes, Celi says.
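
    The article does not spell out how that monitoring works; the sketch below is only a hypothetical illustration of the general idea, with made-up "suspicious" query patterns and an `execute` callback standing in for whatever backend actually runs the analysis on the server.

    ```python
    import re
    from datetime import datetime, timezone

    # Made-up patterns that might suggest bulk extraction rather than analysis.
    SUSPICIOUS_PATTERNS = [r"\bSELECT\s+\*\s+FROM\b", r"\bLIMIT\s+\d{6,}\b"]

    audit_log = []

    def run_monitored_query(sql, execute):
        """Run a query server-side, record it, and flag suspicious requests.

        The raw tables are never downloadable; analysts only see query results,
        and administrators can review every entry in `audit_log`.
        """
        flagged = any(re.search(p, sql, re.IGNORECASE) for p in SUSPICIOUS_PATTERNS)
        audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "query": sql,
            "flagged": flagged,
        })
        return execute(sql)

    # Example with a stand-in backend that just echoes the query.
    run_monitored_query("SELECT * FROM admissions", execute=lambda q: q)
    print(audit_log[-1]["flagged"])  # True: matches a bulk-extraction pattern
    ```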

    “What we are advocating for is performing data analysis in a very secure environment so that we weed out any nefarious players trying to use the data for some other reasons apart from improving population health,” he says. “We’re not saying that we should disregard patient privacy. What we’re saying is that we have to also balance that with the value of data sharing.”

    The research was funded by the National Institutes of Health through the National Institute of Biomedical Imaging and Bioengineering.

  • Four from MIT receive NIH New Innovator Awards for 2022

    The National Institutes of Health (NIH) has awarded grants to four MIT faculty members as part of its High-Risk, High-Reward Research program.

    The program supports unconventional approaches to challenges in biomedical, behavioral, and social sciences. Each year, NIH Director’s Awards are granted to program applicants who propose high-risk, high-impact research in areas relevant to the NIH’s mission. In doing so, the NIH encourages innovative proposals that, due to their inherent risk, might struggle in the traditional peer-review process.

    This year, Lindsay Case, Siniša Hrvatin, Deblina Sarkar, and Caroline Uhler have been chosen to receive the New Innovator Award, which funds exceptionally creative research from early-career investigators. The award, which was established in 2007, supports researchers who are within 10 years of their final degree or clinical residency and have not yet received a research project grant or equivalent NIH grant.

    Lindsay Case, the Irwin and Helen Sizer Department of Biology Career Development Professor and an extramural member of the Koch Institute for Integrative Cancer Research, uses biochemistry and cell biology to study the spatial organization of signal transduction. Her work focuses on understanding how signaling molecules assemble into compartments with unique biochemical and biophysical properties to enable cells to sense and respond to information in their environment. Earlier this year, Case was one of two MIT assistant professors named as Searle Scholars.

    Siniša Hrvatin, who joined the School of Science faculty this past winter, is an assistant professor in the Department of Biology and a core member at the Whitehead Institute for Biomedical Research. He studies how animals and cells enter, regulate, and survive states of dormancy such as torpor and hibernation, aiming to harness the potential of these states therapeutically.

    Deblina Sarkar is an assistant professor and AT&T Career Development Chair Professor at the MIT Media Lab. Her research combines the interdisciplinary fields of nanoelectronics, applied physics, and biology to invent disruptive technologies for energy-efficient nanoelectronics and merge such next-generation technologies with living matter to create a new paradigm for life-machine symbiosis. Her high-risk, high-reward proposal received the rare perfect impact score of 10, which is the highest score awarded by NIH.

    Caroline Uhler is a professor in the Department of Electrical Engineering and Computer Science and the Institute for Data, Systems, and Society. In addition, she is a core institute member at the Broad Institute of MIT and Harvard, where she co-directs the Eric and Wendy Schmidt Center. By combining machine learning, statistics, and genomics, she develops representation learning and causal inference methods to elucidate gene regulation in health and disease.

    The High-Risk, High-Reward Research program is supported by the NIH Common Fund, which oversees programs that pursue major opportunities and gaps in biomedical research that require collaboration across NIH Institutes and Centers. In addition to the New Innovator Award, the NIH also issues three other awards each year: the Pioneer Award, which supports bold and innovative research projects with unusually broad scientific impact; the Transformative Research Award, which supports risky and untested projects with transformative potential; and the Early Independence Award, which allows especially impressive junior scientists to skip the traditional postdoctoral training program to launch independent research careers.

    This year, the High-Risk, High-Reward Research program is awarding 103 awards, including eight Pioneer Awards, 72 New Innovator Awards, nine Transformative Research Awards, and 14 Early Independence Awards. These 103 awards total approximately $285 million in support from the institutes, centers, and offices across NIH over five years. “The science advanced by these researchers is poised to blaze new paths of discovery in human health,” says Lawrence A. Tabak DDS, PhD, who is performing the duties of the director of NIH. “This unique cohort of scientists will transform what is known in the biological and behavioral world. We are privileged to support this innovative science.”

  • Learning on the edge

    Microcontrollers, miniature computers that can run simple commands, are the basis for billions of connected devices, from internet-of-things (IoT) devices to sensors in automobiles. But cheap, low-power microcontrollers have extremely limited memory and no operating system, making it challenging to train artificial intelligence models on “edge devices” that work independently from central computing resources.

    Training a machine-learning model on an intelligent edge device allows it to adapt to new data and make better predictions. For instance, training a model on a smart keyboard could enable the keyboard to continually learn from the user’s writing. However, the training process requires so much memory that it is typically done using powerful computers at a data center, before the model is deployed on a device. This is more costly and raises privacy issues since user data must be sent to a central server.

    To address this problem, researchers at MIT and the MIT-IBM Watson AI Lab developed a new technique that enables on-device training using less than a quarter of a megabyte of memory. Other training solutions designed for connected devices can use more than 500 megabytes of memory, greatly exceeding the 256-kilobyte capacity of most microcontrollers (there are 1,024 kilobytes in one megabyte).

    The intelligent algorithms and framework the researchers developed reduce the amount of computation required to train a model, which makes the process faster and more memory efficient. Their technique can be used to train a machine-learning model on a microcontroller in a matter of minutes.

    This technique also preserves privacy by keeping data on the device, which could be especially beneficial when data are sensitive, such as in medical applications. It also could enable customization of a model based on the needs of users. Moreover, the framework preserves or improves the accuracy of the model when compared to other training approaches.

    “Our study enables IoT devices to not only perform inference but also continuously update the AI models to newly collected data, paving the way for lifelong on-device learning. The low resource utilization makes deep learning more accessible and can have a broader reach, especially for low-power edge devices,” says Song Han, an associate professor in the Department of Electrical Engineering and Computer Science (EECS), a member of the MIT-IBM Watson AI Lab, and senior author of the paper describing this innovation.

    Joining Han on the paper are co-lead authors and EECS PhD students Ji Lin and Ligeng Zhu, as well as MIT postdocs Wei-Ming Chen and Wei-Chen Wang, and Chuang Gan, a principal research staff member at the MIT-IBM Watson AI Lab. The research will be presented at the Conference on Neural Information Processing Systems.

    Han and his team previously addressed the memory and computational bottlenecks that exist when trying to run machine-learning models on tiny edge devices, as part of their TinyML initiative.

    Lightweight training

    A common type of machine-learning model is known as a neural network. Loosely based on the human brain, these models contain layers of interconnected nodes, or neurons, that process data to complete a task, such as recognizing people in photos. The model must be trained first, which involves showing it millions of examples so it can learn the task. As it learns, the model increases or decreases the strength of the connections between neurons, which are known as weights.

    The model may undergo hundreds of updates as it learns, and the intermediate activations must be stored during each round. In a neural network, activations are the intermediate results produced by the middle layers. Because there may be millions of weights and activations, training a model requires much more memory than running a pre-trained model, Han explains.
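
    A rough back-of-the-envelope comparison makes the point; the parameter and activation counts below are invented for illustration, not taken from the paper.

    ```python
    # Illustrative numbers only: a small vision model in 32-bit floating point.
    BYTES_PER_FLOAT32 = 4
    n_weights = 300_000          # parameters stored for inference
    n_activations = 1_500_000    # intermediate activations kept during a training step

    inference_kb = n_weights * BYTES_PER_FLOAT32 / 1024
    training_kb = (n_weights + n_activations) * BYTES_PER_FLOAT32 / 1024

    print(f"inference needs ~{inference_kb:.0f} KB (weights only)")
    print(f"training needs  ~{training_kb:.0f} KB (weights + stored activations)")
    ```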

    Han and his collaborators employed two algorithmic solutions to make the training process more efficient and less memory-intensive. The first, known as sparse update, uses an algorithm that identifies the most important weights to update at each round of training. The algorithm starts freezing the weights one at a time until it sees the accuracy dip to a set threshold, then it stops. The remaining weights are updated, while the activations corresponding to the frozen weights don’t need to be stored in memory.

    “Updating the whole model is very expensive because there are a lot of activations, so people tend to update only the last layer, but as you can imagine, this hurts the accuracy. For our method, we selectively update those important weights and make sure the accuracy is fully preserved,” Han says.
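
    A loose, framework-agnostic sketch of that greedy freezing loop is shown below. Here `evaluate` is a hypothetical callback that returns validation accuracy with a given set of layers frozen; the real method's choice of which weights to update is more sophisticated than this.

    ```python
    def choose_layers_to_freeze(layers, evaluate, max_accuracy_drop=0.01):
        """Freeze layers one at a time until accuracy dips past a set threshold.

        Frozen layers are not updated, so their activations never need to be
        stored; the remaining (trainable) layers are updated as usual.
        """
        baseline = evaluate(frozen=frozenset())
        frozen = frozenset()
        for layer in layers:
            candidate = frozen | {layer}
            if baseline - evaluate(frozen=candidate) <= max_accuracy_drop:
                frozen = candidate      # harmless to freeze: keep it frozen
            else:
                break                   # accuracy dipped too far: stop freezing
        trainable = [l for l in layers if l not in frozen]
        return frozen, trainable

    # Toy usage: pretend accuracy drops by 0.004 for every frozen layer.
    layers = ["conv1", "conv2", "conv3", "fc"]
    frozen, trainable = choose_layers_to_freeze(
        layers, evaluate=lambda frozen: 0.90 - 0.004 * len(frozen))
    print(sorted(frozen), trainable)    # first two layers frozen, rest trainable
    ```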

    Their second solution involves quantized training and simplifying the weights, which are typically 32 bits. An algorithm rounds the weights so they are only eight bits, through a process known as quantization, which cuts the amount of memory for both training and inference. Inference is the process of applying a model to a dataset and generating a prediction. Then the algorithm applies a technique called quantization-aware scaling (QAS), which acts like a multiplier to adjust the ratio between weight and gradient, to avoid any drop in accuracy that may come from quantized training.
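
    The sketch below shows the rounding step and a stand-in for the scaling idea. The actual quantization-aware scaling rule in the paper is more involved, so treat `qas_scale` here as an illustration of adjusting the weight-to-gradient ratio, not the authors' exact formula.

    ```python
    import numpy as np

    def quantize_int8(w: np.ndarray):
        """Round 32-bit weights to 8-bit integers with a per-tensor scale factor."""
        scale = np.abs(w).max() / 127.0
        q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
        return q.astype(np.float32) * scale

    def qas_scale(grad: np.ndarray, w_fp32: np.ndarray, w_q: np.ndarray) -> np.ndarray:
        """Illustrative scaling: keep the gradient in proportion to the
        full-precision weights so quantization doesn't distort update sizes."""
        ratio = np.linalg.norm(w_fp32) / (np.linalg.norm(w_q) + 1e-12)
        return grad * ratio

    w = np.random.default_rng(0).normal(scale=0.05, size=1000).astype(np.float32)
    q, s = quantize_int8(w)
    print(w.nbytes, q.nbytes)                           # 4000 bytes -> 1000 bytes
    print(float(np.abs(w - dequantize(q, s)).max()))    # rounding error bounded by scale/2
    ```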

    The researchers developed a system, called a tiny training engine, that can run these algorithmic innovations on a simple microcontroller that lacks an operating system. This system changes the order of steps in the training process so more work is completed in the compilation stage, before the model is deployed on the edge device.

    “We push a lot of the computation, such as auto-differentiation and graph optimization, to compile time. We also aggressively prune the redundant operators to support sparse updates. Once at runtime, we have much less workload to do on the device,” Han explains.

    A successful speedup

    Their optimization only required 157 kilobytes of memory to train a machine-learning model on a microcontroller, whereas other techniques designed for lightweight training would still need between 300 and 600 megabytes.

    They tested their framework by training a computer vision model to detect people in images. After only 10 minutes of training, it learned to complete the task successfully. Their method was able to train a model more than 20 times faster than other approaches.

    Now that they have demonstrated the success of these techniques for computer vision models, the researchers want to apply them to language models and different types of data, such as time-series data. At the same time, they want to use what they’ve learned to shrink the size of larger models without sacrificing accuracy, which could help reduce the carbon footprint of training large-scale machine-learning models.

    “AI model adaptation/training on a device, especially on embedded controllers, is an open challenge. This research from MIT has not only successfully demonstrated the capabilities, but also opened up new possibilities for privacy-preserving device personalization in real-time,” says Nilesh Jain, a principal engineer at Intel who was not involved with this work. “Innovations in the publication have broader applicability and will ignite new systems-algorithm co-design research.”

    “On-device learning is the next major advance we are working toward for the connected intelligent edge. Professor Song Han’s group has shown great progress in demonstrating the effectiveness of edge devices for training,” adds Jilei Hou, vice president and head of AI research at Qualcomm. “Qualcomm has awarded his team an Innovation Fellowship for further innovation and advancement in this area.”

    This work is funded by the National Science Foundation, the MIT-IBM Watson AI Lab, the MIT AI Hardware Program, Amazon, Intel, Qualcomm, Ford Motor Company, and Google.

  • Making each vote count

    Graduate student Jacob Jaffe wants to improve the administration of American elections. To do that, he is posing “questions in political science that we haven’t been asking enough,” he says, “and solving them with methods we haven’t been using enough.”

    Considerable research has been devoted to understanding “who votes, and what makes people vote or not vote,” says Jaffe. He is training his attention on questions of a different nature: Does providing practical information to voters about how to cast their ballots change how they will vote? Is it possible to increase the accuracy of vote-counting, on a state-by-state and even precinct-by-precinct basis? How do voters experience polling places? These problems form the core of his dissertation.

    Taking advantage of the resources at the MIT Election Data and Science Lab, where he serves as a researcher, Jaffe conducts novel field experiments to gather highly detailed information on local, state, and federal elections, and analyzes this trove with advanced statistical techniques. Whether investigating the probability of miscounts in voting, or the possibility of changing a voter’s mode of voting, Jaffe intends to strengthen the scaffolding that supports representative government. “Elections are both theoretically and normatively important; they’re the basis of our belief in the moral rightness of the state to do the things the state does,” he says.

    Click this link

    For one of his keystone projects, Jaffe seized a unique opportunity to run a big field experiment. In summer 2020, at the height of the Covid-19 pandemic, he emailed 80,000 Floridians instructions on how to vote in an upcoming primary by mail. His email contained a link enabling recipients to fill out two simple questions to receive a ballot. “I wanted to learn if this was an effective method for getting people to vote by mail, and I proved it is, statistically,” he says. “This is important to know because if elections are held in times when we might need people to vote nonlocally or vote using one method over another — if they’re displaced by a hurricane or another emergency, for instance — I learned that we can effect a new vote mode practically and quickly.”

    One of Jaffe’s insights from this experiment is that “people do read their voting-related emails, but the content of the email has to be something they can act on proximately,” he says. “A message reminding them to vote two weeks from now is not so helpful.” The lower the burden on an individual to participate in voting, whether due to proximity to a polling site or instructions on how to receive and cast a ballot, the greater the likelihood of that person engaging in the election.

    “If we want people to vote by mail, we need to reduce the informational cost so it’s easier for voters to understand how the system works,” he says.

    Another significant research thrust for Jaffe involves scrutinizing accuracy in vote counting, using instances of recounts in presidential elections. Ensuring each vote counts, he says, “is one of the most fundamental questions in democracy.”

    With access to 20 elections in 2020, Jaffe is comparing original vote totals for each candidate to the recounted, correct tally, on a precinct-level basis. “Using original combinatorial techniques, I can estimate the probability of miscounting ballots,” he says. The ultimate goal is to generate a granular picture of the efficacy of election administration across the country.
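
    The article doesn't describe those combinatorial techniques, so the snippet below only shows the simplest version of the underlying comparison, with made-up precinct numbers: line up original tallies against the certified recount and turn the discrepancies into a rough per-ballot miscount rate.

    ```python
    import pandas as pd

    # Hypothetical precinct-level tallies: original count vs. certified recount.
    precincts = pd.DataFrame({
        "precinct": ["A", "B", "C"],
        "original": [1204, 893, 2210],
        "recount":  [1201, 893, 2216],
    })

    precincts["discrepancy"] = (precincts["recount"] - precincts["original"]).abs()
    overall_rate = precincts["discrepancy"].sum() / precincts["recount"].sum()
    print(f"estimated per-ballot miscount rate: {overall_rate:.4%}")
    ```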

    “It varies a lot by state, and most states do a good job,” he says. States that take their time in counting perform better. “There’s a phenomenon where some towns race to get results in as quickly as possible, and this affects their accuracy.”

    In spite of the bright spots, Jaffe sees chronic underfunding of American elections. “We need to give local administrators the resources, the time and money to fund employees to do their jobs,” he says. The worse the situation is, “the more likely that elections will be called wrong, with no one knowing.” Jaffe believes that his analysis can offer states useful information for improving election administration. “Determining how good a place is historically at counting ballots can help determine the likelihood of needing costly recounts in future elections,” he says.

    The ballot box and beyond

    It didn’t take Jaffe long to decide on a life dedicated to studying politics. Part of a Boston-area family who, he says, “liked discussing what was going on in the world,” he had his own subscriptions to Time magazine at age 9, and to The Economist in middle school. During high school, he volunteered for then-Massachusetts Representative Barney Frank and Senator John Kerry, working on constituent services. At Rice University, he interned all four years with political scientist Robert M. Stein, an expert on voting and elections. With Stein’s help, Jaffe landed a position the summer before his senior year with the Department of Justice (DOJ), researching voting rights cases.

    “The experience was fascinating, and the work felt super important,” says Jaffe. His portfolio involved determining whether legal challenges to particular elections met the statistical standard for racial gerrymandering. “I had to answer hard quantitative questions about the relationship between race and voting in an area, and whether minority candidates were systematically prevented from winning,” he says.

    But while Jaffe cared a lot about this work, he didn’t feel adequately challenged. “As a 21-year-old at DOJ, I learned that I could address problems in the world using statistics,” he says. “But I felt I could have a greater impact addressing tougher questions outside of voting rights.”

    Jaffe was drawn to political science at MIT, and specifically to the research of Charles Stewart III, the Kenan Sahin Distinguished Professor of Political Science, director of the MIT Election Lab, and head of Jaffe’s thesis committee. It wasn’t just the opportunity to plumb the lab’s singular repository of voting data that attracted Jaffe, but its commitment to making every vote count. For Jaffe, this was a call to arms to investigate the many, and sometimes quotidian, obstacles between citizens and ballot boxes.

    To this end, he has been analyzing, with the help of mathematical methods from queuing theory, why some elections involve wait lines of six hours and longer at polling sites. “We know that simpler ballots mean people don’t get stuck in these lines, where they might potentially give up before voting,” he says. “Looking at the content of ballots and the interval between voter check-in and check-out, I learned that adding races, rather than candidates, to a ballot means that people take more time completing ballots, leading to interminable lines.”
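
    That relationship can be made concrete with a standard queueing formula. The sketch below uses an M/M/c (Erlang C) model with invented arrival rates and per-race completion times; it is a textbook stand-in, not Jaffe's own analysis.

    ```python
    import math

    def expected_wait_minutes(arrivals_per_hour, booths, base_minutes, races,
                              minutes_per_race=0.25):
        """Mean wait in an M/M/c queue when ballot completion time grows with
        the number of races on the ballot (all parameters are illustrative)."""
        service_min = base_minutes + minutes_per_race * races
        mu = 60.0 / service_min                  # voters served per booth per hour
        lam = float(arrivals_per_hour)
        rho = lam / (booths * mu)
        if rho >= 1:
            return math.inf                      # demand exceeds capacity: line grows without bound
        a = lam / mu
        erlang_b = 1.0                           # iterative Erlang B recursion
        for k in range(1, booths + 1):
            erlang_b = a * erlang_b / (k + a * erlang_b)
        p_wait = erlang_b / (1 - rho + rho * erlang_b)   # Erlang C: P(arriving voter must wait)
        return p_wait / (booths * mu - lam) * 60.0

    print(expected_wait_minutes(120, booths=10, base_minutes=1, races=6))    # short wait
    print(expected_wait_minutes(120, booths=10, base_minutes=1, races=16))   # capacity exceeded: inf
    ```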

    A key takeaway from his ensemble of studies is that “while it’s relatively rare that elections are bad, we shouldn’t think that we’re good to go,” he says. “Instead, we need to be asking under what conditions do things get bad, and how can we make them better.”

  • Investigating at the interface of data science and computing

    A visual model of Guy Bresler’s research would probably look something like a Venn diagram. He works at the four-way intersection where theoretical computer science, statistics, probability, and information theory collide.

    “There are always new things to be done at the interface. There are always opportunities for entirely new questions to ask,” says Bresler, an associate professor who recently earned tenure in MIT’s Department of Electrical Engineering and Computer Science (EECS).

    A theoretician, he aims to understand the delicate interplay between structure in data, the complexity of models, and the amount of computation needed to learn those models. Recently, his biggest focus has been trying to unveil fundamental phenomena that are broadly responsible for determining the computational complexity of statistics problems — and finding the “sweet spot” where available data and computation resources enable researchers to effectively solve a problem.

    When trying to solve a complex statistics problem, there is often a tug-of-war between data and computation. Without enough data, the computation needed to solve a statistical problem can be intractable, or at least consume a staggering amount of resources. But get just enough data and suddenly the intractable becomes solvable; the amount of computation needed to come up with a solution drops dramatically.

    The majority of modern statistical problems exhibit this sort of trade-off between computation and data, with applications ranging from drug development to weather prediction. Another well-studied and practically important example is cryo-electron microscopy, Bresler says. With this technique, researchers use an electron microscope to take images of molecules in different orientations. The central challenge is how to solve the inverse problem — determining the molecule’s structure given the noisy data. Many statistical problems can be formulated as inverse problems of this sort.

    One aim of Bresler’s work is to elucidate relationships between the wide variety of different statistics problems currently being studied. The dream is to classify statistical problems into equivalence classes, as has been done for other types of computational problems in the field of computational complexity. Showing these sorts of relationships means that, instead of trying to understand each problem in isolation, researchers can transfer their understanding from a well-studied problem to a poorly understood one, he says.

    Adopting a theoretical approach

    For Bresler, a desire to theoretically understand various basic phenomena inspired him to follow a path into academia.

    Both of his parents worked as professors and showed how fulfilling academia can be, he says. His earliest introduction to the theoretical side of engineering came from his father, who is an electrical engineer and theoretician studying signal processing. Bresler was inspired by his work from an early age. As an undergraduate at the University of Illinois at Urbana-Champaign, he bounced between physics, math, and computer science courses. But no matter the topic, he gravitated toward the theoretical viewpoint.

    In graduate school at the University of California at Berkeley, Bresler enjoyed the opportunity to work on a wide variety of topics spanning probability, theoretical computer science, and mathematics. His driving motivator was a love of learning new things.

    “Working at the interface of multiple fields with new questions, there is a feeling that one had better learn as much as possible if one is to have any chance of finding the right tools to answer those questions,” he says.

    That curiosity led him to MIT for a postdoc in the Laboratory for Information and Decision Systems (LIDS) in 2013, and then he joined the faculty two years later as an assistant professor in EECS. He was named an associate professor in 2019.

    Bresler says he was drawn to the intellectual atmosphere at MIT, as well as the supportive environment for launching bold research quests and trying to make progress in new areas of study.

    Opportunities for collaboration

    “What really struck me was how vibrant and energetic and collaborative MIT is. I have this mental list of more than 20 people here who I would love to have lunch with every single week and collaborate with on research. So just based on sheer numbers, joining MIT was a clear win,” he says.

    He’s especially enjoyed collaborating with his students, who continually teach him new things and ask deep questions that drive exciting research projects. One such student, Matthew Brennan, who was one of Bresler’s closest collaborators, tragically and unexpectedly passed away in January 2021.

    The shock from Brennan’s death is still raw for Bresler, and it derailed his research for a time.

    “Beyond his own prodigious capabilities and creativity, he had this amazing ability to listen to an idea of mine that was almost completely wrong, extract from it a useful piece, and then pass the ball back,” he says. “We had the same vision for what we wanted to achieve in the work, and we were driven to try to tell a certain story. At the time, almost nobody was pursuing this particular line of work, and it was in a way kind of lonely. But he trusted me, and we encouraged one another to keep at it when things seemed bleak.”

    Those lessons in perseverance fuel Bresler as he and his students continue exploring questions that, by their nature, are difficult to answer.

    One area he’s worked in on-and-off for over a decade involves learning graphical models from data. Models of certain types of data, such as time-series data consisting of temperature readings, are often constructed by domain experts who have relevant knowledge and can build a reasonable model, he explains.

    But for many types of data with complex dependencies, such as social network or biological data, it is not at all clear what structure a model should take. Bresler’s work seeks to estimate a structured model from data, which could then be used for downstream applications like making recommendations or better predicting the weather.
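
    As a hedged, off-the-shelf illustration of what "estimating a structured model from data" can mean (a standard graphical lasso, not Bresler's own algorithms), the sketch below fits a sparse Gaussian graphical model with scikit-learn and reads edges off the estimated precision matrix.

    ```python
    import numpy as np
    from sklearn.covariance import GraphicalLassoCV

    # Toy data with a known sparse dependency structure (variables 0-1 and 1-2 linked).
    rng = np.random.default_rng(0)
    true_cov = np.array([[1.0, 0.6, 0.0, 0.0],
                         [0.6, 1.0, 0.3, 0.0],
                         [0.0, 0.3, 1.0, 0.0],
                         [0.0, 0.0, 0.0, 1.0]])
    X = rng.multivariate_normal(mean=np.zeros(4), cov=true_cov, size=500)

    model = GraphicalLassoCV().fit(X)

    # Nonzero off-diagonal entries of the estimated precision matrix correspond
    # to edges of the learned graphical model (direct dependencies).
    edges = (np.abs(model.precision_) > 1e-2).astype(int)
    print(edges)
    ```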

    The basic question of identifying good models, whether algorithmically in a complex setting or analytically, by specifying a useful toy model for theoretical analysis, connects the abstract work with engineering practice, he says.

    “In general, modeling is an art. Real life is complicated and if you write down some super-complicated model that tries to capture every feature of a problem, it is doomed,” says Bresler. “You have to think about the problem and understand the practical side of things on some level to identify the correct features of the problem to be modeled, so that you can hope to actually solve it and gain insight into what one should do in practice.”

    Outside the lab, Bresler often finds himself solving very different kinds of problems. He is an avid rock climber and spends much of his free time bouldering throughout New England.

    “I really love it. It is a good excuse to get outside and get sucked into a whole different world. Even though there is problem solving involved, and there are similarities at the philosophical level, it is totally orthogonal to sitting down and doing math,” he says.

  • Neurodegenerative disease can progress in newly identified patterns

    Neurodegenerative diseases — like amyotrophic lateral sclerosis (ALS, or Lou Gehrig’s disease), Alzheimer’s, and Parkinson’s — are complicated, chronic ailments that can present with a variety of symptoms, worsen at different rates, and have many underlying genetic and environmental causes, some of which are unknown. ALS, in particular, affects voluntary muscle movement and is always fatal, but while most people survive for only a few years after diagnosis, others live with the disease for decades. Manifestations of ALS can also vary significantly; slower disease development often correlates with onset in the limbs, affecting fine motor skills first, while the more serious, bulbar ALS impacts swallowing, speaking, breathing, and mobility. Therefore, understanding the progression of diseases like ALS is critical to enrollment in clinical trials, analysis of potential interventions, and discovery of root causes.

    However, assessing disease evolution is far from straightforward. Current clinical studies typically assume that health declines on a downward linear trajectory on a symptom rating scale, and use these linear models to evaluate whether drugs are slowing disease progression. In reality, data indicate that ALS often follows nonlinear trajectories, with periods where symptoms are stable alternating with periods when they are rapidly changing. Since data can be sparse, and health assessments often rely on subjective rating metrics measured at uneven time intervals, comparisons across patient populations are difficult. This heterogeneity in data and progression, in turn, complicates analyses of intervention effectiveness and can potentially mask disease origin.

    Now, a new machine-learning method developed by researchers from MIT, IBM Research, and elsewhere aims to better characterize ALS disease progression patterns to inform clinical trial design.

    “There are groups of individuals that share progression patterns. For example, some seem to have really fast-progressing ALS and others that have slow-progressing ALS that varies over time,” says Divya Ramamoorthy PhD ’22, a research specialist at MIT and lead author of a new paper on the work that was published this month in Nature Computational Science. “The question we were asking is: can we use machine learning to identify if, and to what extent, those types of consistent patterns across individuals exist?”

    Their technique, indeed, identified discrete and robust clinical patterns in ALS progression, many of which are non-linear. Further, these disease progression subtypes were consistent across patient populations and disease metrics. The team additionally found that their method can be applied to Alzheimer’s and Parkinson’s diseases as well.

    Joining Ramamoorthy on the paper are MIT-IBM Watson AI Lab members Ernest Fraenkel, a professor in the MIT Department of Biological Engineering; Research Scientist Soumya Ghosh of IBM Research; and Principal Research Scientist Kenney Ng, also of IBM Research. Additional authors include Kristen Severson PhD ’18, a senior researcher at Microsoft Research and former member of the Watson Lab and of IBM Research; Karen Sachs PhD ’06 of Next Generation Analytics; a team of researchers with Answer ALS; Jonathan D. Glass and Christina N. Fournier of the Emory University School of Medicine; the Pooled Resource Open-Access ALS Clinical Trials Consortium; ALS/MND Natural History Consortium; Todd M. Herrington of Massachusetts General Hospital (MGH) and Harvard Medical School; and James D. Berry of MGH.

    Video: MIT Professor Ernest Fraenkel describes early stages of his research looking at root causes of amyotrophic lateral sclerosis (ALS).

    Reshaping health decline

    After consulting with clinicians, the team of machine learning researchers and neurologists let the data speak for itself. They designed an unsupervised machine-learning model that employed two methods: Gaussian process regression and Dirichlet process clustering. These inferred the health trajectories directly from patient data and automatically grouped similar trajectories together without prescribing the number of clusters or the shape of the curves, forming ALS progression “subtypes.” Their method incorporated prior clinical knowledge in the form of a bias toward negative trajectories — consistent with expectations for neurodegenerative disease progressions — but did not assume any linearity. “We know that linearity is not reflective of what’s actually observed,” says Ng. “The methods and models that we use here were more flexible, in the sense that they capture what was seen in the data,” without the need for expensive labeled data and prescription of parameters.
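
    A loose sketch of that two-part recipe, using off-the-shelf components rather than the authors' model: fit a Gaussian process to each patient's (hypothetical) ALSFRS-R trajectory to get a smoothed curve on a common time grid, then group the curves with a Dirichlet-process mixture that does not fix the number of clusters in advance.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel
    from sklearn.mixture import BayesianGaussianMixture

    # Hypothetical per-patient visit times (months) and ALSFRS-R scores.
    patients = {
        "p1": ([0, 3, 6, 12], [46, 44, 41, 33]),
        "p2": ([0, 4, 9, 14], [45, 45, 44, 43]),
        "p3": ([0, 2, 5, 8],  [44, 38, 30, 22]),
        "p4": ([0, 5, 10, 14], [47, 46, 45, 44]),
    }

    grid = np.linspace(0, 14, 6).reshape(-1, 1)
    curves = []
    for times, scores in patients.values():
        gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
        gp.fit(np.array(times, dtype=float).reshape(-1, 1), np.array(scores, dtype=float))
        curves.append(gp.predict(grid))      # smoothed trajectory on a shared grid

    # Dirichlet-process mixture: clusters emerge from the data rather than being
    # fixed in advance (n_components is only an upper bound here).
    dp = BayesianGaussianMixture(n_components=3, covariance_type="diag",
                                 weight_concentration_prior_type="dirichlet_process",
                                 random_state=0)
    labels = dp.fit_predict(np.vstack(curves))
    print(labels)   # patients with similar trajectories share a label
    ```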

    Primarily, they applied the model to five longitudinal datasets from ALS clinical trials and observational studies. These used the gold standard to measure symptom development: the ALS functional rating scale revised (ALSFRS-R), which captures a global picture of patient neurological impairment but can be a bit of a “messy metric.” Additionally, survival probabilities, forced vital capacity (a measurement of respiratory function), and subscores of ALSFRS-R, which look at individual bodily functions, were incorporated.

    New regimes of progression and utility

    When their population-level model was trained and tested on these metrics, four dominant patterns of disease popped out of the many trajectories — sigmoidal fast progression, stable slow progression, unstable slow progression, and unstable moderate progression — many with strong nonlinear characteristics. Notably, it captured trajectories where patients experienced a sudden loss of ability, called a functional cliff, which would significantly impact treatments, enrollment in clinical trials, and quality of life.

    The researchers compared their method against other commonly used linear and nonlinear approaches in the field to separate the contribution of clustering and linearity to the model’s accuracy. The new work outperformed them, even patient-specific models, and found that subtype patterns were consistent across measures. Impressively, when data were withheld, the model was able to interpolate missing values, and, critically, could forecast future health measures. The model could also be trained on one ALSFRS-R dataset and predict cluster membership in others, making it robust, generalizable, and accurate with scarce data. So long as 6-12 months of data were available, health trajectories could be inferred with higher confidence than conventional methods.

    The researchers’ approach also provided insights into Alzheimer’s and Parkinson’s diseases, both of which can have a range of symptom presentations and progression. For Alzheimer’s, the new technique could identify distinct disease patterns, in particular variations in the rates of conversion of mild to severe disease. The Parkinson’s analysis demonstrated a relationship between progression trajectories for off-medication scores and disease phenotypes, such as the tremor-dominant or postural instability/gait difficulty forms of Parkinson’s disease.

    The work makes significant strides to find the signal amongst the noise in the time-series of complex neurodegenerative disease. “The patterns that we see are reproducible across studies, which I don’t believe had been shown before, and that may have implications for how we subtype the [ALS] disease,” says Fraenkel. As the FDA has been considering the impact of non-linearity in clinical trial designs, the team notes that their work is particularly pertinent.

    As new ways to understand disease mechanisms come online, this model provides another tool to pick apart illnesses like ALS, Alzheimer’s, and Parkinson’s from a systems biology perspective.

    “We have a lot of molecular data from the same patients, and so our long-term goal is to see whether there are subtypes of the disease,” says Fraenkel, whose lab looks at cellular changes to understand the etiology of diseases and possible targets for cures. “One approach is to start with the symptoms … and see if people with different patterns of disease progression are also different at the molecular level. That might lead you to a therapy. Then there’s the bottom-up approach, where you start with the molecules” and try to reconstruct biological pathways that might be affected. “We’re going [to be tackling this] from both ends … and finding if something meets in the middle.”

    This research was supported, in part, by the MIT-IBM Watson AI Lab, the Muscular Dystrophy Association, the Department of Veterans Affairs Office of Research and Development, the Department of Defense, the NSF Graduate Research Fellowship Program, the Siebel Scholars Fellowship, Answer ALS, the United States Army Medical Research Acquisition Activity, the National Institutes of Health, and the NIH/NINDS.

  • New program to support translational research in AI, data science, and machine learning

    The MIT School of Engineering and Pillar VC today announced the MIT-Pillar AI Collective, a one-year pilot program funded by a gift from Pillar VC that will provide seed grants for projects in artificial intelligence, machine learning, and data science with the goal of supporting translational research. The program will support graduate students and postdocs through access to funding, mentorship, and customer discovery.

    Administered by the MIT Deshpande Center for Technological Innovation, the MIT-Pillar AI Collective will center on the market discovery process, advancing projects through market research, customer discovery, and prototyping. Graduate students and postdocs will aim to emerge from the program having built minimum viable products, with support from Pillar VC and experienced industry leaders.

    “We are grateful for this support from Pillar VC and to join forces to converge the commercialization of translational research in AI, data science, and machine learning, with an emphasis on identifying and cultivating prospective entrepreneurs,” says Anantha Chandrakasan, dean of the MIT School of Engineering and Vannevar Bush Professor of Electrical Engineering and Computer Science. “Pillar’s focus on mentorship for our graduate students and postdoctoral researchers, and centering the program within the Deshpande Center, will undoubtedly foster big ideas in AI and create an environment for prospective companies to launch and thrive.” 

    Founded by Jamie Goldstein ’89, Pillar VC is committed to growing companies and investing in personal and professional development, coaching, and community.

    “Many of the most promising companies of the future are living at MIT in the form of transformational research in the fields of data science, AI, and machine learning,” says Goldstein. “We’re honored by the chance to help unlock this potential and catalyze a new generation of founders by surrounding students and postdoctoral researchers with the resources and mentorship they need to move from the lab to industry.”

    The program will launch with the 2022-23 academic year. Grants will be open only to MIT faculty and students, with an emphasis on funding for graduate students in their final year, as well as postdocs. Applications must be submitted by MIT employees with principal investigator status. A selection committee composed of three MIT representatives will include Devavrat Shah, faculty director of the Deshpande Center, the Andrew (1956) and Erna Viterbi Professor in the Department of Electrical Engineering and Computer Science and the Institute for Data, Systems, and Society; the chair of the selection committee; and a representative from the MIT Schwarzman College of Computing. The committee will also include representation from Pillar VC. Funding will be provided for up to nine research teams.

    “The Deshpande Center will serve as the perfect home for the new collective, given its focus on moving innovative technologies from the lab to the marketplace in the form of breakthrough products and new companies,” adds Chandrakasan. 

    “The Deshpande Center has a 20-year history of guiding new technologies toward commercialization, where they can have a greater impact,” says Shah. “This new collective will help the center expand its own impact by helping more projects realize their market potential and providing more support to researchers in the fast-growing fields of AI, machine learning, and data science.”

  • Q&A: Global challenges surrounding the deployment of AI

    The AI Policy Forum (AIPF) is an initiative of the MIT Schwarzman College of Computing to move the global conversation about the impact of artificial intelligence from principles to practical policy implementation. Formed in late 2020, AIPF brings together leaders in government, business, and academia to develop approaches to address the societal challenges posed by the rapid advances and increasing applicability of AI.

    The co-chairs of the AI Policy Forum are Aleksander Madry, the Cadence Design Systems Professor; Asu Ozdaglar, deputy dean of academics for the MIT Schwarzman College of Computing and head of the Department of Electrical Engineering and Computer Science; and Luis Videgaray, senior lecturer at MIT Sloan School of Management and director of MIT AI Policy for the World Project. Here, they discuss some of the key issues facing the AI policy landscape today and the challenges surrounding the deployment of AI. The three are co-organizers of the upcoming AI Policy Forum Summit on Sept. 28, which will further explore the issues discussed here.

    Q: Can you talk about the ongoing work of the AI Policy Forum and the AI policy landscape generally?

    Ozdaglar: There is no shortage of discussion about AI at different venues, but conversations are often high-level, focused on questions of ethics and principles, or on policy problems alone. The approach the AIPF takes to its work is to target specific questions with actionable policy solutions and engage with the stakeholders working directly in these areas. We work “behind the scenes” with smaller focus groups to tackle these challenges and aim to bring visibility to some potential solutions alongside the players working directly on them through larger gatherings.

    Q: AI impacts many sectors, which makes us naturally worry about its trustworthiness. Are there any emerging best practices for development and deployment of trustworthy AI?

    Madry: The most important thing to understand regarding deploying trustworthy AI is that AI technology isn’t some natural, preordained phenomenon. It is something built by people. People who are making certain design decisions.

    We thus need to advance research that can guide these decisions as well as provide more desirable solutions. But we also need to be deliberate and think carefully about the incentives that drive these decisions. 

    Now, these incentives stem largely from business considerations, but not exclusively so. That is, we should also recognize that proper laws and regulations, as well as thoughtful industry standards, have a big role to play here too.

    Indeed, governments can put in place rules that prioritize the value of deploying AI while being keenly aware of the corresponding downsides, pitfalls, and impossibilities. The design of such rules will be an ongoing and evolving process as the technology continues to improve and change, and we need to adapt to socio-political realities as well.

    Q: Perhaps one of the most rapidly evolving domains in AI deployment is in the financial sector. From a policy perspective, how should governments, regulators, and lawmakers make AI work best for consumers in finance?

    Videgaray: The financial sector is seeing a number of trends that present policy challenges at the intersection with AI systems. For one, there is the issue of explainability. By law (in the U.S. and in many other countries), lenders need to provide explanations to customers when they take actions that are in any way deleterious to a customer’s interest, such as denying a loan. However, as financial services increasingly rely on automated systems and machine learning models, the capacity of banks to unpack the “black box” of machine learning to provide that level of mandated explanation becomes tenuous. So how should the finance industry and its regulators adapt to this advance in technology? Perhaps we need new standards and expectations, as well as tools to meet these legal requirements.
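
    To make the explainability requirement concrete, here is a minimal, hypothetical sketch: a transparent logistic-regression scorer whose per-feature contributions can be turned into the "principal reasons" a lender reports for an adverse action. Real underwriting models are far more complex, which is exactly why the black-box problem arises; every name and number below is invented for illustration.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical, already-standardized applicant features.
    feature_names = ["income", "low_debt_ratio", "credit_history_len", "on_time_payments"]
    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 4))
    y = (X @ np.array([1.0, 0.8, 0.5, 0.7]) + rng.normal(scale=0.5, size=500) > 0).astype(int)

    model = LogisticRegression().fit(X, y)     # 1 = approve, 0 = deny

    def principal_reasons(applicant, top_k=2):
        """Features pushing this applicant's score most strongly toward denial."""
        contributions = model.coef_[0] * applicant
        worst = np.argsort(contributions)[:top_k]   # most negative contributions
        return [feature_names[i] for i in worst]

    applicant = np.array([-0.3, -1.2, 0.1, -1.5])
    if model.predict(applicant.reshape(1, -1))[0] == 0:
        print("Adverse action. Principal reasons:", principal_reasons(applicant))
    ```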

    Meanwhile, economies of scale and data network effects are leading to a proliferation of AI outsourcing, and more broadly, AI-as-a-service is becoming increasingly common in the finance industry. In particular, we are seeing fintech companies provide the tools for underwriting to other financial institutions — be it large banks or small, local credit unions. What does this segmentation of the supply chain mean for the industry? Who is accountable for the potential problems in AI systems deployed through several layers of outsourcing? How can regulators adapt to guarantee their mandates of financial stability, fairness, and other societal standards?

    Q: Social media is one of the most controversial sectors of the economy, resulting in many societal shifts and disruptions around the world. What policies or reforms might be needed to best ensure social media is a force for public good and not public harm?

    Ozdaglar: The role of social media in society is of growing concern to many, but the nature of these concerns can vary quite a bit — with some seeing social media as not doing enough to prevent, for example, misinformation and extremism, and others seeing it as unduly silencing certain viewpoints. This lack of a unified view on what the problem is impacts the capacity to enact any change. All of that is additionally coupled with the complexities of the legal framework in the U.S., spanning the First Amendment, Section 230 of the Communications Decency Act, and trade laws.

    However, these difficulties in regulating social media do not mean that there is nothing to be done. Indeed, regulators have begun to tighten their control over social media companies, both in the United States and abroad, be it through antitrust procedures or other means. In particular, Ofcom in the U.K. and the European Union are already introducing new layers of oversight to platforms. Additionally, some have proposed taxes on online advertising to address the negative externalities caused by the current social media business model. So, the policy tools are there, if the political will and proper guidance exist to implement them.