More stories

  • Joining the battle against health care bias

    Medical researchers are awash in a tsunami of clinical data. But we need major changes in how we gather, share, and apply this data to bring its benefits to all, says Leo Anthony Celi, principal research scientist at the MIT Laboratory for Computational Physiology (LCP). 

    One key change is to make clinical data of all kinds openly available, with the proper privacy safeguards, says Celi, a practicing intensive care unit (ICU) physician at the Beth Israel Deaconess Medical Center (BIDMC) in Boston. Another key is to fully exploit these open data with multidisciplinary collaborations among clinicians, academic investigators, and industry. A third key is to focus on the varying needs of populations across every country, and to empower the experts there to drive advances in treatment, says Celi, who is also an associate professor at Harvard Medical School. 

    In all of this work, researchers must actively seek to overcome the perennial problem of bias in understanding and applying medical knowledge. This deeply damaging problem is only heightened with the massive onslaught of machine learning and other artificial intelligence technologies. “Computers will pick up all our unconscious, implicit biases when we make decisions,” Celi warns.

    Sharing medical data 

    Founded by the LCP, the MIT Critical Data consortium builds communities across disciplines to leverage the data that are routinely collected in the process of ICU care to understand health and disease better. “We connect people and align incentives,” Celi says. “In order to advance, hospitals need to work with universities, who need to work with industry partners, who need access to clinicians and data.” 

    The consortium’s flagship project is MIMIC (Medical Information Mart for Intensive Care), an ICU database built at BIDMC. With about 35,000 users around the world, the MIMIC cohort is the most widely analyzed in critical care medicine. 

    International collaborations such as MIMIC highlight one of the biggest obstacles in health care: most clinical research is performed in rich countries, typically with most clinical trial participants being white males. “The findings of these trials are translated into treatment recommendations for every patient around the world,” says Celi. “We think that this is a major contributor to the sub-optimal outcomes that we see in the treatment of all sorts of diseases in Africa, in Asia, in Latin America.” 

    To fix this problem, “groups who are disproportionately burdened by disease should be setting the research agenda,” Celi says. 

    That’s the rule in the “datathons” (health hackathons) that MIT Critical Data has organized in more than two dozen countries, which apply the latest data science techniques to real-world health data. At the datathons, MIT students and faculty both learn from local experts and share their own skill sets. Many of these several-day events are sponsored by the MIT Industrial Liaison Program, the MIT International Science and Technology Initiatives program, or the MIT Sloan Latin America Office. 

    Datathons are typically held in the host country’s national language or dialect, rather than English, with representation from academia, industry, government, and other stakeholders. Doctors, nurses, pharmacists, and social workers join up with computer science, engineering, and humanities students to brainstorm and analyze potential solutions. “They need each other’s expertise to fully leverage and discover and validate the knowledge that is encrypted in the data, and that will be translated into the way they deliver care,” says Celi. 

    “Everywhere we go, there is incredible talent that is completely capable of designing solutions to their health-care problems,” he emphasizes. The datathons aim to further empower the professionals and students in the host countries to drive medical research, innovation, and entrepreneurship.

    Fighting built-in bias 

    Applying machine learning and other advanced data science techniques to medical data reveals that “bias exists in the data in unimaginable ways” in every type of health product, Celi says. Often this bias is rooted in the clinical trials required to approve medical devices and therapies. 

    One dramatic example comes from pulse oximeters, which provide readouts on oxygen levels in a patient’s blood. It turns out that these devices overestimate oxygen levels for people of color. “We have been under-treating individuals of color because the nurses and the doctors have been falsely assured that their patients have adequate oxygenation,” he says. “We think that we have harmed, if not killed, a lot of individuals in the past, especially during Covid, as a result of a technology that was not designed with inclusive test subjects.” 

    Such dangers only increase as the universe of medical data expands. “The data that we have available now for research is maybe two or three orders of magnitude more than what we had even 10 years ago,” Celi says. MIMIC, for example, now includes terabytes of X-ray, echocardiogram, and electrocardiogram data, all linked with related health records. Such enormous sets of data allow investigators to detect health patterns that were previously invisible. 

    “But there is a caveat,” Celi says. “It is trivial for computers to learn sensitive attributes that are not very obvious to human experts.” In a study released last year, for instance, he and his colleagues showed that algorithms can tell if a chest X-ray image belongs to a white patient or person of color, even without looking at any other clinical data. 

    “More concerningly, groups including ours have demonstrated that computers can learn easily if you’re rich or poor, just from your imaging alone,” Celi says. “We were able to train a computer to predict if you are on Medicaid, or if you have private insurance, if you feed them with chest X-rays without any abnormality. So again, computers are catching features that are not visible to the human eye.” And these features may lead algorithms to advise against therapies for people who are Black or poor, he says. 
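    The kind of audit Celi describes can be sketched in a few lines of code. The snippet below is purely illustrative: the data are synthetic, and the feature dimensions and the logistic-regression probe are stand-ins rather than the researchers' actual pipeline. It simply shows how one might test whether image-derived features encode a sensitive attribute such as insurance status.

    ```python
    # Hypothetical bias audit: can a simple model recover a sensitive attribute
    # (e.g., insurance type) from image-derived features alone?
    # Synthetic data stands in for real chest X-ray embeddings.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n, d = 2000, 64
    sensitive = rng.integers(0, 2, size=n)      # 0 = private insurance, 1 = Medicaid (synthetic)
    features = rng.normal(size=(n, d))
    features[:, 0] += 0.8 * sensitive           # deliberately leak the attribute into one feature

    X_tr, X_te, y_tr, y_te = train_test_split(features, sensitive, test_size=0.3, random_state=0)
    probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, probe.predict_proba(X_te)[:, 1])
    print(f"Audit AUC for predicting the sensitive attribute: {auc:.2f}")
    # An AUC well above 0.5 means the features encode the attribute,
    # so downstream models could exploit it as a shortcut.
    ```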

    Opening up industry opportunities 

    Every stakeholder stands to benefit when pharmaceutical firms and other health-care corporations better understand societal needs and can target their treatments appropriately, Celi says. 

    “We need to bring to the table the vendors of electronic health records and the medical device manufacturers, as well as the pharmaceutical companies,” he explains. “They need to be more aware of the disparities in the way that they perform their research. They need to have more investigators representing underrepresented groups of people, to provide that lens to come up with better designs of health products.” 

    Corporations could benefit by sharing results from their clinical trials, and could immediately see these potential benefits by participating in datathons, Celi says. “They could really witness the magic that happens when that data is curated and analyzed by students and clinicians with different backgrounds from different countries. So we’re calling out our partners in the pharmaceutical industry to organize these events with us!”

  • Success at the intersection of technology and finance

    Citadel founder and CEO Ken Griffin had some free advice for an at-capacity crowd of MIT students at the Wong Auditorium during a campus visit in April. “If you find yourself in a career where you’re not learning,” he told them, “it’s time to change jobs. In this world, if you’re not learning, you can find yourself irrelevant in the blink of an eye.”

    During a conversation with Bryan Landman ’11, senior quantitative research lead for Citadel’s Global Quantitative Strategies business, Griffin reflected on his career and offered predictions for the impact of technology on the finance sector. Citadel, which he launched in 1990, is now one of the world’s leading investment firms. Griffin also serves as non-executive chair of Citadel Securities, a market maker that is known as a key player in the modernization of markets and market structures.

    “We are excited to hear Ken share his perspective on how technology continues to shape the future of finance, including the emerging trends of quantum computing and AI,” said David Schmittlein, the John C Head III Dean and professor of marketing at MIT Sloan School of Management, who kicked off the program. The presentation was jointly sponsored by MIT Sloan, the MIT Schwarzman College of Computing, the School of Engineering, MIT Career Advising and Professional Development, and Citadel Securities Campus Recruiting.

    The future, in Griffin’s view, “is all about the application of engineering, software, and mathematics to markets. Successful entrepreneurs are those who have the tools to solve the unsolved problems of that moment in time.” He launched Citadel only one year after graduating from college. “History so far has been kind to the vision I had back in the late ’80s,” he said.

    Griffin realized very early in his career “that you could use a personal computer and quantitative finance to price traded securities in a way that was much more advanced than you saw on your typical equity trading desk on Wall Street.” Both businesses, he told the audience, are ultimately driven by research. “That’s where we formulate the ideas, and trading is how we monetize that research.”

    It’s also why Citadel and Citadel Securities employ several hundred software engineers. “We have a huge investment today in using modern technology to power our decision-making and trading,” said Griffin.

    One example of Citadel’s application of technology and science is the firm’s hiring of a meteorological team to expand the weather analytics expertise within its commodities business. While power supply is relatively easy to map and analyze, predicting demand is much more difficult. Citadel’s weather team feeds forecast data obtained from supercomputers to its traders. “Wind and solar are huge commodities,” Griffin explained, noting that the days with highest demand in the power market are cloudy, cold days with no wind. When you can forecast those days better than the market as a whole, that’s where you can identify opportunities, he added.

    Pros and cons of machine learning

    Asking about the impact of new technology on their sector, Landman noted that both Citadel and Citadel Securities are already leveraging machine learning. “In the market-making business,” Griffin said, “you see a real application for machine learning because you have so much data to parametrize the models with. But when you get into longer time horizon problems, machine learning starts to break down.”

    Griffin noted that the data obtained through machine learning is most helpful for investments with short time horizons, such as in its quantitative strategies business. “In our fundamental equities business,” he said, “machine learning is not as helpful as you would want because the underlying systems are not stationary.”

    Griffin was emphatic that “there has been a moment in time where being a really good statistician or really understanding machine-learning models was sufficient to make money. That won’t be the case for much longer.” One of the guiding principles at Citadel, he and Landman agreed, was that machine learning and other methodologies should not be used blindly. Each analyst has to cite the underlying economic theory driving their argument on investment decisions. “If you understand the problem in a different way than people who are just using the statistical models,” he said, “you have a real chance for a competitive advantage.”

    ChatGPT and a seismic shift

    Asked if ChatGPT will change history, Griffin predicted that the rise of capabilities in large language models will transform a substantial number of white-collar jobs. “With OpenAI for most routine commercial legal documents, ChatGPT will do a better job writing a lease than a young lawyer. This is the first time we are seeing traditionally white-collar jobs at risk due to technology, and that’s a sea change.”

    Griffin urged MIT students to work with the smartest people they can find, as he did: “The magic of Citadel has been a testament to the idea that by surrounding yourself with bright, ambitious people, you can accomplish something special. I went to great lengths to hire the brightest people I could find and gave them responsibility and trust early in their careers.”

    Even more critical to success is the willingness to advocate for oneself, Griffin said, using Gerald Beeson, Citadel’s chief operating officer, as an example. Beeson, who started as an intern at the firm, “consistently sought more responsibility and had the foresight to train his own successors.” Urging students to take ownership of their careers, Griffin advised: “Make it clear that you’re willing to take on more responsibility, and think about what the roadblocks will be.”

    When microphones were handed to the audience, students inquired what changes Griffin would like to see in the hedge fund industry, how Citadel assesses the risk and reward of potential projects, and whether hedge funds should give back to the open source community. Asked about the role that Citadel — and its CEO — should play in “the wider society,” Griffin spoke enthusiastically of his belief in participatory democracy. “We need better people on both sides of the aisle,” he said. “I encourage all my colleagues to be politically active. It’s unfortunate when firms shut down political dialogue; we actually embrace it.”

    Closing on an optimistic note, Griffin urged the students in the audience to go after success, declaring, “The world is always awash in challenge and its shortcomings, but no matter what anybody says, you live at the greatest moment in the history of the planet. Make the most of it.”

  • Study: AI models fail to reproduce human judgements about rule violations

    In an effort to improve fairness or reduce backlogs, machine-learning models are sometimes designed to mimic human decision making, such as deciding whether social media posts violate toxic content policies.

    But researchers from MIT and elsewhere have found that these models often do not replicate human decisions about rule violations. If models are not trained with the right data, they are likely to make different, often harsher judgements than humans would.

    In this case, the “right” data are those that have been labeled by humans who were explicitly asked whether items defy a certain rule. Training involves showing a machine-learning model millions of examples of this “normative data” so it can learn a task.

    But data used to train machine-learning models are typically labeled descriptively — meaning humans are asked to identify factual features, such as, say, the presence of fried food in a photo. If “descriptive data” are used to train models that judge rule violations, such as whether a meal violates a school policy that prohibits fried food, the models tend to over-predict rule violations.

    This drop in accuracy could have serious implications in the real world. For instance, if a descriptive model is used to make decisions about whether an individual is likely to reoffend, the researchers’ findings suggest it may cast stricter judgements than a human would, which could lead to higher bail amounts or longer criminal sentences.

    “I think most artificial intelligence/machine-learning researchers assume that the human judgements in data and labels are biased, but this result is saying something worse. These models are not even reproducing already-biased human judgments because the data they’re being trained on has a flaw: Humans would label the features of images and text differently if they knew those features would be used for a judgment. This has huge ramifications for machine learning systems in human processes,” says Marzyeh Ghassemi, an assistant professor and head of the Healthy ML Group in the Computer Science and Artificial Intelligence Laboratory (CSAIL).

    Ghassemi is senior author of a new paper detailing these findings, which was published today in Science Advances. Joining her on the paper are lead author Aparna Balagopalan, an electrical engineering and computer science graduate student; David Madras, a graduate student at the University of Toronto; David H. Yang, a former graduate student who is now co-founder of ML Estimation; Dylan Hadfield-Menell, an MIT assistant professor; and Gillian K. Hadfield, Schwartz Reisman Chair in Technology and Society and professor of law at the University of Toronto.

    Labeling discrepancy

    This study grew out of a different project that explored how a machine-learning model can justify its predictions. As they gathered data for that study, the researchers noticed that humans sometimes give different answers if they are asked to provide descriptive or normative labels about the same data.

    To gather descriptive labels, researchers ask labelers to identify factual features — does this text contain obscene language? To gather normative labels, researchers give labelers a rule and ask if the data violates that rule — does this text violate the platform’s explicit language policy?

    Surprised by this finding, the researchers launched a user study to dig deeper. They gathered four datasets to mimic different policies, such as a dataset of dog images that could be in violation of an apartment’s rule against aggressive breeds. Then they asked groups of participants to provide descriptive or normative labels.

    In each case, the descriptive labelers were asked to indicate whether three factual features were present in the image or text, such as whether the dog appears aggressive. Their responses were then used to craft judgements. (If a user said a photo contained an aggressive dog, then the policy was violated.) The labelers did not know the pet policy. On the other hand, normative labelers were given the policy prohibiting aggressive dogs, and then asked whether it had been violated by each image, and why.

    The researchers found that humans were significantly more likely to label an object as a violation in the descriptive setting. The disparity, which they computed using the absolute difference in labels on average, ranged from 8 percent on a dataset of images used to judge dress code violations to 20 percent for the dog images.
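    As a rough illustration of how such a disparity can be computed, the sketch below derives judgements mechanically from descriptive labels, compares them with (more lenient) normative labels, and reports the average absolute difference. The data and the leniency rate are synthetic stand-ins, not the study's.

    ```python
    # Illustrative sketch (not the study's data): deriving judgements from
    # descriptive labels and measuring the disparity against normative labels.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 500
    # Descriptive labeling: did the labeler see an "aggressive-looking" dog? (1 = yes)
    descriptive_feature = rng.binomial(1, 0.35, size=n)
    # Judgements derived mechanically from descriptive labels: feature present => violation
    judgement_from_descriptive = descriptive_feature
    # Normative labeling: labelers shown the policy tend to be more lenient
    leniency = rng.binomial(1, 0.4, size=n)        # some flagged dogs judged acceptable
    judgement_normative = descriptive_feature & (1 - leniency)

    disparity = np.mean(np.abs(judgement_from_descriptive - judgement_normative))
    print(f"Average absolute label disparity: {disparity:.2%}")
    ```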

    “While we didn’t explicitly test why this happens, one hypothesis is that maybe how people think about rule violations is different from how they think about descriptive data. Generally, normative decisions are more lenient,” Balagopalan says.

    Yet data are usually gathered with descriptive labels to train a model for a particular machine-learning task. These data are often repurposed later to train different models that perform normative judgements, like rule violations.

    Training troubles

    To study the potential impacts of repurposing descriptive data, the researchers trained two models to judge rule violations using one of their four data settings. They trained one model using descriptive data and the other using normative data, and then compared their performance.

    They found that if descriptive data are used to train a model, it will underperform a model trained to perform the same judgements using normative data. Specifically, the descriptive model is more likely to misclassify inputs by falsely predicting a rule violation. And the descriptive model’s accuracy was even lower when classifying objects that human labelers disagreed about.
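    A minimal version of that comparison, on synthetic stand-in data rather than the paper's four datasets, might look like the following: two classifiers see the same inputs, one trained on descriptive-style labels and one on normative-style labels, and the descriptive-trained model flags more violations that the normative labels say are not there.

    ```python
    # Sketch of the comparison (synthetic stand-in, not the paper's datasets):
    # train one classifier on descriptive-derived labels and one on normative
    # labels, then compare false-positive rates against normative ground truth.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(2)
    n, d = 4000, 20
    X = rng.normal(size=(n, d))
    score = X[:, 0] + 0.5 * X[:, 1]
    y_descriptive = (score > 0.0).astype(int)      # feature-present labels (stricter)
    y_normative = (score > 0.7).astype(int)        # rule-violation labels (more lenient)

    X_tr, X_te, yd_tr, _, yn_tr, yn_te = train_test_split(
        X, y_descriptive, y_normative, test_size=0.3, random_state=0)

    for name, labels in [("descriptive", yd_tr), ("normative", yn_tr)]:
        model = LogisticRegression(max_iter=1000).fit(X_tr, labels)
        pred = model.predict(X_te)
        fpr = np.mean(pred[yn_te == 0] == 1)       # violations predicted where none exist
        print(f"{name}-trained model false-positive rate: {fpr:.2%}")
    ```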

    “This shows that the data do really matter. It is important to match the training context to the deployment context if you are training models to detect if a rule has been violated,” Balagopalan says.

    It can be very difficult for users to determine how data have been gathered; this information can be buried in the appendix of a research paper or not revealed by a private company, Ghassemi says.

    Improving dataset transparency is one way this problem could be mitigated. If researchers know how data were gathered, then they know how those data should be used. Another possible strategy is to fine-tune a descriptively trained model on a small amount of normative data. This idea, known as transfer learning, is something the researchers want to explore in future work.

    They also want to conduct a similar study with expert labelers, like doctors or lawyers, to see if it leads to the same label disparity.

    “The way to fix this is to transparently acknowledge that if we want to reproduce human judgment, we must only use data that were collected in that setting. Otherwise, we are going to end up with systems that are going to have extremely harsh moderations, much harsher than what humans would do. Humans would see nuance or make another distinction, whereas these models don’t,” Ghassemi says.

    This research was funded, in part, by the Schwartz Reisman Institute for Technology and Society, Microsoft Research, the Vector Institute, and a Canada Research Council Chair.

  • Researchers create a tool for accurately simulating complex systems

    Researchers often use simulations when designing new algorithms, since testing ideas in the real world can be both costly and risky. But since it’s impossible to capture every detail of a complex system in a simulation, they typically collect a small amount of real data that they replay while simulating the components they want to study.

    Known as trace-driven simulation (the small pieces of real data are called traces), this method sometimes results in biased outcomes. This means researchers might unknowingly choose an algorithm that is not the best one they evaluated, and which will perform worse on real data than the simulation predicted.

    MIT researchers have developed a new method that eliminates this source of bias in trace-driven simulation. By enabling unbiased trace-driven simulations, the new technique could help researchers design better algorithms for a variety of applications, including improving video quality on the internet and increasing the performance of data processing systems.

    The researchers’ machine-learning algorithm draws on the principles of causality to learn how the data traces were affected by the behavior of the system. In this way, they can replay the correct, unbiased version of the trace during the simulation.

    When compared to a previously developed trace-driven simulator, the researchers’ simulation method correctly predicted which newly designed algorithm would be best for video streaming — meaning the one that led to less rebuffering and higher visual quality. Existing simulators that do not account for bias would have pointed researchers to a worse-performing algorithm.

    “Data are not the only thing that matter. The story behind how the data are generated and collected is also important. If you want to answer a counterfactual question, you need to know the underlying data generation story so you only intervene on those things that you really want to simulate,” says Arash Nasr-Esfahany, an electrical engineering and computer science (EECS) graduate student and co-lead author of a paper on this new technique.

    He is joined on the paper by co-lead authors and fellow EECS graduate students Abdullah Alomar and Pouya Hamadanian; recent graduate student Anish Agarwal PhD ’21; and senior authors Mohammad Alizadeh, an associate professor of electrical engineering and computer science; and Devavrat Shah, the Andrew and Erna Viterbi Professor in EECS and a member of the Institute for Data, Systems, and Society and of the Laboratory for Information and Decision Systems. The research was recently presented at the USENIX Symposium on Networked Systems Design and Implementation.

    Specious simulations

    The MIT researchers studied trace-driven simulation in the context of video streaming applications.

    In video streaming, an adaptive bitrate algorithm continually decides the video quality, or bitrate, to transfer to a device based on real-time data on the user’s bandwidth. To test how different adaptive bitrate algorithms impact network performance, researchers can collect real data from users during a video stream for a trace-driven simulation.

    They use these traces to simulate what would have happened to network performance had the platform used a different adaptive bitrate algorithm in the same underlying conditions.
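    In outline, a trace-driven replay can be as simple as the sketch below. The bitrates, trace values, and decision rule are hypothetical, not the researchers' simulator; the point is that the recorded throughput trace is replayed unchanged while only the algorithm under test varies, which is exactly the exogeneity assumption discussed next.

    ```python
    # Minimal trace-driven replay (illustrative only): recorded bandwidth samples
    # are replayed unchanged while a candidate adaptive-bitrate (ABR) rule picks
    # a quality for each video chunk. The key (and often false) assumption is
    # that the trace is exogenous, i.e., it would look the same under any algorithm.

    BITRATES_KBPS = [300, 750, 1200, 2400]   # available video qualities
    CHUNK_SECONDS = 4

    def simple_abr(last_throughput_kbps):
        """Pick the highest bitrate below the last observed throughput."""
        feasible = [b for b in BITRATES_KBPS if b <= last_throughput_kbps]
        return feasible[-1] if feasible else BITRATES_KBPS[0]

    def replay(trace_kbps, abr_rule):
        """Replay a recorded bandwidth trace and report total rebuffering time."""
        rebuffer_s, last_throughput = 0.0, trace_kbps[0]
        for throughput in trace_kbps:
            bitrate = abr_rule(last_throughput)
            download_s = bitrate * CHUNK_SECONDS / throughput
            rebuffer_s += max(0.0, download_s - CHUNK_SECONDS)
            last_throughput = throughput
        return rebuffer_s

    trace = [900, 1100, 400, 1500, 2500, 600]   # hypothetical recorded throughputs (kbps)
    print(f"Estimated rebuffering: {replay(trace, simple_abr):.1f} s")
    ```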

    Researchers have traditionally assumed that trace data are exogenous, meaning they aren’t affected by factors that are changed during the simulation. They would assume that, during the period when they collected the network performance data, the choices the bitrate adaptation algorithm made did not affect those data.

    But this is often a false assumption that results in biases about the behavior of new algorithms, making the simulation invalid, Alizadeh explains.

    “We recognized, and others have recognized, that this way of doing simulation can induce errors. But I don’t think people necessarily knew how significant those errors could be,” he says.

    To develop a solution, Alizadeh and his collaborators framed the issue as a causal inference problem. To collect an unbiased trace, one must understand the different causes that affect the observed data. Some causes are intrinsic to a system, while others are affected by the actions being taken.

    In the video streaming example, network performance is affected by the choices the bitrate adaptation algorithm made — but it’s also affected by intrinsic elements, like network capacity.

    “Our task is to disentangle these two effects, to try to understand what aspects of the behavior we are seeing are intrinsic to the system and how much of what we are observing is based on the actions that were taken. If we can disentangle these two effects, then we can do unbiased simulations,” he says.

    Learning from data

    But researchers often cannot directly observe intrinsic properties. This is where the new tool, called CausalSim, comes in. The algorithm can learn the underlying characteristics of a system using only the trace data.

    CausalSim takes trace data that were collected through a randomized control trial, and estimates the underlying functions that produced those data. The model tells the researchers, under the exact same underlying conditions that a user experienced, how a new algorithm would change the outcome.

    Using a typical trace-driven simulator, bias might lead a researcher to select a worse-performing algorithm, even though the simulation indicates it should be better. CausalSim helps researchers select the best algorithm that was tested.

    The MIT researchers observed this in practice. When they used CausalSim to design an improved bitrate adaptation algorithm, it led them to select a new variant that had a stall rate that was nearly 1.4 times lower than a well-accepted competing algorithm, while achieving the same video quality. The stall rate is the amount of time a user spent rebuffering the video.

    By contrast, an expert-designed trace-driven simulator predicted the opposite. It indicated that this new variant should cause a stall rate that was nearly 1.3 times higher. The researchers tested the algorithm on real-world video streaming and confirmed that CausalSim was correct.

    “The gains we were getting in the new variant were very close to CausalSim’s prediction, while the expert simulator was way off. This is really exciting because this expert-designed simulator has been used in research for the past decade. If CausalSim can so clearly be better than this, who knows what we can do with it?” says Hamadanian.

    During a 10-month experiment, CausalSim consistently improved simulation accuracy, resulting in algorithms that made about half as many errors as those designed using baseline methods.

    In the future, the researchers want to apply CausalSim to situations where randomized control trial data are not available or where it is especially difficult to recover the causal dynamics of the system. They also want to explore how to design and monitor systems to make them more amenable to causal analysis.

  • Researchers develop novel AI-based estimator for manufacturing medicine

    When medical companies manufacture the pills and tablets that treat any number of illnesses, aches, and pains, they need to isolate the active pharmaceutical ingredient from a suspension and dry it. The process requires a human operator to monitor an industrial dryer, agitate the material, and watch for the compound to take on the right qualities for compressing into medicine. The job depends heavily on the operator’s observations.   

    Methods for making that process less subjective and a lot more efficient are the subject of a recent Nature Communications paper authored by researchers at MIT and Takeda. The paper’s authors devise a way to use physics and machine learning to categorize the rough surfaces that characterize particles in a mixture. The technique, which uses a physics-enhanced autocorrelation-based estimator (PEACE), could change pharmaceutical manufacturing processes for pills and powders, increasing efficiency and accuracy and resulting in fewer failed batches of pharmaceutical products.  

    “Failed batches or failed steps in the pharmaceutical process are very serious,” says Allan Myerson, a professor of practice in the MIT Department of Chemical Engineering and one of the study’s authors. “Anything that improves the reliability of the pharmaceutical manufacturing, reduces time, and improves compliance is a big deal.”

    The team’s work is part of an ongoing collaboration between Takeda and MIT, launched in 2020. The MIT-Takeda Program aims to leverage the experience of both MIT and Takeda to solve problems at the intersection of medicine, artificial intelligence, and health care.

    In pharmaceutical manufacturing, determining whether a compound is adequately mixed and dried ordinarily requires stopping an industrial-sized dryer and taking samples off the manufacturing line for testing. Researchers at Takeda thought artificial intelligence could improve the task and reduce stoppages that slow down production. Originally the research team planned to use videos to train a computer model to replace a human operator. But determining which videos to use to train the model still proved too subjective. Instead, the MIT-Takeda team decided to illuminate particles with a laser during filtration and drying, and measure particle size distribution using physics and machine learning. 

    “We just shine a laser beam on top of this drying surface and observe,” says Qihang Zhang, a doctoral student in MIT’s Department of Electrical Engineering and Computer Science and the study’s first author. 

    A physics-derived equation describes the interaction between the laser and the mixture, while machine learning characterizes the particle sizes. The approach doesn’t require stopping and restarting the drying process, which means the entire job is more secure and more efficient than standard operating procedure, according to George Barbastathis, professor of mechanical engineering at MIT and corresponding author of the study.

    The machine learning algorithm also does not require many datasets to learn its job, because the physics allows for speedy training of the neural network.

    “We utilize the physics to compensate for the lack of training data, so that we can train the neural network in an efficient way,” says Zhang. “Only a tiny amount of experimental data is enough to get a good result.”

    Today, the only inline processes used for particle measurements in the pharmaceutical industry are for slurry products, where crystals float in a liquid. There is no method for measuring particles within a powder during mixing. Powders can be made from slurries, but when a liquid is filtered and dried its composition changes, requiring new measurements. In addition to making the process quicker and more efficient, using the PEACE mechanism makes the job safer because it requires less handling of potentially highly potent materials, the authors say. 

    The ramifications for pharmaceutical manufacturing could be significant, allowing drug production to be more efficient, sustainable, and cost-effective, by reducing the number of experiments companies need to conduct when making products. Monitoring the characteristics of a drying mixture is an issue the industry has long struggled with, according to Charles Papageorgiou, the director of Takeda’s Process Chemistry Development group and one of the study’s authors. 

    “It is a problem that a lot of people are trying to solve, and there isn’t a good sensor out there,” says Papageorgiou. “This is a pretty big step change, I think, with respect to being able to monitor, in real time, particle size distribution.”

    Papageorgiou said that the mechanism could have applications in other industrial pharmaceutical operations. At some point, the laser measurements may be used to train video-based imaging models, allowing manufacturers to use a camera for analysis rather than lasers. The company is now working to assess the tool on different compounds in its lab. 

    The results come directly from collaboration between Takeda and three MIT departments: Mechanical Engineering, Chemical Engineering, and Electrical Engineering and Computer Science. Over the last three years, researchers at MIT and Takeda have worked together on 19 projects focused on applying machine learning and artificial intelligence to problems in the health-care and medical industry as part of the MIT-Takeda Program. 

    Often, it can take years for academic research to translate to industrial processes. But researchers are hopeful that direct collaboration could shorten that timeline. Takeda is within walking distance of MIT’s campus, which allowed researchers to set up tests in the company’s lab, and real-time feedback from Takeda helped MIT researchers structure their research based on the company’s equipment and operations. 

    Combining the expertise and mission of both entities helps researchers ensure their experimental results will have real-world implications. The team has already filed for two patents and has plans to file for a third.

  • Martin Wainwright named director of the Institute for Data, Systems, and Society

    Martin Wainwright, the Cecil H. Green Professor in MIT’s departments of Electrical Engineering and Computer Science (EECS) and Mathematics, has been named the new director of the Institute for Data, Systems, and Society (IDSS), effective July 1.

    “Martin is a widely recognized leader in statistics and machine learning — both in research and in education. In taking on this leadership role in the college, Martin will work to build up the human and institutional behavior component of IDSS, while strengthening initiatives in both policy and statistics, and collaborations within the institute, across MIT, and beyond,” says Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing and the Henry Ellis Warren Professor of Electrical Engineering and Computer Science. “I look forward to working with him and supporting his efforts in this next chapter for IDSS.”

    “Martin holds a strong belief in the value of theoretical, experimental, and computational approaches to research and in facilitating connections between them. He also places much importance in having practical, as well as academic, impact,” says Asu Ozdaglar, deputy dean of academics for the MIT Schwarzman College of Computing, department head of EECS, and the MathWorks Professor of Electrical Engineering and Computer Science. “As the new director of IDSS, he will undoubtedly bring these tenets to the role in advancing the mission of IDSS and helping to shape its future.”

    A principal investigator in the Laboratory for Information and Decision Systems and the Statistics and Data Science Center, Wainwright joined the MIT faculty in July 2022 from the University of California at Berkeley, where he held the Howard Friesen Chair with a joint appointment between the departments of Electrical Engineering and Computer Science and Statistics.

    Wainwright received his bachelor’s degree in mathematics from the University of Waterloo, Canada, and doctoral degree in electrical engineering and computer science from MIT. He has received a number of awards and recognition, including an Alfred P. Sloan Foundation Fellowship, and best paper awards from the IEEE Signal Processing Society, IEEE Communications Society, and IEEE Information Theory and Communication Societies. He has also been honored with the Medallion Lectureship and Award from the Institute of Mathematical Statistics, and the COPSS Presidents’ Award from the Joint Statistical Societies. He was a section lecturer with the International Congress of Mathematicians in 2014 and received the Blackwell Award from the Institute of Mathematical Statistics in 2017.

    He is the author of “High-dimensional Statistics: A Non-Asymptotic Viewpoint” (Cambridge University Press, 2019), and is coauthor on several books, including on graphical models and on sparse statistical modeling.

    Wainwright succeeds Munther Dahleh, the William A. Coolidge Professor in EECS, who has helmed IDSS since its founding in 2015.

    “I am grateful to Munther and thank him for his leadership of IDSS. As the founding director, he has led the creation of a remarkable new part of MIT,” says Huttenlocher.

  • Drones navigate unseen environments with liquid neural networks

    In the vast, expansive skies where birds once ruled supreme, a new crop of aviators is taking flight. These pioneers of the air are not living creatures, but rather a product of deliberate innovation: drones. But these aren’t your typical flying bots, humming around like mechanical bees. Rather, they’re avian-inspired marvels that soar through the sky, guided by liquid neural networks to navigate ever-changing and unseen environments with precision and ease.

    Inspired by the adaptable nature of organic brains, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have introduced a method for robust flight navigation agents to master vision-based fly-to-target tasks in intricate, unfamiliar environments. The liquid neural networks, which can continuously adapt to new data inputs, showed prowess in making reliable decisions in unknown domains like forests, urban landscapes, and environments with added noise, rotation, and occlusion. These adaptable models, which outperformed many state-of-the-art counterparts in navigation tasks, could enable potential real-world drone applications like search and rescue, delivery, and wildlife monitoring.

    The researchers’ recent study, published today in Science Robotics, details how this new breed of agents can adapt to significant distribution shifts, a long-standing challenge in the field. The team’s new class of machine-learning algorithms, however, captures the causal structure of tasks from high-dimensional, unstructured data, such as pixel inputs from a drone-mounted camera. These networks can then extract crucial aspects of a task (i.e., understand the task at hand) and ignore irrelevant features, allowing acquired navigation skills to transfer seamlessly to new targets and environments.

    “We are thrilled by the immense potential of our learning-based control approach for robots, as it lays the groundwork for solving problems that arise when training in one environment and deploying in a completely distinct environment without additional training,” says Daniela Rus, CSAIL director and the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT. “Our experiments demonstrate that we can effectively teach a drone to locate an object in a forest during summer, and then deploy the model in winter, with vastly different surroundings, or even in urban settings, with varied tasks such as seeking and following. This adaptability is made possible by the causal underpinnings of our solutions. These flexible algorithms could one day aid in decision-making based on data streams that change over time, such as medical diagnosis and autonomous driving applications.”

    A daunting challenge was at the forefront: Do machine-learning systems understand the task they are given from data when flying drones to an unlabeled object? And, would they be able to transfer their learned skill and task to new environments with drastic changes in scenery, such as flying from a forest to an urban landscape? What’s more, unlike the remarkable abilities of our biological brains, deep learning systems struggle with capturing causality, frequently over-fitting their training data and failing to adapt to new environments or changing conditions. This is especially troubling for resource-limited embedded systems, like aerial drones, that need to traverse varied environments and respond to obstacles instantaneously. 

    The liquid networks, in contrast, offer promising preliminary indications of their capacity to address this crucial weakness in deep learning systems. The team’s system was first trained on data collected by a human pilot, to see how they transferred learned navigation skills to new environments under drastic changes in scenery and conditions. Unlike traditional neural networks that only learn during the training phase, the liquid neural net’s parameters can change over time, making them not only interpretable, but more resilient to unexpected or noisy data. 
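    The core update behind liquid time-constant networks, published by Hasani and colleagues, can be sketched numerically. The sizes, weights, and input stream below are toy values, and this simplified cell is only an illustration of the idea, not the flight controller used in the study.

    ```python
    # Simplified liquid time-constant (LTC) cell update (a sketch of the idea,
    # not the CSAIL flight controller): the effective time constant of each
    # neuron depends on the input, so the dynamics keep adapting after training.
    import numpy as np

    rng = np.random.default_rng(3)
    hidden, inputs = 8, 3
    W_in = rng.normal(scale=0.5, size=(hidden, inputs))
    W_rec = rng.normal(scale=0.5, size=(hidden, hidden))
    A = rng.normal(scale=0.5, size=hidden)       # bias-like parameter in the LTC ODE
    tau = np.full(hidden, 1.0)                   # base time constants
    dt = 0.1

    def ltc_step(x, u):
        f = np.tanh(W_rec @ x + W_in @ u)        # input- and state-dependent gate
        # Fused implicit Euler step used for LTC networks (Hasani et al.):
        return (x + dt * f * A) / (1.0 + dt * (1.0 / tau + f))

    x = np.zeros(hidden)
    for t in range(20):
        u = np.array([np.sin(0.3 * t), np.cos(0.3 * t), 1.0])   # toy input stream
        x = ltc_step(x, u)
    print("Final hidden state:", np.round(x, 3))
    ```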

    In a series of quadrotor closed-loop control experiments, the drones underwent range tests, stress tests, target rotation and occlusion, hiking with adversaries, triangular loops between objects, and dynamic target tracking. They tracked moving targets, and executed multi-step loops between objects in never-before-seen environments, surpassing performance of other cutting-edge counterparts. 

    The team believes that the ability to learn from limited expert data and understand a given task while generalizing to new environments could make autonomous drone deployment more efficient, cost-effective, and reliable. Liquid neural networks, they noted, could enable autonomous air mobility drones to be used for environmental monitoring, package delivery, autonomous vehicles, and robotic assistants. 

    “The experimental setup presented in our work tests the reasoning capabilities of various deep learning systems in controlled and straightforward scenarios,” says MIT CSAIL Research Affiliate Ramin Hasani. “There is still so much room left for future research and development on more complex reasoning challenges for AI systems in autonomous navigation applications, which has to be tested before we can safely deploy them in our society.”

    “Robust learning and performance in out-of-distribution tasks and scenarios are some of the key problems that machine learning and autonomous robotic systems have to conquer to make further inroads in society-critical applications,” says Alessio Lomuscio, professor of AI safety in the Department of Computing at Imperial College London. “In this context, the performance of liquid neural networks, a novel brain-inspired paradigm developed by the authors at MIT, reported in this study is remarkable. If these results are confirmed in other experiments, the paradigm here developed will contribute to making AI and robotic systems more reliable, robust, and efficient.”

    Clearly, the sky is no longer the limit, but rather a vast playground for the boundless possibilities of these airborne marvels. 

    Hasani and PhD student Makram Chahine; Patrick Kao ’22, MEng ’22; and PhD student Aaron Ray SM ’21 wrote the paper with Ryan Shubert ’20, MEng ’22; MIT postdocs Mathias Lechner and Alexander Amini; and Rus.

    This research was supported, in part, by Schmidt Futures, the U.S. Air Force Research Laboratory, the U.S. Air Force Artificial Intelligence Accelerator, and the Boeing Co.

  • A method for designing neural networks optimally suited for certain tasks

    Neural networks, a type of machine-learning model, are being used to help humans complete a wide variety of tasks, from predicting if someone’s credit score is high enough to qualify for a loan to diagnosing whether a patient has a certain disease. But researchers still have only a limited understanding of how these models work. Whether a given model is optimal for a certain task remains an open question.

    MIT researchers have found some answers. They conducted an analysis of neural networks and proved that they can be designed so they are “optimal,” meaning they minimize the probability of misclassifying borrowers or patients into the wrong category when the networks are given a lot of labeled training data. To achieve optimality, these networks must be built with a specific architecture.

    The researchers discovered that, in certain situations, the building blocks that enable a neural network to be optimal are not the ones developers use in practice. These optimal building blocks, derived through the new analysis, are unconventional and haven’t been considered before, the researchers say.

    In a paper published this week in the Proceedings of the National Academy of Sciences, they describe these optimal building blocks, called activation functions, and show how they can be used to design neural networks that achieve better performance on any dataset. The results hold even as the neural networks grow very large. This work could help developers select the correct activation function, enabling them to build neural networks that classify data more accurately in a wide range of application areas, explains senior author Caroline Uhler, a professor in the Department of Electrical Engineering and Computer Science (EECS).

    “While these are new activation functions that have never been used before, they are simple functions that someone could actually implement for a particular problem. This work really shows the importance of having theoretical proofs. If you go after a principled understanding of these models, that can actually lead you to new activation functions that you would otherwise never have thought of,” says Uhler, who is also co-director of the Eric and Wendy Schmidt Center at the Broad Institute of MIT and Harvard, and a researcher at MIT’s Laboratory for Information and Decision Systems (LIDS) and its Institute for Data, Systems and Society (IDSS).

    Joining Uhler on the paper are lead author Adityanarayanan Radhakrishnan, an EECS graduate student and an Eric and Wendy Schmidt Center Fellow, and Mikhail Belkin, a professor in the Halicioğlu Data Science Institute at the University of California at San Diego.

    Activation investigation

    A neural network is a type of machine-learning model that is loosely based on the human brain. Many layers of interconnected nodes, or neurons, process data. Researchers train a network to complete a task by showing it millions of examples from a dataset.

    For instance, a network that has been trained to classify images into categories, say dogs and cats, is given an image that has been encoded as numbers. The network performs a series of complex multiplication operations, layer by layer, until the result is just one number. If that number is positive, the network classifies the image as a dog, and if it is negative, as a cat.

    Activation functions help the network learn complex patterns in the input data. They do this by applying a transformation to the output of one layer before data are sent to the next layer. When researchers build a neural network, they select one activation function to use. They also choose the width of the network (how many neurons are in each layer) and the depth (how many layers are in the network).
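    These three choices can be made concrete in a toy forward pass. The sketch below is illustrative only, not the architecture analyzed in the paper: it wires up a small network with a chosen activation, width, and depth and applies the sign rule described above.

    ```python
    # Toy forward pass showing the three design choices the article mentions:
    # an activation function, a width (neurons per layer), and a depth (layers).
    import numpy as np

    def mlp_forward(x, width=16, depth=3, activation=np.tanh, seed=0):
        rng = np.random.default_rng(seed)
        h = x
        for _ in range(depth):
            W = rng.normal(scale=1.0 / np.sqrt(h.shape[-1]), size=(h.shape[-1], width))
            h = activation(h @ W)                 # transformation applied between layers
        w_out = rng.normal(scale=1.0 / np.sqrt(width), size=(width,))
        return h @ w_out                          # one number; its sign gives the class

    x = np.array([0.2, -1.3, 0.7])
    print("dog" if mlp_forward(x) > 0 else "cat")
    ```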

    “It turns out that, if you take the standard activation functions that people use in practice, and keep increasing the depth of the network, it gives you really terrible performance. We show that if you design with different activation functions, as you get more data, your network will get better and better,” says Radhakrishnan.

    He and his collaborators studied a situation in which a neural network is infinitely deep and wide — which means the network is built by continually adding more layers and more nodes — and is trained to perform classification tasks. In classification, the network learns to place data inputs into separate categories.

    “A clean picture”

    After conducting a detailed analysis, the researchers determined that there are only three ways this kind of network can learn to classify inputs. One method classifies an input based on the majority of inputs in the training data; if there are more dogs than cats, it will decide every new input is a dog. Another method classifies by choosing the label (dog or cat) of the training data point that most resembles the new input.

    The third method classifies a new input based on a weighted average of all the training data points that are similar to it. Their analysis shows that this is the only method of the three that leads to optimal performance. They identified a set of activation functions that always use this optimal classification method.
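    The three behaviors can be sketched directly on toy data. Everything below is hypothetical (random points and a Gaussian-kernel weighting chosen for illustration); it simply contrasts a majority-vote rule, a nearest-neighbor rule, and the similarity-weighted average the analysis identifies as optimal.

    ```python
    # Sketch of the three limiting behaviors described above (illustrative only):
    # majority vote, nearest neighbor, and a similarity-weighted average.
    import numpy as np

    rng = np.random.default_rng(4)
    X_train = rng.normal(size=(200, 2))
    y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(float)   # 1 = dog, 0 = cat
    x_new = np.array([0.4, -0.1])

    # 1) Majority: label every new input with the most common training label.
    majority = float(y_train.mean() > 0.5)

    # 2) Nearest neighbor: copy the label of the most similar training point.
    nearest = y_train[np.argmin(np.linalg.norm(X_train - x_new, axis=1))]

    # 3) Weighted average over similar training points (Gaussian kernel weights).
    weights = np.exp(-np.linalg.norm(X_train - x_new, axis=1) ** 2 / 0.5)
    weighted = float((weights @ y_train) / weights.sum() > 0.5)

    print(f"majority: {majority}, nearest neighbor: {nearest}, weighted average: {weighted}")
    ```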

    “That was one of the most surprising things — no matter what you choose for an activation function, it is just going to be one of these three classifiers. We have formulas that will tell you explicitly which of these three it is going to be. It is a very clean picture,” he says.

    They tested this theory on several classification benchmarking tasks and found that it led to improved performance in many cases. Neural network builders could use their formulas to select an activation function that yields improved classification performance, Radhakrishnan says.

    In the future, the researchers want to use what they’ve learned to analyze situations where they have a limited amount of data and for networks that are not infinitely wide or deep. They also want to apply this analysis to situations where data do not have labels.

    “In deep learning, we want to build theoretically grounded models so we can reliably deploy them in some mission-critical setting. This is a promising approach at getting toward something like that — building architectures in a theoretically grounded way that translates into better results in practice,” he says.

    This work was supported, in part, by the National Science Foundation, Office of Naval Research, the MIT-IBM Watson AI Lab, the Eric and Wendy Schmidt Center at the Broad Institute, and a Simons Investigator Award.