More stories

  • Roadmap details how to improve exoplanet exploration using the JWST

    The launch of NASA’s James Webb Space Telescope (JWST) in 2021 kicked off an exciting new era for exoplanet research, especially for scientists looking at terrestrial planets orbiting stars other than our sun. But three years into the telescope’s mission, some scientists have run into challenges that have slowed down progress.

    In a recent paper published in Nature Astronomy, the TRAPPIST-1 JWST Community Initiative lays out a step-by-step roadmap to overcome the challenges they faced while studying the TRAPPIST-1 system by improving the efficiency of data gathering to benefit the astronomy community at large.

    “A whole community of experts came together to tackle these complex cross-disciplinary challenges to design the first multiyear observational strategy to give JWST a fighting chance at identifying habitable worlds over its lifetime,” says Julien de Wit, an associate professor in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS) and one of the lead authors of the paper.

    Two-for-one deal

    Located 41 light years from Earth, the TRAPPIST-1 system with its seven planets presents a unique opportunity to study a large system with multiple planets of different compositions, similar to our own solar system.

    “It’s a dream target: You have not one, but maybe three, planets in the habitable zone, so you have a way to actually compare in the same system,” says René Doyon from the Université de Montréal, who co-led the study with de Wit. “There are only a handful of well-characterized temperate rocky planets for which we can hope to detect their atmosphere, and most of them are within the TRAPPIST-1 system.”

    Astronomers like de Wit and Doyon study exoplanet atmospheres through a technique called transmission spectroscopy, where they look at the way starlight passes through a planet’s potential atmosphere to see what elements are present. Transmission spectra are collected when the planet passes in front of its host star.

    The planets within the TRAPPIST system have short orbital periods. As a result, their transits frequently overlap. Transit observation times are usually allotted in five-hour windows, and when scheduled properly, close to half of these can catch at least two transits. This “two-for-one” saves both time and money while doubling data collection.

    Stellar contamination

    Stars are not uniform; their surfaces can vary in temperature, creating spots that can be hotter or cooler. Molecules like water vapor can condense in cool spots and interfere with transmission spectra. Stellar information like this can be difficult to disentangle from the planetary signal and give false indications of a planet’s atmospheric composition, creating what’s known as “stellar contamination.” While it has often been ignored, the improved capabilities of the JWST have revealed the challenges stellar contamination introduces when studying planetary atmospheres.

    EAPS research scientist Ben Rackham ran into these challenges when they derailed his initial PhD research on small exoplanets using the Magellan Telescopes in Chile. He’s now seeing the same problem he first encountered as a graduate student repeating itself with the new JWST data.

    “As we predicted from that earlier work with data from ground-based telescopes, the very first spectral signatures we’re getting with JWST don’t really make any sense in terms of a planetary interpretation,” he says. “The features are not what we would expect to see, and they change from transit to transit.”

    Rackham and David Berardo, a postdoc in EAPS, have been working with de Wit on ways to correct for stellar contamination using two different methods: improving models of stellar spectra and using direct observations to derive corrections.

    “By observing a star as it rotates, we can use the sensitivity of JWST to get a clearer picture of what its surface looks like, allowing for a more accurate measuring of the atmosphere of planets that transit it,” says Berardo. This, combined with studying back-to-back transits as proposed in the roadmap, collects useful data on the star that can be used to filter out stellar contamination from both future studies and past ones.

    Beyond TRAPPIST-1

    The current roadmap was born from the efforts of the TRAPPIST JWST Community Initiative to bring together separate programs focused on individual planets, which prevented them from leveraging the optimal transit observation windows.

    “We understood early on that this effort would ‘take a village’ to avoid the efficiency traps of small observation programs,” says de Wit. “Our hope now is that a large-scale community effort guided by the roadmap can be initiated to yield deliverables at a timely pace.” De Wit hopes that it could result in identifying habitable, or inhabitable, worlds around TRAPPIST-1 within a decade.

    Both de Wit and Doyon believe that the TRAPPIST-1 system is the best place for conducting fundamental research on exoplanet atmospheres that will extend to studies in other systems. Doyon thinks that “the TRAPPIST-1 system will be useful not only for TRAPPIST-1 itself, but also to learn how to do very precise correction of stellar activity which will be beneficial to many other transmission spectroscopy programs also affected by stellar activity.”

    “We have within reach fundamental and transforming answers with a clear roadmap to them,” says de Wit. “We just need to follow it diligently.”
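    To make the “stellar contamination” problem concrete, here is a minimal Python sketch (not from the paper) of how unocculted starspots can bias a measured transit depth; the planet, star, and spot parameters are illustrative placeholders, not TRAPPIST-1 values.

    ```python
    # Illustrative sketch: how unocculted starspots bias a measured transit depth,
    # the core of the "stellar contamination" problem. Numbers are placeholders.

    def true_transit_depth(planet_radius, star_radius):
        """Geometric transit depth: fraction of starlight blocked by the planet."""
        return (planet_radius / star_radius) ** 2

    def contaminated_depth(true_depth, spot_fraction, spot_to_photosphere_flux):
        """Apparent depth when part of the visible disk is covered by spots the
        planet does not occult (the so-called transit light source effect)."""
        epsilon = 1.0 / (1.0 - spot_fraction * (1.0 - spot_to_photosphere_flux))
        return true_depth * epsilon

    # Placeholder values loosely in the range of an Earth-sized planet around an M dwarf.
    depth = true_transit_depth(planet_radius=1.0, star_radius=13.0)      # ~0.6% depth
    biased = contaminated_depth(depth, spot_fraction=0.05,
                                spot_to_photosphere_flux=0.4)            # cool spots are dimmer
    print(f"true depth:   {depth:.5f}")
    print(f"biased depth: {biased:.5f} ({100 * (biased / depth - 1):.1f}% overestimate)")
    ```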

  • Study: When allocating scarce resources with AI, randomization can improve fairness

    Organizations are increasingly utilizing machine-learning models to allocate scarce resources or opportunities. For instance, such models can help companies screen resumes to choose job interview candidates or aid hospitals in ranking kidney transplant patients based on their likelihood of survival.

    When deploying a model, users typically strive to ensure its predictions are fair by reducing bias. This often involves techniques like adjusting the features a model uses to make decisions or calibrating the scores it generates.

    However, researchers from MIT and Northeastern University argue that these fairness methods are not sufficient to address structural injustices and inherent uncertainties. In a new paper, they show how randomizing a model’s decisions in a structured way can improve fairness in certain situations.

    For example, if multiple companies use the same machine-learning model to rank job interview candidates deterministically — without any randomization — then one deserving individual could be the bottom-ranked candidate for every job, perhaps due to how the model weighs answers provided in an online form. Introducing randomization into a model’s decisions could prevent one worthy person or group from always being denied a scarce resource, like a job interview.

    Through their analysis, the researchers found that randomization can be especially beneficial when a model’s decisions involve uncertainty or when the same group consistently receives negative decisions.

    They present a framework one could use to introduce a specific amount of randomization into a model’s decisions by allocating resources through a weighted lottery. This method, which an individual can tailor to fit their situation, can improve fairness without hurting the efficiency or accuracy of a model.

    “Even if you could make fair predictions, should you be deciding these social allocations of scarce resources or opportunities strictly off scores or rankings? As things scale, and we see more and more opportunities being decided by these algorithms, the inherent uncertainties in these scores can be amplified. We show that fairness may require some sort of randomization,” says Shomik Jain, a graduate student in the Institute for Data, Systems, and Society (IDSS) and lead author of the paper.

    Jain is joined on the paper by Kathleen Creel, assistant professor of philosophy and computer science at Northeastern University; and senior author Ashia Wilson, the Lister Brothers Career Development Professor in the Department of Electrical Engineering and Computer Science and a principal investigator in the Laboratory for Information and Decision Systems (LIDS). The research will be presented at the International Conference on Machine Learning.

    Considering claims

    This work builds off a previous paper in which the researchers explored harms that can occur when one uses deterministic systems at scale. They found that using a machine-learning model to deterministically allocate resources can amplify inequalities that exist in training data, which can reinforce bias and systemic inequality.

    “Randomization is a very useful concept in statistics, and to our delight, satisfies the fairness demands coming from both a systemic and individual point of view,” Wilson says.

    In this paper, they explored the question of when randomization can improve fairness. They framed their analysis around the ideas of philosopher John Broome, who wrote about the value of using lotteries to award scarce resources in a way that honors all claims of individuals.

    A person’s claim to a scarce resource, like a kidney transplant, can stem from merit, deservingness, or need. For instance, everyone has a right to life, and their claims on a kidney transplant may stem from that right, Wilson explains.

    “When you acknowledge that people have different claims to these scarce resources, fairness is going to require that we respect all claims of individuals. If we always give someone with a stronger claim the resource, is that fair?” Jain says.

    That sort of deterministic allocation could cause systemic exclusion or exacerbate patterned inequality, which occurs when receiving one allocation increases an individual’s likelihood of receiving future allocations. In addition, machine-learning models can make mistakes, and a deterministic approach could cause the same mistake to be repeated.

    Randomization can overcome these problems, but that doesn’t mean all decisions a model makes should be randomized equally.

    Structured randomization

    The researchers use a weighted lottery to adjust the level of randomization based on the amount of uncertainty involved in the model’s decision-making. A decision that is less certain should incorporate more randomization.

    “In kidney allocation, usually the planning is around projected lifespan, and that is deeply uncertain. If two patients are only five years apart, it becomes a lot harder to measure. We want to leverage that level of uncertainty to tailor the randomization,” Wilson says.

    The researchers used statistical uncertainty quantification methods to determine how much randomization is needed in different situations. They show that calibrated randomization can lead to fairer outcomes for individuals without significantly affecting the utility, or effectiveness, of the model.

    “There is a balance to be had between overall utility and respecting the rights of the individuals who are receiving a scarce resource, but oftentimes the tradeoff is relatively small,” says Wilson.

    However, the researchers emphasize there are situations where randomizing decisions would not improve fairness and could harm individuals, such as in criminal justice contexts.

    But there could be other areas where randomization can improve fairness, such as college admissions, and the researchers plan to study other use cases in future work. They also want to explore how randomization can affect other factors, such as competition or prices, and how it could be used to improve the robustness of machine-learning models.

    “We are hoping our paper is a first move toward illustrating that there might be a benefit to randomization. We are offering randomization as a tool. How much you are going to want to do it is going to be up to all the stakeholders in the allocation to decide. And, of course, how they decide is another research question all together,” says Wilson.
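    To make the idea concrete, here is a minimal Python sketch of a weighted lottery of the general kind described above; the scores and the mapping from uncertainty to randomization strength are illustrative assumptions, not the calibration proposed in the paper.

    ```python
    # Illustrative sketch of a weighted lottery for allocating k slots, not the
    # paper's exact method. Instead of always taking the top-k scores, winners are
    # sampled with probabilities tied to the scores, and the weights are softened
    # more when the scores are more uncertain.
    import numpy as np

    def weighted_lottery(scores, uncertainty, k, rng=None):
        """Pick k winners. `uncertainty` in [0, 1]: 0 -> nearly deterministic top-k,
        1 -> close to a uniform lottery. The mapping from uncertainty to sharpness
        is an assumption made for illustration."""
        rng = rng or np.random.default_rng()
        sharpness = (1.0 - uncertainty) * 10.0                 # assumed mapping
        weights = np.exp(sharpness * (scores - scores.max()))
        probs = weights / weights.sum()
        return rng.choice(len(scores), size=k, replace=False, p=probs)

    scores = np.array([0.91, 0.88, 0.87, 0.55, 0.52])          # model scores for 5 candidates
    print(weighted_lottery(scores, uncertainty=0.1, k=2))      # usually the top two
    print(weighted_lottery(scores, uncertainty=0.8, k=2))      # close candidates often swap in
    ```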

  • AI model identifies certain breast tumor stages likely to progress to invasive cancer

    Ductal carcinoma in situ (DCIS) is a type of preinvasive tumor that sometimes progresses to a highly deadly form of breast cancer. It accounts for about 25 percent of all breast cancer diagnoses.

    Because it is difficult for clinicians to determine the type and stage of DCIS, patients with DCIS are often overtreated. To address this, an interdisciplinary team of researchers from MIT and ETH Zurich developed an AI model that can identify the different stages of DCIS from a cheap and easy-to-obtain breast tissue image. Their model shows that both the state and arrangement of cells in a tissue sample are important for determining the stage of DCIS.

    Because such tissue images are so easy to obtain, the researchers were able to build one of the largest datasets of its kind, which they used to train and test their model. When they compared its predictions to the conclusions of a pathologist, they found clear agreement in many instances.

    In the future, the model could be used as a tool to help clinicians streamline the diagnosis of simpler cases without the need for labor-intensive tests, giving them more time to evaluate cases where it is less clear if DCIS will become invasive.

    “We took the first step in understanding that we should be looking at the spatial organization of cells when diagnosing DCIS, and now we have developed a technique that is scalable. From here, we really need a prospective study. Working with a hospital and getting this all the way to the clinic will be an important step forward,” says Caroline Uhler, a professor in the Department of Electrical Engineering and Computer Science (EECS) and the Institute for Data, Systems, and Society (IDSS), who is also director of the Eric and Wendy Schmidt Center at the Broad Institute of MIT and Harvard and a researcher at MIT’s Laboratory for Information and Decision Systems (LIDS).

    Uhler, co-corresponding author of a paper on this research, is joined by lead author Xinyi Zhang, a graduate student in EECS and the Eric and Wendy Schmidt Center; co-corresponding author GV Shivashankar, professor of mechano-genomics at ETH Zurich jointly with the Paul Scherrer Institute; and others at MIT, ETH Zurich, and the University of Palermo in Italy. The open-access research was published July 20 in Nature Communications.

    Combining imaging with AI

    Between 30 and 50 percent of patients with DCIS develop a highly invasive stage of cancer, but researchers don’t know the biomarkers that could tell a clinician which tumors will progress.

    Researchers can use techniques like multiplexed staining or single-cell RNA sequencing to determine the stage of DCIS in tissue samples. However, these tests are too expensive to be performed widely, Shivashankar explains.

    In previous work, these researchers showed that a cheap imaging technique known as chromatin staining could be as informative as the much costlier single-cell RNA sequencing.

    For this research, they hypothesized that combining this single stain with a carefully designed machine-learning model could provide the same information about cancer stage as costlier techniques.

    First, they created a dataset containing 560 tissue sample images from 122 patients at three different stages of disease. They used this dataset to train an AI model that learns a representation of the state of each cell in a tissue sample image, which it uses to infer the stage of a patient’s cancer.

    However, not every cell is indicative of cancer, so the researchers had to aggregate them in a meaningful way.

    They designed the model to create clusters of cells in similar states, identifying eight states that are important markers of DCIS. Some cell states are more indicative of invasive cancer than others. The model determines the proportion of cells in each state in a tissue sample.

    Organization matters

    “But in cancer, the organization of cells also changes. We found that just having the proportions of cells in every state is not enough. You also need to understand how the cells are organized,” says Shivashankar.

    With this insight, they designed the model to consider the proportion and arrangement of cell states, which significantly boosted its accuracy.

    “The interesting thing for us was seeing how much spatial organization matters. Previous studies had shown that cells which are close to the breast duct are important. But it is also important to consider which cells are close to which other cells,” says Zhang.

    When they compared the results of their model with samples evaluated by a pathologist, it had clear agreement in many instances. In cases that were not as clear-cut, the model could provide information about features in a tissue sample, like the organization of cells, that a pathologist could use in decision-making.

    This versatile model could also be adapted for use in other types of cancer, or even neurodegenerative conditions, which is one area the researchers are also currently exploring.

    “We have shown that, with the right AI techniques, this simple stain can be very powerful. There is still much more research to do, but we need to take the organization of cells into account in more of our studies,” Uhler says.

    This research was funded, in part, by the Eric and Wendy Schmidt Center at the Broad Institute, ETH Zurich, the Paul Scherrer Institute, the Swiss National Science Foundation, the U.S. National Institutes of Health, the U.S. Office of Naval Research, the MIT Jameel Clinic for Machine Learning and Health, the MIT-IBM Watson AI Lab, and a Simons Investigator Award.
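    As a rough illustration of the “proportion plus arrangement” idea (not the published model), the sketch below clusters per-cell embeddings into discrete states and builds both state proportions and a simple neighbor co-occurrence feature for one tissue sample; the array shapes, clustering method, and neighbor count are assumptions.

    ```python
    # Illustrative sketch: turn per-cell embeddings into (a) the proportion of cells
    # in each state and (b) a spatial feature capturing which states neighbor which.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.neighbors import NearestNeighbors

    def sample_features(cell_embeddings, cell_positions, n_states=8, n_neighbors=5):
        # 1. Cluster cells into discrete "states".
        states = KMeans(n_clusters=n_states, n_init=10).fit_predict(cell_embeddings)

        # 2. Proportion of cells in each state.
        proportions = np.bincount(states, minlength=n_states) / len(states)

        # 3. Arrangement: how often each state pair appears as spatial neighbors.
        nbrs = NearestNeighbors(n_neighbors=n_neighbors + 1).fit(cell_positions)
        _, idx = nbrs.kneighbors(cell_positions)
        cooccur = np.zeros((n_states, n_states))
        for i, neighbors in enumerate(idx):
            for j in neighbors[1:]:                  # skip the cell itself
                cooccur[states[i], states[j]] += 1
        cooccur /= cooccur.sum()

        return np.concatenate([proportions, cooccur.ravel()])

    # Toy usage with random data; a downstream classifier could map these
    # features to a DCIS stage.
    emb = np.random.randn(500, 32)         # e.g., 500 cells with 32-dim embeddings
    pos = np.random.rand(500, 2)           # x, y coordinates in the tissue image
    features = sample_features(emb, pos)   # length 8 + 64 = 72
    ```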

  • How to assess a general-purpose AI model’s reliability before it’s deployed

    Foundation models are massive deep-learning models that have been pretrained on an enormous amount of general-purpose, unlabeled data. They can be applied to a variety of tasks, like generating images or answering customer questions.

    But these models, which serve as the backbone for powerful artificial intelligence tools like ChatGPT and DALL-E, can offer up incorrect or misleading information. In a safety-critical situation, such as a pedestrian approaching a self-driving car, these mistakes could have serious consequences.

    To help prevent such mistakes, researchers from MIT and the MIT-IBM Watson AI Lab developed a technique to estimate the reliability of foundation models before they are deployed to a specific task.

    They do this by considering a set of foundation models that are slightly different from one another. Then they use their algorithm to assess the consistency of the representations each model learns about the same test data point. If the representations are consistent, it means the model is reliable.

    When they compared their technique to state-of-the-art baseline methods, it was better at capturing the reliability of foundation models on a variety of downstream classification tasks.

    Someone could use this technique to decide if a model should be applied in a certain setting, without the need to test it on a real-world dataset. This could be especially useful when datasets may not be accessible due to privacy concerns, like in health care settings. In addition, the technique could be used to rank models based on reliability scores, enabling a user to select the best one for their task.

    “All models can be wrong, but models that know when they are wrong are more useful. The problem of quantifying uncertainty or reliability is more challenging for these foundation models because their abstract representations are difficult to compare. Our method allows one to quantify how reliable a representation model is for any given input data,” says senior author Navid Azizan, the Esther and Harold E. Edgerton Assistant Professor in the MIT Department of Mechanical Engineering and the Institute for Data, Systems, and Society (IDSS), and a member of the Laboratory for Information and Decision Systems (LIDS).

    He is joined on a paper about the work by lead author Young-Jin Park, a LIDS graduate student; Hao Wang, a research scientist at the MIT-IBM Watson AI Lab; and Shervin Ardeshir, a senior research scientist at Netflix. The paper will be presented at the Conference on Uncertainty in Artificial Intelligence.

    Measuring consensus

    Traditional machine-learning models are trained to perform a specific task. These models typically make a concrete prediction based on an input. For instance, the model might tell you whether a certain image contains a cat or a dog. In this case, assessing reliability could be a matter of looking at the final prediction to see if the model is right.

    But foundation models are different. The model is pretrained using general data, in a setting where its creators don’t know all downstream tasks it will be applied to. Users adapt it to their specific tasks after it has already been trained.

    Unlike traditional machine-learning models, foundation models don’t give concrete outputs like “cat” or “dog” labels. Instead, they generate an abstract representation based on an input data point.

    To assess the reliability of a foundation model, the researchers used an ensemble approach by training several models which share many properties but are slightly different from one another.

    “Our idea is like measuring the consensus. If all those foundation models are giving consistent representations for any data in our dataset, then we can say this model is reliable,” Park says.

    But they ran into a problem: How could they compare abstract representations?

    “These models just output a vector, comprised of some numbers, so we can’t compare them easily,” he adds.

    They solved this problem using an idea called neighborhood consistency.

    For their approach, the researchers prepare a set of reliable reference points to test on the ensemble of models. Then, for each model, they investigate the reference points located near that model’s representation of the test point.

    By looking at the consistency of neighboring points, they can estimate the reliability of the models.

    Aligning the representations

    Foundation models map data points to what is known as a representation space. One way to think about this space is as a sphere. Each model maps similar data points to the same part of its sphere, so images of cats go in one place and images of dogs go in another.

    But each model would map animals differently in its own sphere, so while cats may be grouped near the South Pole of one sphere, another model could map cats somewhere in the Northern Hemisphere.

    The researchers use the neighboring points like anchors to align those spheres so they can make the representations comparable. If a data point’s neighbors are consistent across multiple representations, then one should be confident about the reliability of the model’s output for that point.

    When they tested this approach on a wide range of classification tasks, they found that it was much more consistent than baselines. Plus, it wasn’t tripped up by challenging test points that caused other methods to fail.

    Moreover, their approach can be used to assess reliability for any input data, so one could evaluate how well a model works for a particular type of individual, such as a patient with certain characteristics.

    “Even if the models all have average performance overall, from an individual point of view, you’d prefer the one that works best for that individual,” Wang says.

    However, one limitation comes from the fact that they must train an ensemble of foundation models, which is computationally expensive. In the future, they plan to find more efficient ways to build multiple models, perhaps by using small perturbations of a single model.

    “With the current trend of using foundational models for their embeddings to support various downstream tasks — from fine-tuning to retrieval augmented generation — the topic of quantifying uncertainty at the representation level is increasingly important, but challenging, as embeddings on their own have no grounding. What matters instead is how embeddings of different inputs are related to one another, an idea that this work neatly captures through the proposed neighborhood consistency score,” says Marco Pavone, an associate professor in the Department of Aeronautics and Astronautics at Stanford University, who was not involved with this work. “This is a promising step towards high quality uncertainty quantifications for embedding models, and I’m excited to see future extensions which can operate without requiring model-ensembling to really enable this approach to scale to foundation-size models.”

    This work is funded, in part, by the MIT-IBM Watson AI Lab, MathWorks, and Amazon.
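    The sketch below illustrates the neighborhood-consistency idea in Python: embed a test point and a set of reference points with each model in an ensemble, then score how much the models’ neighbor sets overlap. It is a hedged illustration of the concept, not the paper’s exact score; the Jaccard overlap and the random-projection “models” in the usage example are assumptions.

    ```python
    # Illustrative sketch of a neighborhood-consistency check. Each "model" is a
    # function mapping an input to an embedding vector; we compare which reference
    # points land near the test point in each model's space and score the overlap.
    import numpy as np

    def neighbor_set(embed, test_x, reference_xs, k=10):
        """Indices of the k reference points closest to test_x in this model's space."""
        test_z = embed(test_x)
        ref_z = np.stack([embed(x) for x in reference_xs])
        dists = np.linalg.norm(ref_z - test_z, axis=1)
        return set(np.argsort(dists)[:k])

    def neighborhood_consistency(models, test_x, reference_xs, k=10):
        """Average pairwise Jaccard overlap of neighbor sets across the ensemble.
        Higher values suggest the models agree on what the test point is 'like'."""
        sets = [neighbor_set(m, test_x, reference_xs, k) for m in models]
        overlaps = [len(a & b) / len(a | b)
                    for i, a in enumerate(sets) for b in sets[i + 1:]]
        return float(np.mean(overlaps))

    # Toy usage: three "models" that are random linear projections of the input.
    rng = np.random.default_rng(0)
    projections = [rng.standard_normal((8, 16)) for _ in range(3)]
    models = [lambda x, W=W: W @ x for W in projections]
    refs = [rng.standard_normal(16) for _ in range(200)]
    print(neighborhood_consistency(models, rng.standard_normal(16), refs))
    ```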

  • Machine learning and the microscope

    With recent advances in imaging, genomics and other technologies, the life sciences are awash in data. If a biologist is studying cells taken from the brain tissue of Alzheimer’s patients, for example, there could be any number of characteristics they want to investigate — a cell’s type, the genes it’s expressing, its location within the tissue, or more. However, while cells can now be probed experimentally using different kinds of measurements simultaneously, when it comes to analyzing the data, scientists usually can only work with one type of measurement at a time.

    Working with “multimodal” data, as it’s called, requires new computational tools, which is where Xinyi Zhang comes in.

    The fourth-year MIT PhD student is bridging machine learning and biology to understand fundamental biological principles, especially in areas where conventional methods have hit limitations. Working in the lab of MIT Professor Caroline Uhler in the Department of Electrical Engineering and Computer Science, the Laboratory for Information and Decision Systems, and the Institute for Data, Systems, and Society, and collaborating with researchers at the Eric and Wendy Schmidt Center at the Broad Institute and elsewhere, Zhang has led multiple efforts to build computational frameworks and principles for understanding the regulatory mechanisms of cells.

    “All of these are small steps toward the end goal of trying to answer how cells work, how tissues and organs work, why they have disease, and why they can sometimes be cured and sometimes not,” Zhang says.

    The activities Zhang pursues in her down time are no less ambitious. The list of hobbies she has taken up at the Institute includes sailing, skiing, ice skating, rock climbing, performing with MIT’s Concert Choir, and flying single-engine planes. (She earned her pilot’s license in November 2022.)

    “I guess I like to go to places I’ve never been and do things I haven’t done before,” she says with signature understatement.

    Uhler, her advisor, says that Zhang’s quiet humility leads to a surprise “in every conversation.”

    “Every time, you learn something like, ‘Okay, so now she’s learning to fly,’” Uhler says. “It’s just amazing. Anything she does, she does for the right reasons. She wants to be good at the things she cares about, which I think is really exciting.”

    Zhang first became interested in biology as a high school student in Hangzhou, China. She liked that her teachers couldn’t answer her questions in biology class, which led her to see it as the “most interesting” topic to study.

    Her interest in biology eventually turned into an interest in bioengineering. After her parents, who were middle school teachers, suggested studying in the United States, she majored in the latter alongside electrical engineering and computer science as an undergraduate at the University of California at Berkeley.

    Zhang was ready to dive straight into MIT’s EECS PhD program after graduating in 2020, but the Covid-19 pandemic delayed her first year. Despite that, in December 2022, she, Uhler, and two other co-authors published a paper in Nature Communications.

    The groundwork for the paper was laid by Xiao Wang, one of the co-authors. She had previously done work with the Broad Institute in developing a form of spatial cell analysis that combined multiple forms of cell imaging and gene expression for the same cell while also mapping out the cell’s place in the tissue sample it came from — something that had never been done before.

    This innovation had many potential applications, including enabling new ways of tracking the progression of various diseases, but there was no way to analyze all the multimodal data the method produced. In came Zhang, who became interested in designing a computational method that could.

    The team focused on chromatin staining as their imaging method of choice, which is relatively cheap but still reveals a great deal of information about cells. The next step was integrating the spatial analysis techniques developed by Wang, and to do that, Zhang began designing an autoencoder.

    Autoencoders are a type of neural network that typically encodes and shrinks large amounts of high-dimensional data, then expands the transformed data back to its original size. In this case, Zhang’s autoencoder did the reverse, taking the input data and making it higher-dimensional. This allowed them to combine data from different animals and remove technical variations that were not due to meaningful biological differences.

    In the paper, they used this technology, abbreviated as STACI, to identify how cells and tissues reveal the progression of Alzheimer’s disease when observed under a number of spatial and imaging techniques. The model can also be used to analyze any number of diseases, Zhang says.

    Given unlimited time and resources, her dream would be to build a fully complete model of human life. Unfortunately, both time and resources are limited. Her ambition isn’t, however, and she says she wants to keep applying her skills to solve the “most challenging questions that we don’t have the tools to answer.”

    She’s currently working on wrapping up a couple of projects, one focused on studying neurodegeneration by analyzing frontal cortex imaging and another on predicting protein images from protein sequences and chromatin imaging.

    “There are still many unanswered questions,” she says. “I want to pick questions that are biologically meaningful, that help us understand things we didn’t know before.”
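    For readers curious what an “expanding” autoencoder looks like in code, here is a minimal PyTorch sketch in which the encoder maps inputs to a higher-dimensional representation before reconstruction; it is an illustration of the general idea, not the STACI architecture, and all layer sizes are placeholders.

    ```python
    # Minimal sketch of an autoencoder whose encoder *expands* rather than
    # compresses the input, as described above. Dimensions are placeholders.
    import torch
    from torch import nn

    class ExpandingAutoencoder(nn.Module):
        def __init__(self, in_dim=512, latent_dim=2048):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Linear(in_dim, 1024), nn.ReLU(),
                nn.Linear(1024, latent_dim),       # latent is larger than the input
            )
            self.decoder = nn.Sequential(
                nn.Linear(latent_dim, 1024), nn.ReLU(),
                nn.Linear(1024, in_dim),           # reconstruct the original features
            )

        def forward(self, x):
            z = self.encoder(x)
            return self.decoder(z), z

    model = ExpandingAutoencoder()
    x = torch.randn(8, 512)                         # a toy batch of cell features
    recon, latent = model(x)
    loss = nn.functional.mse_loss(recon, x)         # standard reconstruction objective
    ```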

  • When to trust an AI model

    Because machine-learning models can give false predictions, researchers often equip them with the ability to tell a user how confident they are about a certain decision. This is especially important in high-stakes settings, such as when models are used to help identify disease in medical images or filter job applications.

    But a model’s uncertainty quantifications are only useful if they are accurate. If a model says it is 49 percent confident that a medical image shows a pleural effusion, then 49 percent of the time, the model should be right.

    MIT researchers have introduced a new approach that can improve uncertainty estimates in machine-learning models. Their method not only generates more accurate uncertainty estimates than other techniques, but does so more efficiently.

    In addition, because the technique is scalable, it can be applied to huge deep-learning models that are increasingly being deployed in health care and other safety-critical situations.

    This technique could give end users, many of whom lack machine-learning expertise, better information they can use to determine whether to trust a model’s predictions or if the model should be deployed for a particular task.

    “It is easy to see these models perform really well in scenarios where they are very good, and then assume they will be just as good in other scenarios. This makes it especially important to push this kind of work that seeks to better calibrate the uncertainty of these models to make sure they align with human notions of uncertainty,” says lead author Nathan Ng, a graduate student at the University of Toronto who is a visiting student at MIT.

    Ng wrote the paper with Roger Grosse, an assistant professor of computer science at the University of Toronto; and senior author Marzyeh Ghassemi, an associate professor in the Department of Electrical Engineering and Computer Science and a member of the Institute for Medical Engineering and Science and the Laboratory for Information and Decision Systems. The research will be presented at the International Conference on Machine Learning.

    Quantifying uncertainty

    Uncertainty quantification methods often require complex statistical calculations that don’t scale well to machine-learning models with millions of parameters. These methods also require users to make assumptions about the model and data used to train it.

    The MIT researchers took a different approach. They use what is known as the minimum description length principle (MDL), which does not require the assumptions that can hamper the accuracy of other methods. MDL is used to better quantify and calibrate uncertainty for test points the model has been asked to label.

    The technique the researchers developed, known as IF-COMP, makes MDL fast enough to use with the kinds of large deep-learning models deployed in many real-world settings.

    MDL involves considering all possible labels a model could give a test point. If there are many alternative labels for this point that fit well, its confidence in the label it chose should decrease accordingly.

    “One way to understand how confident a model is would be to tell it some counterfactual information and see how likely it is to believe you,” Ng says.

    For example, consider a model that says a medical image shows a pleural effusion. If the researchers tell the model this image shows an edema, and it is willing to update its belief, then the model should be less confident in its original decision.

    With MDL, if a model is confident when it labels a datapoint, it should use a very short code to describe that point. If it is uncertain about its decision because the point could have many other labels, it uses a longer code to capture these possibilities.

    The amount of code used to label a datapoint is known as stochastic data complexity. If the researchers ask the model how willing it is to update its belief about a datapoint given contrary evidence, the stochastic data complexity should decrease if the model is confident.

    But testing each datapoint using MDL would require an enormous amount of computation.

    Speeding up the process

    With IF-COMP, the researchers developed an approximation technique that can accurately estimate stochastic data complexity using a special function, known as an influence function. They also employed a statistical technique called temperature-scaling, which improves the calibration of the model’s outputs. This combination of influence functions and temperature-scaling enables high-quality approximations of the stochastic data complexity.

    In the end, IF-COMP can efficiently produce well-calibrated uncertainty quantifications that reflect a model’s true confidence. The technique can also determine whether the model has mislabeled certain data points or reveal which data points are outliers.

    The researchers tested their system on these three tasks and found that it was faster and more accurate than other methods.

    “It is really important to have some certainty that a model is well-calibrated, and there is a growing need to detect when a specific prediction doesn’t look quite right. Auditing tools are becoming more necessary in machine-learning problems as we use large amounts of unexamined data to make models that will be applied to human-facing problems,” Ghassemi says.

    IF-COMP is model-agnostic, so it can provide accurate uncertainty quantifications for many types of machine-learning models. This could enable it to be deployed in a wider range of real-world settings, ultimately helping more practitioners make better decisions.

    “People need to understand that these systems are very fallible and can make things up as they go. A model may look like it is highly confident, but there are a ton of different things it is willing to believe given evidence to the contrary,” Ng says.

    In the future, the researchers are interested in applying their approach to large language models and studying other potential use cases for the minimum description length principle.
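    The toy Python sketch below illustrates the MDL intuition described above, treating the negative log-probability a classifier assigns to a label as that label’s “code length,” with optional temperature scaling; it is not the IF-COMP method, and the logits are made-up examples.

    ```python
    # Illustrative sketch of the MDL intuition: confident points need short codes
    # for their chosen label; points with several plausible labels need longer ones.
    import numpy as np

    def code_lengths(logits, temperature=1.0):
        """Per-label code length in bits at one data point, after temperature scaling."""
        scaled = logits / temperature
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()
        return -np.log2(probs)

    confident = np.array([8.0, 0.5, 0.2])    # one label clearly preferred
    uncertain = np.array([2.1, 2.0, 1.9])    # several labels fit almost equally well

    print(code_lengths(confident).min())     # very short code for the chosen label
    print(code_lengths(uncertain).min())     # noticeably longer code -> lower confidence
    ```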

  • MIT ARCLab announces winners of inaugural Prize for AI Innovation in Space

    Satellite density in Earth’s orbit has increased exponentially in recent years, with lower costs of small satellites allowing governments, researchers, and private companies to launch some 2,877 satellites into orbit in 2023 alone. This includes increased geostationary Earth orbit (GEO) satellite activity, which brings technologies with global-scale impact, from broadband internet to climate surveillance. Along with the manifold benefits of these satellite-enabled technologies, however, come increased safety and security risks, as well as environmental concerns. More accurate and efficient methods of monitoring and modeling satellite behavior are urgently needed to prevent collisions and other disasters.

    To address this challenge, the MIT Astrodynamics, Space Robotics, and Controls Laboratory (ARCLab) launched the MIT ARCLab Prize for AI Innovation in Space: a first-of-its-kind competition asking contestants to harness AI to characterize satellites’ patterns of life (PoLs) — the long-term behavioral narrative of a satellite in orbit — using purely passively collected information. Following the call for participants last fall, 126 teams used machine learning to create algorithms to label and time-stamp the behavioral modes of GEO satellites over a six-month period, competing for accuracy and efficiency.

    With support from the U.S. Department of the Air Force-MIT AI Accelerator, the challenge offers a total of $25,000. A team of judges from ARCLab and MIT Lincoln Laboratory evaluated the submissions based on clarity, novelty, technical depth, and reproducibility, assigning each entry a score out of 100 points. Now the judges have announced the winners and runners-up:

    First prize: David Baldsiefen — Team Hawaii2024

    With a winning score of 96, Baldsiefen will be awarded $10,000 and is invited to join the ARCLab team in presenting at a poster session at the Advanced Maui Optical and Space Surveillance Technologies (AMOS) Conference in Hawaii this fall. One evaluator noted, “Clear and concise report, with very good ideas such as the label encoding of the localizer. Decisions on the architectures and the feature engineering are well reasoned. The code provided is also well documented and structured, allowing an easy reproducibility of the experimentation.”

    Second prize: Binh Tran, Christopher Yeung, Kurtis Johnson, Nathan Metzger — Team Millennial-IUP

    With a score of 94.2, Millennial-IUP will be awarded $5,000 and will also join the ARCLab team at the AMOS conference. One evaluator said, “The models chosen were sensible and justified, they made impressive efforts in efficiency gains… They used physics to inform their models and this appeared to be reproducible. Overall it was an easy to follow, concise report without much jargon.”

    Third prize: Isaac Haik and Francois Porcher — Team QR_Is

    With a score of 94, Haik and Porcher will share the third prize of $3,000 and will also be invited to the AMOS conference with the ARCLab team. One evaluator noted, “This informative and interesting report describes the combination of ML and signal processing techniques in a compelling way, assisted by informative plots, tables, and sequence diagrams. The author identifies and describes a modular approach to class detection and their assessment of feature utility, which they correctly identify is not evenly useful across classes… Any lack of mission expertise is made up for by a clear and detailed discussion of the benefits and pitfalls of the methods they used and discussion of what they learned.”

    The fourth- through seventh-place scoring teams will each receive $1,000 and a certificate of excellence.

    “The goal of this competition was to foster an interdisciplinary approach to problem-solving in the space domain by inviting AI development experts to apply their skills in this new context of orbital capacity. And all of our winning teams really delivered — they brought technical skill, novel approaches, and expertise to a very impressive round of submissions,” says Professor Richard Linares, who heads ARCLab.

    Active modeling with passive data

    Throughout a GEO satellite’s time in orbit, operators issue commands to place it in various behavioral modes — station-keeping, longitudinal shifts, end-of-life behaviors, and so on. Satellite patterns of life (PoLs) describe on-orbit behavior composed of sequences of both natural and non-natural behavior modes.

    ARCLab has developed a groundbreaking benchmarking tool for geosynchronous satellite pattern-of-life characterization and created the Satellite Pattern-of-Life Identification Dataset (SPLID), comprising real and synthetic space object data. The challenge participants used this tool to create algorithms that use AI to map out the on-orbit behaviors of a satellite.

    The goal of the MIT ARCLab Prize for AI Innovation in Space is to encourage technologists and enthusiasts to bring innovation and new skill sets to well-established challenges in aerospace. The team aims to hold the competition in 2025 and 2026 to explore other topics and invite experts in AI to apply their skills to new challenges.
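    As a toy illustration of pattern-of-life labeling (not the SPLID benchmark or any winning entry), the Python sketch below segments a synthetic GEO longitude history into windows and labels each as station-keeping or a longitudinal shift based on an assumed drift-rate threshold.

    ```python
    # Toy sketch of pattern-of-life labeling: flag windows where the longitude
    # drift rate exceeds a threshold as a "longitudinal shift", everything else as
    # "station-keeping". Thresholds, window size, and the synthetic data are
    # placeholder assumptions.
    import numpy as np

    def label_modes(times_hr, longitudes_deg, window=24, drift_threshold=0.01):
        """Return (window_start_time, label) pairs from hourly longitude samples."""
        labels = []
        for start in range(0, len(times_hr) - window, window):
            seg_t = times_hr[start:start + window]
            seg_lon = longitudes_deg[start:start + window]
            drift = np.polyfit(seg_t, seg_lon, 1)[0]   # deg per hour, linear fit
            mode = "longitudinal shift" if abs(drift) > drift_threshold else "station-keeping"
            labels.append((seg_t[0], mode))
        return labels

    # Synthetic example: a satellite holding station, then drifting to a new slot.
    t = np.arange(0, 24 * 20, 1.0)                     # 20 days, hourly samples
    lon = np.where(t < 240, 75.0 + 0.002 * np.sin(t / 12), 75.0 + 0.02 * (t - 240))
    print(label_modes(t, lon)[:12])
    ```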

  • Community members receive 2024 MIT Excellence Awards, Collier Medal, and Staff Award for Distinction in Service

    On Wednesday, June 5, 13 individuals and four teams were awarded MIT Excellence Awards — the highest awards for staff at the Institute. Colleagues holding signs, waving pompoms, and cheering gathered in Kresge Auditorium to show their support for the honorees. In addition to the Excellence Awards, staff members were honored with the Collier Medal, the Staff Award for Distinction in Service, and the Gordon Y. Billard Award.

    The Collier Medal honors the memory of Officer Sean Collier, who gave his life protecting and serving MIT; it celebrates an individual or group whose actions demonstrate the importance of community. The Staff Award for Distinction in Service is presented to a staff member whose service results in a positive lasting impact on the Institute.

    The Gordon Y. Billard Award is given annually to staff, faculty, or an MIT-affiliated individual(s) who has given “special service of outstanding merit performed for the Institute.” This year, for the first time, this award was presented at the MIT Excellence Awards and Collier Medal celebration.

    The 2024 MIT Excellence Award recipients and their award categories are:

    Innovative Solutions
    Nanotechnology Material Core Staff, Koch Institute for Integrative Cancer Research, Office of the Vice President for Research (Margaret Bisher, Giovanni de Nola, David Mankus, and Dong Soo Yun)

    Bringing Out the Best
    Salvatore Ieni
    James Kelsey
    Lauren Pouchak

    Serving Our Community
    Megan Chester
    Alessandra Davy-Falconi
    David Randall
    Days Weekend Team, Department of Custodial Services, Department of Facilities: Karen Melisa Betancourth, Ana Guerra Chavarria, Yeshi Khando, Joao Pacheco, and Kevin Salazar
    IMES/HST Academic Office Team, Institute for Medical Engineering and Science, School of Engineering: Traci Anderson, Joseph R. Stein, and Laurie Ward
    Team Leriche, Department of Custodial Services, Department of Facilities: Anthony Anzalone, David Solomon Carrasco, Larrenton Forrest, Michael Leriche, and Joe Vieira

    Embracing Diversity, Equity, and Inclusion
    Bhaskar Pant
    Jessica Tam

    Outstanding Contributor
    Paul W. Barone
    Marcia G. Davidson
    Steven Kooi
    Tianjiao Lei
    Andrew H. Mack


    The 2024 Collier Medal recipient was Benjamin B. Lewis, a graduate student in the Institute for Data, Systems, and Society in the MIT Schwarzman College of Computing. Last spring, he founded the Cambridge branch of End Overdose, a nonprofit dedicated to reducing drug-related overdose deaths. Through his efforts, more than 600 members of the Greater Boston community, including many at MIT, have been trained to administer lifesaving treatment at critical moments.

    This year’s recipient of the 2024 Staff Award for Distinction in Service was Diego F. Arango (Department of Custodial Services, Department of Facilities), daytime custodian in Building 46. He was nominated by no fewer than 36 staff, faculty, students, and researchers for creating a positive working environment and for offering “help whenever, wherever, and to whomever needs it.”

    Three community members were honored with a 2024 Gordon Y. Billard Award:

    Deborah G. Douglas, senior director of collections and curator of science and technology, MIT Museum
    Ronald Hasseltine, assistant provost for research administration, Office of the Vice President for Research
    Richard K. Lester, vice provost for international activities and Japan Steel Industry Professor of Nuclear Science and Engineering, School of Engineering

    Presenters included President Sally Kornbluth; MIT Chief of Police John DiFava and Deputy Chief Steven DeMarco; Vice President for Human Resources Ramona Allen; Executive Vice President and Treasurer Glen Shor; Provost Cynthia Barnhart; Lincoln Laboratory director Eric Evans; Chancellor Melissa Nobles; and Dean of the School of Engineering Anantha Chandrakasan.

    Visit the MIT Human Resources website for more information about the award recipients and categories, and to view photos and video of the event.