More stories

  • Study: When allocating scarce resources with AI, randomization can improve fairness

    Organizations are increasingly utilizing machine-learning models to allocate scarce resources or opportunities. For instance, such models can help companies screen resumes to choose job interview candidates or aid hospitals in ranking kidney transplant patients based on their likelihood of survival.

    When deploying a model, users typically strive to ensure its predictions are fair by reducing bias. This often involves techniques like adjusting the features a model uses to make decisions or calibrating the scores it generates.

    However, researchers from MIT and Northeastern University argue that these fairness methods are not sufficient to address structural injustices and inherent uncertainties. In a new paper, they show how randomizing a model’s decisions in a structured way can improve fairness in certain situations.

    For example, if multiple companies use the same machine-learning model to rank job interview candidates deterministically — without any randomization — then one deserving individual could be the bottom-ranked candidate for every job, perhaps due to how the model weighs answers provided in an online form. Introducing randomization into a model’s decisions could prevent one worthy person or group from always being denied a scarce resource, like a job interview.

    Through their analysis, the researchers found that randomization can be especially beneficial when a model’s decisions involve uncertainty or when the same group consistently receives negative decisions.

    They present a framework one could use to introduce a specific amount of randomization into a model’s decisions by allocating resources through a weighted lottery. This method, which an individual can tailor to fit their situation, can improve fairness without hurting the efficiency or accuracy of a model.

    “Even if you could make fair predictions, should you be deciding these social allocations of scarce resources or opportunities strictly off scores or rankings? As things scale, and we see more and more opportunities being decided by these algorithms, the inherent uncertainties in these scores can be amplified. We show that fairness may require some sort of randomization,” says Shomik Jain, a graduate student in the Institute for Data, Systems, and Society (IDSS) and lead author of the paper.

    Jain is joined on the paper by Kathleen Creel, assistant professor of philosophy and computer science at Northeastern University; and senior author Ashia Wilson, the Lister Brothers Career Development Professor in the Department of Electrical Engineering and Computer Science and a principal investigator in the Laboratory for Information and Decision Systems (LIDS). The research will be presented at the International Conference on Machine Learning.

    Considering claims

    This work builds on a previous paper in which the researchers explored harms that can occur when one uses deterministic systems at scale. They found that using a machine-learning model to deterministically allocate resources can amplify inequalities that exist in training data, which can reinforce bias and systemic inequality.

    “Randomization is a very useful concept in statistics, and to our delight, satisfies the fairness demands coming from both a systemic and individual point of view,” Wilson says.

    In this paper, they explored the question of when randomization can improve fairness. They framed their analysis around the ideas of philosopher John Broome, who wrote about the value of using lotteries to award scarce resources in a way that honors all claims of individuals.

    A person’s claim to a scarce resource, like a kidney transplant, can stem from merit, deservingness, or need. For instance, everyone has a right to life, and their claims on a kidney transplant may stem from that right, Wilson explains.

    “When you acknowledge that people have different claims to these scarce resources, fairness is going to require that we respect all claims of individuals. If we always give someone with a stronger claim the resource, is that fair?” Jain says.

    That sort of deterministic allocation could cause systemic exclusion or exacerbate patterned inequality, which occurs when receiving one allocation increases an individual’s likelihood of receiving future allocations. In addition, machine-learning models can make mistakes, and a deterministic approach could cause the same mistake to be repeated.

    Randomization can overcome these problems, but that doesn’t mean all decisions a model makes should be randomized equally.

    Structured randomization

    The researchers use a weighted lottery to adjust the level of randomization based on the amount of uncertainty involved in the model’s decision-making. A decision that is less certain should incorporate more randomization.

    “In kidney allocation, usually the planning is around projected lifespan, and that is deeply uncertain. If two patients are only five years apart, it becomes a lot harder to measure. We want to leverage that level of uncertainty to tailor the randomization,” Wilson says.

    The researchers used statistical uncertainty quantification methods to determine how much randomization is needed in different situations. They show that calibrated randomization can lead to fairer outcomes for individuals without significantly affecting the utility, or effectiveness, of the model.

    “There is a balance to be had between overall utility and respecting the rights of the individuals who are receiving a scarce resource, but oftentimes the tradeoff is relatively small,” says Wilson.

    However, the researchers emphasize there are situations where randomizing decisions would not improve fairness and could harm individuals, such as in criminal justice contexts.

    But there could be other areas where randomization can improve fairness, such as college admissions, and the researchers plan to study other use cases in future work. They also want to explore how randomization can affect other factors, such as competition or prices, and how it could be used to improve the robustness of machine-learning models.

    “We are hoping our paper is a first move toward illustrating that there might be a benefit to randomization. We are offering randomization as a tool. How much you are going to want to do it is going to be up to all the stakeholders in the allocation to decide. And, of course, how they decide is another research question altogether,” says Wilson.
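    The paper's full framework isn't reproduced here, but the core mechanism it describes, a weighted lottery whose selection weights flatten toward a uniform draw as score uncertainty grows, can be sketched in a few lines. In the sketch below, the weighted_lottery function, its temperature rule, and the example numbers are illustrative assumptions, not the authors' method.

```python
import numpy as np

def weighted_lottery(scores, uncertainties, k, rng=None):
    """Allocate k slots by a weighted lottery (illustrative sketch).

    Higher-scored candidates get proportionally higher selection weights,
    but the weights are flattened in proportion to the uncertainty of the
    scores, so noisier rankings are randomized more.
    """
    rng = np.random.default_rng() if rng is None else rng
    scores = np.asarray(scores, dtype=float)
    # Temperature grows with the average uncertainty: near-certain scores
    # approach a deterministic top-k, very uncertain ones approach a uniform draw.
    temperature = 1.0 + float(np.mean(uncertainties))
    weights = np.exp(scores / temperature)
    weights /= weights.sum()
    return rng.choice(len(scores), size=k, replace=False, p=weights)

# Example: four candidates, noisy scores, two interview slots.
scores = [0.9, 0.8, 0.75, 0.3]
uncertainties = [0.2, 0.3, 0.3, 0.1]
print(weighted_lottery(scores, uncertainties, k=2, rng=np.random.default_rng(0)))
```

    With low uncertainty the draw behaves almost like a deterministic top-k selection; with high uncertainty, lower-scored candidates keep a meaningful chance, which is the property the researchers argue prevents the same person or group from always losing out.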

  • AI model identifies certain breast tumor stages likely to progress to invasive cancer

    Ductal carcinoma in situ (DCIS) is a type of preinvasive tumor that sometimes progresses to a highly deadly form of breast cancer. It accounts for about 25 percent of all breast cancer diagnoses.

    Because it is difficult for clinicians to determine the type and stage of DCIS, patients with DCIS are often overtreated. To address this, an interdisciplinary team of researchers from MIT and ETH Zurich developed an AI model that can identify the different stages of DCIS from a cheap and easy-to-obtain breast tissue image. Their model shows that both the state and arrangement of cells in a tissue sample are important for determining the stage of DCIS.

    Because such tissue images are so easy to obtain, the researchers were able to build one of the largest datasets of its kind, which they used to train and test their model. When they compared its predictions to the conclusions of a pathologist, they found clear agreement in many instances.

    In the future, the model could be used as a tool to help clinicians streamline the diagnosis of simpler cases without the need for labor-intensive tests, giving them more time to evaluate cases where it is less clear if DCIS will become invasive.

    “We took the first step in understanding that we should be looking at the spatial organization of cells when diagnosing DCIS, and now we have developed a technique that is scalable. From here, we really need a prospective study. Working with a hospital and getting this all the way to the clinic will be an important step forward,” says Caroline Uhler, a professor in the Department of Electrical Engineering and Computer Science (EECS) and the Institute for Data, Systems, and Society (IDSS), who is also director of the Eric and Wendy Schmidt Center at the Broad Institute of MIT and Harvard and a researcher at MIT’s Laboratory for Information and Decision Systems (LIDS).

    Uhler, co-corresponding author of a paper on this research, is joined by lead author Xinyi Zhang, a graduate student in EECS and the Eric and Wendy Schmidt Center; co-corresponding author GV Shivashankar, professor of mechano-genomics at ETH Zurich jointly with the Paul Scherrer Institute; and others at MIT, ETH Zurich, and the University of Palermo in Italy. The open-access research was published July 20 in Nature Communications.

    Combining imaging with AI

    Between 30 and 50 percent of patients with DCIS develop a highly invasive stage of cancer, but researchers don’t know the biomarkers that could tell a clinician which tumors will progress.

    Researchers can use techniques like multiplexed staining or single-cell RNA sequencing to determine the stage of DCIS in tissue samples. However, these tests are too expensive to be performed widely, Shivashankar explains.

    In previous work, these researchers showed that a cheap imaging technique known as chromatin staining could be as informative as the much costlier single-cell RNA sequencing.

    For this research, they hypothesized that combining this single stain with a carefully designed machine-learning model could provide the same information about cancer stage as costlier techniques.

    First, they created a dataset containing 560 tissue sample images from 122 patients at three different stages of disease. They used this dataset to train an AI model that learns a representation of the state of each cell in a tissue sample image, which it uses to infer the stage of a patient’s cancer.

    However, not every cell is indicative of cancer, so the researchers had to aggregate the cells in a meaningful way.

    They designed the model to create clusters of cells in similar states, identifying eight states that are important markers of DCIS. Some cell states are more indicative of invasive cancer than others. The model determines the proportion of cells in each state in a tissue sample.

    Organization matters

    “But in cancer, the organization of cells also changes. We found that just having the proportions of cells in every state is not enough. You also need to understand how the cells are organized,” says Shivashankar.

    With this insight, they designed the model to consider both the proportion and the arrangement of cell states, which significantly boosted its accuracy.

    “The interesting thing for us was seeing how much spatial organization matters. Previous studies had shown that cells which are close to the breast duct are important. But it is also important to consider which cells are close to which other cells,” says Zhang.

    When they compared the results of their model with samples evaluated by a pathologist, it had clear agreement in many instances. In cases that were not as clear-cut, the model could provide information about features in a tissue sample, like the organization of cells, that a pathologist could use in decision-making.

    This versatile model could also be adapted for use in other types of cancer, or even neurodegenerative conditions, which is one area the researchers are also currently exploring.

    “We have shown that, with the right AI techniques, this simple stain can be very powerful. There is still much more research to do, but we need to take the organization of cells into account in more of our studies,” Uhler says.

    This research was funded, in part, by the Eric and Wendy Schmidt Center at the Broad Institute, ETH Zurich, the Paul Scherrer Institute, the Swiss National Science Foundation, the U.S. National Institutes of Health, the U.S. Office of Naval Research, the MIT Jameel Clinic for Machine Learning and Health, the MIT-IBM Watson AI Lab, and a Simons Investigator Award.
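    The published pipeline is a trained deep model, but the aggregation idea described above, clustering cells into a small set of states and then summarizing each sample by its state proportions, can be sketched with generic tools. In the sketch below, the feature dimensions, sample counts, and use of k-means are placeholder assumptions rather than the authors' actual architecture.

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy stand-ins: per-cell features extracted from chromatin-stained images,
# plus an ID saying which tissue sample each cell came from.
rng = np.random.default_rng(0)
cell_features = rng.normal(size=(5000, 16))   # one row per segmented cell
sample_ids = rng.integers(0, 20, size=5000)   # 20 hypothetical tissue samples

# Step 1: cluster cells into a small number of states (the paper reports eight).
n_states = 8
states = KMeans(n_clusters=n_states, n_init=10, random_state=0).fit_predict(cell_features)

# Step 2: describe each tissue sample by the fraction of its cells in each state.
def state_proportions(sample_id):
    s = states[sample_ids == sample_id]
    return np.bincount(s, minlength=n_states) / len(s)

print(state_proportions(0))  # an 8-dimensional summary vector for one sample
```

    As the article stresses, the researchers found that such proportion vectors alone are not enough; their model also encodes which cell states sit near which others, and that spatial information is what boosted accuracy.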

  • How to assess a general-purpose AI model’s reliability before it’s deployed

    Foundation models are massive deep-learning models that have been pretrained on an enormous amount of general-purpose, unlabeled data. They can be applied to a variety of tasks, like generating images or answering customer questions.

    But these models, which serve as the backbone for powerful artificial intelligence tools like ChatGPT and DALL-E, can offer up incorrect or misleading information. In a safety-critical situation, such as a pedestrian approaching a self-driving car, these mistakes could have serious consequences.

    To help prevent such mistakes, researchers from MIT and the MIT-IBM Watson AI Lab developed a technique to estimate the reliability of foundation models before they are deployed for a specific task.

    They do this by considering a set of foundation models that are slightly different from one another. Then they use their algorithm to assess the consistency of the representations each model learns about the same test data point. If the representations are consistent, it means the model is reliable.

    When they compared their technique to state-of-the-art baseline methods, it was better at capturing the reliability of foundation models on a variety of downstream classification tasks.

    Someone could use this technique to decide if a model should be applied in a certain setting, without the need to test it on a real-world dataset. This could be especially useful when datasets may not be accessible due to privacy concerns, like in health care settings. In addition, the technique could be used to rank models based on reliability scores, enabling a user to select the best one for their task.

    “All models can be wrong, but models that know when they are wrong are more useful. The problem of quantifying uncertainty or reliability is more challenging for these foundation models because their abstract representations are difficult to compare. Our method allows one to quantify how reliable a representation model is for any given input data,” says senior author Navid Azizan, the Esther and Harold E. Edgerton Assistant Professor in the MIT Department of Mechanical Engineering and the Institute for Data, Systems, and Society (IDSS), and a member of the Laboratory for Information and Decision Systems (LIDS).

    He is joined on a paper about the work by lead author Young-Jin Park, a LIDS graduate student; Hao Wang, a research scientist at the MIT-IBM Watson AI Lab; and Shervin Ardeshir, a senior research scientist at Netflix. The paper will be presented at the Conference on Uncertainty in Artificial Intelligence.

    Measuring consensus

    Traditional machine-learning models are trained to perform a specific task. These models typically make a concrete prediction based on an input. For instance, the model might tell you whether a certain image contains a cat or a dog. In this case, assessing reliability could be a matter of looking at the final prediction to see if the model is right.

    But foundation models are different. The model is pretrained using general data, in a setting where its creators don’t know all the downstream tasks it will be applied to. Users adapt it to their specific tasks after it has already been trained.

    Unlike traditional machine-learning models, foundation models don’t give concrete outputs like “cat” or “dog” labels. Instead, they generate an abstract representation based on an input data point.

    To assess the reliability of a foundation model, the researchers used an ensemble approach, training several models that share many properties but are slightly different from one another.

    “Our idea is like measuring the consensus. If all those foundation models are giving consistent representations for any data in our dataset, then we can say this model is reliable,” Park says.

    But they ran into a problem: How could they compare abstract representations?

    “These models just output a vector, comprised of some numbers, so we can’t compare them easily,” he adds.

    They solved this problem using an idea called neighborhood consistency.

    For their approach, the researchers prepare a set of reliable reference points to test on the ensemble of models. Then, for each model, they investigate the reference points located near that model’s representation of the test point.

    By looking at the consistency of neighboring points, they can estimate the reliability of the models.

    Aligning the representations

    Foundation models map data points to what is known as a representation space. One way to think about this space is as a sphere. Each model maps similar data points to the same part of its sphere, so images of cats go in one place and images of dogs go in another.

    But each model would map animals differently in its own sphere, so while cats may be grouped near the South Pole of one sphere, another model could map cats somewhere in the Northern Hemisphere.

    The researchers use the neighboring points like anchors to align those spheres so they can make the representations comparable. If a data point’s neighbors are consistent across multiple representations, then one should be confident about the reliability of the model’s output for that point.

    When they tested this approach on a wide range of classification tasks, they found that it was much more consistent than baselines. Plus, it wasn’t tripped up by challenging test points that caused other methods to fail.

    Moreover, their approach can be used to assess reliability for any input data, so one could evaluate how well a model works for a particular type of individual, such as a patient with certain characteristics.

    “Even if the models all have average performance overall, from an individual point of view, you’d prefer the one that works best for that individual,” Wang says.

    However, one limitation comes from the fact that they must train an ensemble of foundation models, which is computationally expensive. In the future, they plan to find more efficient ways to build multiple models, perhaps by using small perturbations of a single model.

    “With the current trend of using foundational models for their embeddings to support various downstream tasks — from fine-tuning to retrieval augmented generation — the topic of quantifying uncertainty at the representation level is increasingly important, but challenging, as embeddings on their own have no grounding. What matters instead is how embeddings of different inputs are related to one another, an idea that this work neatly captures through the proposed neighborhood consistency score,” says Marco Pavone, an associate professor in the Department of Aeronautics and Astronautics at Stanford University, who was not involved with this work. “This is a promising step towards high quality uncertainty quantifications for embedding models, and I’m excited to see future extensions which can operate without requiring model-ensembling to really enable this approach to scale to foundation-size models.”

    This work is funded, in part, by the MIT-IBM Watson AI Lab, MathWorks, and Amazon.
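    The paper's sphere-alignment step and exact scoring are not reproduced here, but the neighborhood-consistency idea, comparing which shared reference points land nearest a test point under each model in the ensemble, can be sketched roughly as follows. The function name, distance metric, toy data, and the simple overlap score are all illustrative assumptions.

```python
import numpy as np

def neighborhood_consistency(test_embeddings, reference_embeddings, k=10):
    """Rough sketch of a consistency score for an ensemble of embedding models.

    test_embeddings:      one embedding vector per model, for the same test input
    reference_embeddings: one (n_refs x dim) array per model, for a shared set
                          of reference inputs
    Returns the average pairwise overlap of the k-nearest reference neighbors.
    """
    neighbor_sets = []
    for test_vec, refs in zip(test_embeddings, reference_embeddings):
        dists = np.linalg.norm(refs - test_vec, axis=1)
        neighbor_sets.append(set(np.argsort(dists)[:k]))
    overlaps = []
    for i in range(len(neighbor_sets)):
        for j in range(i + 1, len(neighbor_sets)):
            overlaps.append(len(neighbor_sets[i] & neighbor_sets[j]) / k)
    return float(np.mean(overlaps))

# Toy example: three "models" embedding 100 shared reference inputs in 32 dims.
rng = np.random.default_rng(0)
refs = [rng.normal(size=(100, 32)) for _ in range(3)]
test = [r[0] + rng.normal(scale=0.01, size=32) for r in refs]  # near reference 0 in every model
print(neighborhood_consistency(test, refs, k=5))
```

    A score near 1 would mean every model in the ensemble surrounds the test point with the same reference neighbors, while a score near 0 would flag an input on which the models disagree.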

  • When to trust an AI model

    Because machine-learning models can give false predictions, researchers often equip them with the ability to tell a user how confident they are about a certain decision. This is especially important in high-stakes settings, such as when models are used to help identify disease in medical images or filter job applications.

    But a model’s uncertainty quantifications are only useful if they are accurate. If a model says it is 49 percent confident that a medical image shows a pleural effusion, then the model should be right 49 percent of the time.

    MIT researchers have introduced a new approach that can improve uncertainty estimates in machine-learning models. Their method not only generates more accurate uncertainty estimates than other techniques, but does so more efficiently.

    In addition, because the technique is scalable, it can be applied to the huge deep-learning models that are increasingly being deployed in health care and other safety-critical situations.

    This technique could give end users, many of whom lack machine-learning expertise, better information they can use to determine whether to trust a model’s predictions or whether the model should be deployed for a particular task.

    “It is easy to see these models perform really well in scenarios where they are very good, and then assume they will be just as good in other scenarios. This makes it especially important to push this kind of work that seeks to better calibrate the uncertainty of these models to make sure they align with human notions of uncertainty,” says lead author Nathan Ng, a graduate student at the University of Toronto who is a visiting student at MIT.

    Ng wrote the paper with Roger Grosse, an assistant professor of computer science at the University of Toronto; and senior author Marzyeh Ghassemi, an associate professor in the Department of Electrical Engineering and Computer Science and a member of the Institute of Medical Engineering Sciences and the Laboratory for Information and Decision Systems. The research will be presented at the International Conference on Machine Learning.

    Quantifying uncertainty

    Uncertainty quantification methods often require complex statistical calculations that don’t scale well to machine-learning models with millions of parameters. These methods also require users to make assumptions about the model and the data used to train it.

    The MIT researchers took a different approach. They use what is known as the minimum description length principle (MDL), which does not require the assumptions that can hamper the accuracy of other methods. MDL is used to better quantify and calibrate uncertainty for test points the model has been asked to label.

    The technique the researchers developed, known as IF-COMP, makes MDL fast enough to use with the kinds of large deep-learning models deployed in many real-world settings.

    MDL involves considering all possible labels a model could give a test point. If there are many alternative labels for this point that fit well, the model’s confidence in the label it chose should decrease accordingly.

    “One way to understand how confident a model is would be to tell it some counterfactual information and see how likely it is to believe you,” Ng says.

    For example, consider a model that says a medical image shows a pleural effusion. If the researchers tell the model this image shows an edema, and it is willing to update its belief, then the model should be less confident in its original decision.

    With MDL, if a model is confident when it labels a datapoint, it should use a very short code to describe that point. If it is uncertain about its decision because the point could have many other labels, it uses a longer code to capture these possibilities.

    The amount of code used to label a datapoint is known as stochastic data complexity. If the researchers ask the model how willing it is to update its belief about a datapoint given contrary evidence, the stochastic data complexity should decrease if the model is confident.

    But testing each datapoint using MDL would require an enormous amount of computation.

    Speeding up the process

    With IF-COMP, the researchers developed an approximation technique that can accurately estimate stochastic data complexity using a special function, known as an influence function. They also employed a statistical technique called temperature scaling, which improves the calibration of the model’s outputs. This combination of influence functions and temperature scaling enables high-quality approximations of the stochastic data complexity.

    In the end, IF-COMP can efficiently produce well-calibrated uncertainty quantifications that reflect a model’s true confidence. The technique can also determine whether the model has mislabeled certain data points or reveal which data points are outliers.

    The researchers tested their system on these three tasks and found that it was faster and more accurate than other methods.

    “It is really important to have some certainty that a model is well-calibrated, and there is a growing need to detect when a specific prediction doesn’t look quite right. Auditing tools are becoming more necessary in machine-learning problems as we use large amounts of unexamined data to make models that will be applied to human-facing problems,” Ghassemi says.

    IF-COMP is model-agnostic, so it can provide accurate uncertainty quantifications for many types of machine-learning models. This could enable it to be deployed in a wider range of real-world settings, ultimately helping more practitioners make better decisions.

    “People need to understand that these systems are very fallible and can make things up as they go. A model may look like it is highly confident, but there are a ton of different things it is willing to believe given evidence to the contrary,” Ng says.

    In the future, the researchers are interested in applying their approach to large language models and studying other potential use cases for the minimum description length principle.
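    IF-COMP itself relies on influence functions to approximate the full MDL computation, and that machinery is not reproduced here. The sketch below only illustrates the underlying codelength intuition, that a confident prediction corresponds to a short description of the chosen label, together with the effect of temperature scaling on that codelength. The function names, logits, and temperature values are illustrative assumptions.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Numerically stable softmax with a temperature parameter.
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()
    p = np.exp(z)
    return p / p.sum()

def codelength_bits(logits, label, temperature=1.0):
    """Description length (in bits) of one label under the model's predictive
    distribution: confident, correct predictions get short codes, uncertain
    ones get long codes."""
    p = softmax(logits, temperature)
    return -np.log2(p[label])

# Toy example: the same logits read as much less certain once temperature-scaled.
logits = [4.0, 1.0, 0.5]
print(codelength_bits(logits, label=0, temperature=1.0))  # roughly 0.11 bits
print(codelength_bits(logits, label=0, temperature=3.0))  # roughly 0.75 bits
```

    At the higher temperature the predictive distribution is flatter, so the same label costs more bits to encode, which reads as lower confidence.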

  • MIT ARCLab announces winners of inaugural Prize for AI Innovation in Space

    Satellite density in Earth’s orbit has increased exponentially in recent years, with the lower cost of small satellites allowing governments, researchers, and private companies to launch and operate some 2,877 satellites in 2023 alone. This includes increased geostationary Earth orbit (GEO) satellite activity, which brings technologies with global-scale impact, from broadband internet to climate surveillance. Along with the manifold benefits of these satellite-enabled technologies, however, come increased safety and security risks, as well as environmental concerns. More accurate and efficient methods of monitoring and modeling satellite behavior are urgently needed to prevent collisions and other disasters.

    To address this challenge, the MIT Astrodynamics, Space Robotics, and Controls Laboratory (ARCLab) launched the MIT ARCLab Prize for AI Innovation in Space: a first-of-its-kind competition asking contestants to harness AI to characterize satellites’ patterns of life (PoLs) — the long-term behavioral narrative of a satellite in orbit — using purely passively collected information. Following the call for participants last fall, 126 teams used machine learning to create algorithms to label and time-stamp the behavioral modes of GEO satellites over a six-month period, competing for accuracy and efficiency.

    With support from the U.S. Department of the Air Force-MIT AI Accelerator, the challenge offers a total of $25,000. A team of judges from ARCLab and MIT Lincoln Laboratory evaluated the submissions based on clarity, novelty, technical depth, and reproducibility, assigning each entry a score out of 100 points. Now the judges have announced the winners and runners-up:

    First prize: David Baldsiefen — Team Hawaii2024

    With a winning score of 96, Baldsiefen will be awarded $10,000 and is invited to join the ARCLab team in presenting at a poster session at the Advanced Maui Optical and Space Surveillance Technologies (AMOS) Conference in Hawaii this fall. One evaluator noted, “Clear and concise report, with very good ideas such as the label encoding of the localizer. Decisions on the architectures and the feature engineering are well reasoned. The code provided is also well documented and structured, allowing an easy reproducibility of the experimentation.”

    Second prize: Binh Tran, Christopher Yeung, Kurtis Johnson, Nathan Metzger — Team Millennial-IUP

    With a score of 94.2, Team Millennial-IUP will be awarded $5,000 and will also join the ARCLab team at the AMOS conference. One evaluator said, “The models chosen were sensible and justified, they made impressive efforts in efficiency gains… They used physics to inform their models and this appeared to be reproducible. Overall it was an easy to follow, concise report without much jargon.”

    Third prize: Isaac Haik and Francois Porcher — Team QR_Is

    With a score of 94, Haik and Porcher will share the third prize of $3,000 and will also be invited to the AMOS conference with the ARCLab team. One evaluator noted, “This informative and interesting report describes the combination of ML and signal processing techniques in a compelling way, assisted by informative plots, tables, and sequence diagrams. The author identifies and describes a modular approach to class detection and their assessment of feature utility, which they correctly identify is not evenly useful across classes… Any lack of mission expertise is made up for by a clear and detailed discussion of the benefits and pitfalls of the methods they used and discussion of what they learned.”

    The fourth- through seventh-place teams will each receive $1,000 and a certificate of excellence.

    “The goal of this competition was to foster an interdisciplinary approach to problem-solving in the space domain by inviting AI development experts to apply their skills in this new context of orbital capacity. And all of our winning teams really delivered — they brought technical skill, novel approaches, and expertise to a very impressive round of submissions,” says Professor Richard Linares, who heads ARCLab.

    Active modeling with passive data

    Throughout a GEO satellite’s time in orbit, operators issue commands to place it in various behavioral modes — station-keeping, longitudinal shifts, end-of-life behaviors, and so on. Satellite patterns of life (PoLs) describe on-orbit behavior composed of sequences of both natural and non-natural behavior modes.

    ARCLab has developed a groundbreaking benchmarking tool for geosynchronous satellite pattern-of-life characterization and created the Satellite Pattern-of-Life Identification Dataset (SPLID), comprising real and synthetic space object data. The challenge participants used this tool to create algorithms that use AI to map out the on-orbit behaviors of a satellite.

    The goal of the MIT ARCLab Prize for AI Innovation in Space is to encourage technologists and enthusiasts to bring innovation and new skill sets to well-established challenges in aerospace. The team aims to hold the competition in 2025 and 2026 to explore other topics and invite experts in AI to apply their skills to new challenges.

  • “They can see themselves shaping the world they live in”

    During the journey from the suburbs to the city, the tree canopy often dwindles as skyscrapers rise. A group of New England Innovation Academy students wondered why that is.

    “Our friend Victoria noticed that where we live in Marlborough there are lots of trees in our own backyards. But if you drive just 30 minutes to Boston, there are almost no trees,” said high school junior Ileana Fournier. “We were struck by that duality.”

    This inspired Fournier and her classmates Victoria Leeth and Jessie Magenyi to prototype a mobile app that illustrates Massachusetts deforestation trends for Day of AI, a free, hands-on curriculum developed by the MIT Responsible AI for Social Empowerment and Education (RAISE) initiative, headquartered in the MIT Media Lab and in collaboration with the MIT Schwarzman College of Computing and MIT Open Learning. They were among a group of 20 students from New England Innovation Academy who shared their projects during the 2024 Day of AI global celebration hosted with the Museum of Science.

    The Day of AI curriculum introduces K-12 students to artificial intelligence. Now in its third year, Day of AI enables students to improve their communities and collaborate on larger global challenges using AI. Fournier, Leeth, and Magenyi’s TreeSavers app falls under the Telling Climate Stories with Data module, one of four new climate-change-focused lessons.

    “We want you to be able to express yourselves creatively to use AI to solve problems with critical-thinking skills,” Cynthia Breazeal, director of MIT RAISE, dean for digital learning at MIT Open Learning, and professor of media arts and sciences, said during this year’s Day of AI global celebration at the Museum of Science. “We want you to have an ethical and responsible way to think about this really powerful, cool, and exciting technology.”

    Moving from understanding to action

    Day of AI invites students to examine the intersection of AI and various disciplines, such as history, civics, computer science, math, and climate change. With the curriculum available year-round, more than 10,000 educators across 114 countries have brought Day of AI activities to their classrooms and homes.

    The curriculum gives students the agency to evaluate local issues and invent meaningful solutions. “We’re thinking about how to create tools that will allow kids to have direct access to data and have a personal connection that intersects with their lived experiences,” Robert Parks, curriculum developer at MIT RAISE, said at the Day of AI global celebration.

    Before this year, first-year student Jeremie Kwapong said he knew very little about AI. “I was very intrigued,” he said. “I started to experiment with ChatGPT to see how it reacts. How close can I get this to human emotion? What is AI’s knowledge compared to a human’s knowledge?”

    In addition to helping students spark an interest in AI literacy, teachers around the world have told MIT RAISE that they want to use data science lessons to engage students in conversations about climate change. Therefore, Day of AI’s new hands-on projects use weather and climate change to show students why it’s important to develop a critical understanding of dataset design and collection when observing the world around them.

    “There is a lag between cause and effect in everyday lives,” said Parks. “Our goal is to demystify that, and allow kids to access data so they can see a long view of things.”

    Tools like MIT App Inventor — which allows anyone to create a mobile application — help students make sense of what they can learn from data. Fournier, Leeth, and Magenyi programmed TreeSavers in App Inventor to chart regional deforestation rates across Massachusetts, identify ongoing trends through statistical models, and predict environmental impact. The students put that “long view” of climate change into practice when developing TreeSavers’ interactive maps. Users can toggle between Massachusetts’s current tree cover, historical data, and future high-risk areas.

    Although AI provides fast answers, it doesn’t necessarily offer equitable solutions, said David Sittenfeld, director of the Center for the Environment at the Museum of Science. The Day of AI curriculum asks students to make decisions on sourcing data, ensuring unbiased data, and thinking responsibly about how findings could be used.

    “There’s an ethical concern about tracking people’s data,” said Ethan Jorda, a New England Innovation Academy student. His group used open-source data to program an app that helps users track and reduce their carbon footprint.

    Christine Cunningham, senior vice president of STEM Learning at the Museum of Science, believes students are prepared to use AI responsibly to make the world a better place. “They can see themselves shaping the world they live in,” said Cunningham. “Moving through from understanding to action, kids will never look at a bridge or a piece of plastic lying on the ground in the same way again.”

    Deepening collaboration on Earth and beyond

    The 2024 Day of AI speakers emphasized collaborative problem solving at the local, national, and global levels.

    “Through different ideas and different perspectives, we’re going to get better solutions,” said Cunningham. “How do we start young enough that every child has a chance to both understand the world around them but also to move toward shaping the future?”

    Presenters from MIT, the Museum of Science, and NASA approached this question with a common goal — expanding STEM education to learners of all ages and backgrounds.

    “We have been delighted to collaborate with the MIT RAISE team to bring this year’s Day of AI celebration to the Museum of Science,” says Meg Rosenburg, manager of operations at the Museum of Science Centers for Public Science Learning. “This opportunity to highlight the new climate modules for the curriculum not only perfectly aligns with the museum’s goals to focus on climate and active hope throughout our Year of the Earthshot initiative, but it has also allowed us to bring our teams together and grow a relationship that we are very excited to build upon in the future.”

    Rachel Connolly, systems integration and analysis lead for NASA’s Science Activation Program, showed the power of collaboration with the example of how human comprehension of Saturn’s appearance has evolved. From Galileo’s early telescope to the Cassini space probe, modern imaging of Saturn represents 400 years of science, technology, and math working together to further knowledge.

    “Technologies, and the engineers who built them, advance the questions we’re able to ask and therefore what we’re able to understand,” said Connolly, a research scientist at the MIT Media Lab.

    New England Innovation Academy students saw an opportunity for collaboration a little closer to home. Emmett Buck-Thompson, Jeff Cheng, and Max Hunt envisioned a social media app to connect volunteers with local charities. Their project was inspired by Buck-Thompson’s father’s difficulties finding volunteering opportunities, Hunt’s role as president of the school’s Community Impact Club, and Cheng’s aspiration to reduce screen time for social media users. Using MIT App Inventor, their combined ideas led to a prototype with the potential to make a real-world impact in their community.

    The Day of AI curriculum teaches the mechanics of AI, ethical considerations and responsible uses, and interdisciplinary applications for different fields. It also empowers students to become creative problem solvers and engaged citizens in their communities and online. From supporting volunteer efforts to encouraging action for the state’s forests to tackling the global challenge of climate change, today’s students are becoming tomorrow’s leaders with Day of AI.

    “We want to empower you to know that this is a tool you can use to make your community better, to help people around you with this technology,” said Breazeal.

    Other Day of AI speakers included Tim Ritchie, president of the Museum of Science; Michael Lawrence Evans, program director of the Boston Mayor’s Office of New Urban Mechanics; Dava Newman, director of the MIT Media Lab; and Natalie Lao, executive director of the App Inventor Foundation.

  • MIT researchers introduce generative AI for databases

    A new tool makes it easier for database users to perform complicated statistical analyses of tabular data without the need to know what is going on behind the scenes.

    GenSQL, a generative AI system for databases, could help users make predictions, detect anomalies, guess missing values, fix errors, or generate synthetic data with just a few keystrokes.

    For instance, if the system were used to analyze medical data from a patient who has always had high blood pressure, it could catch a blood pressure reading that is low for that particular patient but would otherwise be in the normal range.

    GenSQL automatically integrates a tabular dataset and a generative probabilistic AI model, which can account for uncertainty and adjust its decision-making based on new data.

    Moreover, GenSQL can be used to produce and analyze synthetic data that mimic the real data in a database. This could be especially useful in situations where sensitive data cannot be shared, such as patient health records, or when real data are sparse.

    This new tool is built on top of SQL, a programming language for database creation and manipulation that was introduced in the late 1970s and is used by millions of developers worldwide.

    “Historically, SQL taught the business world what a computer could do. They didn’t have to write custom programs, they just had to ask questions of a database in high-level language. We think that, when we move from just querying data to asking questions of models and data, we are going to need an analogous language that teaches people the coherent questions you can ask a computer that has a probabilistic model of the data,” says Vikash Mansinghka ’05, MEng ’09, PhD ’09, senior author of a paper introducing GenSQL and a principal research scientist and leader of the Probabilistic Computing Project in the MIT Department of Brain and Cognitive Sciences.

    When the researchers compared GenSQL to popular, AI-based approaches for data analysis, they found that it was not only faster but also produced more accurate results. Importantly, the probabilistic models used by GenSQL are explainable, so users can read and edit them.

    “Looking at the data and trying to find some meaningful patterns by just using some simple statistical rules might miss important interactions. You really want to capture the correlations and the dependencies of the variables, which can be quite complicated, in a model. With GenSQL, we want to enable a large set of users to query their data and their model without having to know all the details,” adds lead author Mathieu Huot, a research scientist in the Department of Brain and Cognitive Sciences and member of the Probabilistic Computing Project.

    They are joined on the paper by Matin Ghavami and Alexander Lew, MIT graduate students; Cameron Freer, a research scientist; Ulrich Schaechtel and Zane Shelby of Digital Garage; Martin Rinard, an MIT professor in the Department of Electrical Engineering and Computer Science and member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); and Feras Saad ’15, MEng ’16, PhD ’22, an assistant professor at Carnegie Mellon University. The research was recently presented at the ACM Conference on Programming Language Design and Implementation.

    Combining models and databases

    SQL, which stands for structured query language, is a programming language for storing and manipulating information in a database. In SQL, people can ask questions about data using keywords, such as by summing, filtering, or grouping database records.

    However, querying a model can provide deeper insights, since models can capture what data imply for an individual. For instance, a female developer who wonders if she is underpaid is likely more interested in what salary data mean for her individually than in trends from database records.

    The researchers noticed that SQL didn’t provide an effective way to incorporate probabilistic AI models, but at the same time, approaches that use probabilistic models to make inferences didn’t support complex database queries.

    They built GenSQL to fill this gap, enabling someone to query both a dataset and a probabilistic model using a straightforward yet powerful formal programming language.

    A GenSQL user uploads their data and probabilistic model, which the system automatically integrates. Then, she can run queries on data that also get input from the probabilistic model running behind the scenes. This not only enables more complex queries but can also provide more accurate answers.

    For instance, a query in GenSQL might be something like, “How likely is it that a developer from Seattle knows the programming language Rust?” Just looking at a correlation between columns in a database might miss subtle dependencies. Incorporating a probabilistic model can capture more complex interactions.

    Plus, the probabilistic models GenSQL utilizes are auditable, so people can see which data the model uses for decision-making. In addition, these models provide measures of calibrated uncertainty along with each answer.

    For instance, with this calibrated uncertainty, if one queries the model for predicted outcomes of different cancer treatments for a patient from a minority group that is underrepresented in the dataset, GenSQL would tell the user that it is uncertain, and how uncertain it is, rather than overconfidently advocating for the wrong treatment.

    Faster and more accurate results

    To evaluate GenSQL, the researchers compared their system to popular baseline methods that use neural networks. GenSQL was between 1.7 and 6.8 times faster than these approaches, executing most queries in a few milliseconds while providing more accurate results.

    They also applied GenSQL in two case studies: one in which the system identified mislabeled clinical trial data and another in which it generated accurate synthetic data that captured complex relationships in genomics.

    Next, the researchers want to apply GenSQL more broadly to conduct large-scale modeling of human populations. With GenSQL, they can generate synthetic data to draw inferences about things like health and salary while controlling what information is used in the analysis.

    They also want to make GenSQL easier to use and more powerful by adding new optimizations and automation to the system. In the long run, the researchers want to enable users to make natural language queries in GenSQL. Their goal is to eventually develop a ChatGPT-like AI expert one could talk to about any database, which grounds its answers using GenSQL queries.

    This research is funded, in part, by the Defense Advanced Research Projects Agency (DARPA), Google, and the Siegel Family Foundation.
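    GenSQL's actual query syntax and the probabilistic models it integrates are not shown in this article, so the sketch below does not reproduce them. It only contrasts a plain count-based answer to the Seattle/Rust question with a deliberately trivial model-smoothed estimate, to illustrate why querying models and data together differs from querying the table alone. The table contents, column names, and Laplace-smoothing choice are illustrative assumptions.

```python
import pandas as pd

# Toy developer table; GenSQL would pair a real database table like this
# with a learned generative model of the same columns.
df = pd.DataFrame({
    "city": ["Seattle", "Seattle", "Boston", "Seattle", "Boston"],
    "knows_rust": [True, False, True, True, False],
})

# Plain SQL-style answer: an empirical fraction computed only from matching rows.
seattle = df.loc[df["city"] == "Seattle", "knows_rust"]
empirical = seattle.mean()

# Model-style answer: even a trivial probabilistic model (here, Laplace smoothing
# with pseudo-count alpha) tempers the raw count, a crude stand-in for the
# generative model GenSQL would consult behind the scenes.
alpha = 1.0
modeled = (seattle.sum() + alpha) / (len(seattle) + 2 * alpha)

print(f"empirical P(knows Rust | Seattle) = {empirical:.2f}")
print(f"modeled   P(knows Rust | Seattle) = {modeled:.2f}")
```

    In GenSQL the second kind of answer would come from a full generative model of the table, with calibrated uncertainty attached, rather than from a hand-coded smoothing rule.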

  • MIT-Takeda Program wraps up with 16 publications, a patent, and nearly two dozen projects completed

    When the Takeda Pharmaceutical Co. and the MIT School of Engineering launched their collaboration focused on artificial intelligence in health care and drug development in February 2020, society was on the cusp of a globe-altering pandemic and AI was far from the buzzword it is today.

    As the program concludes, the world looks very different. AI has become a transformative technology across industries, including health care and pharmaceuticals, while the pandemic has altered the way many businesses approach health care and changed how they develop and sell medicines.

    For both MIT and Takeda, the program has been a game-changer.

    When it launched, the collaborators hoped the program would help solve tangible, real-world problems. By its end, the program has yielded a catalog of new research papers, discoveries, and lessons learned, including a patent for a system that could improve the manufacturing of small-molecule medicines.

    Ultimately, the program allowed both entities to create a foundation for a world where AI and machine learning play a pivotal role in medicine, leveraging Takeda’s expertise in biopharmaceuticals and the MIT researchers’ deep understanding of AI and machine learning.

    “The MIT-Takeda Program has been tremendously impactful and is a shining example of what can be accomplished when experts in industry and academia work together to develop solutions,” says Anantha Chandrakasan, MIT’s chief innovation and strategy officer, dean of the School of Engineering, and the Vannevar Bush Professor of Electrical Engineering and Computer Science. “In addition to resulting in research that has advanced how we use AI and machine learning in health care, the program has opened up new opportunities for MIT faculty and students through fellowships, funding, and networking.”

    What made the program unique was that it was centered around several concrete challenges spanning drug development that Takeda needed help addressing. MIT faculty had the opportunity to select the projects based on their area of expertise and general interest, allowing them to explore new areas within health care and drug development.

    “It was focused on Takeda’s toughest business problems,” says Anne Heatherington, Takeda’s research and development chief data and technology officer and head of its Data Sciences Institute.

    “They were problems that colleagues were really struggling with on the ground,” adds Simon Davies, the executive director of the MIT-Takeda Program and Takeda’s global head of statistical and quantitative sciences. Takeda saw an opportunity to collaborate with MIT’s world-class researchers, who were working only a few blocks away. Takeda, a global pharmaceutical company headquartered in Japan, has its global business units and R&D center just down the street from the Institute.

    As part of the program, MIT faculty were able to select the issues they were interested in working on from a group of potential Takeda projects. Then, collaborative teams including MIT researchers and Takeda employees approached the research questions in two rounds. Over the course of the program, collaborators worked on 22 projects focused on topics including drug discovery and research, clinical drug development, and pharmaceutical manufacturing. Over 80 MIT students and faculty joined more than 125 Takeda researchers and staff on teams addressing these research questions.

    The projects centered around not only hard problems, but also the potential for solutions to scale within Takeda or within the biopharmaceutical industry more broadly.

    Some of the program’s findings have already resulted in wider studies. One group’s results, for instance, showed that using artificial intelligence to analyze speech may allow for earlier detection of frontotemporal dementia, while making that diagnosis more quickly and inexpensively. Similar algorithmic analyses of speech in patients diagnosed with ALS may also help clinicians understand the progression of that disease. Takeda is continuing to test both AI applications.

    Other discoveries and AI models that resulted from the program’s research have already had an impact. Using a physical model and AI learning algorithms can help detect particle size, mix, and consistency for powdered, small-molecule medicines, for instance, speeding up production timelines. Based on their research under the program, collaborators have filed for a patent for that technology.

    For injectable medicines like vaccines, AI-enabled inspections can also reduce process time and false rejection rates. Replacing human visual inspections with AI processes has already shown measurable impact for the pharmaceutical company.

    Heatherington adds, “Our lessons learned are really setting the stage for what we’re doing next, really embedding AI and gen-AI [generative AI] into everything that we do moving forward.”

    Over the course of the program, more than 150 Takeda researchers and staff also participated in educational programming organized by the Abdul Latif Jameel Clinic for Machine Learning in Health. In addition to providing research opportunities, the program funded 10 students through SuperUROP, the Advanced Undergraduate Research Opportunities Program, as well as two cohorts from the DHIVE health-care innovation program, part of the MIT Sandbox Innovation Fund Program.

    Though the formal program has ended, certain aspects of the collaboration will continue, such as the MIT-Takeda Fellows, which supports graduate students as they pursue groundbreaking research related to health and AI. During its run, the program supported 44 MIT-Takeda Fellows and will continue to support MIT students through an endowment fund. Organic collaboration between MIT and Takeda researchers will also carry forward. And the program’s collaborators are working to create a model for similar academic and industry partnerships to widen the impact of this first-of-its-kind collaboration.