More stories

  • Technique could efficiently solve partial differential equations for numerous applications

    In fields such as physics and engineering, partial differential equations (PDEs) are used to model complex physical processes to generate insight into how some of the most complicated physical and natural systems in the world function.

    To solve these difficult equations, researchers use high-fidelity numerical solvers, which can be very time-consuming and computationally expensive to run. A simplified alternative, data-driven surrogate models, computes the goal property of a solution to PDEs rather than the whole solution. Surrogates are trained on a set of data generated by the high-fidelity solver and learn to predict the output of the PDEs for new inputs. This approach is data-intensive and expensive because complex physical systems require a large number of simulations to generate enough training data.

    In a new paper, “Physics-enhanced deep surrogates for partial differential equations,” published in December in Nature Machine Intelligence, a new method is proposed for developing data-driven surrogate models for complex physical systems in such fields as mechanics, optics, thermal transport, fluid dynamics, physical chemistry, and climate models.

    The paper was authored by MIT’s professor of applied mathematics Steven G. Johnson along with Payel Das and Youssef Mroueh of the MIT-IBM Watson AI Lab and IBM Research; Chris Rackauckas of Julia Lab; and Raphaël Pestourie, a former MIT postdoc who is now at Georgia Tech. The authors call their method “physics-enhanced deep surrogate” (PEDS), which combines a low-fidelity, explainable physics simulator with a neural network generator. The neural network generator is trained end-to-end to match the output of the high-fidelity numerical solver.
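
    The structure described above can be illustrated with a toy numerical sketch. This is not the authors' implementation: the "solvers," the affine generator, and the finite-difference training loop below are all invented for illustration (the paper trains a neural network generator with automatic differentiation in Julia).

```python
import numpy as np

# Toy sketch of the PEDS idea. The "solvers" below are invented stand-ins:
# a cheap, explainable low-fidelity model and an expensive high-fidelity
# solver that we can only sample for training data.
def low_fidelity(g):
    return np.sin(g)

def high_fidelity(g):
    return np.sin(1.3 * g) + 0.1 * g

# Training data from the high-fidelity solver (in practice, few samples).
geoms = np.linspace(-1.0, 1.0, 32)
targets = high_fidelity(geoms)

# PEDS: a trainable "generator" feeds a corrected geometry into the
# low-fidelity solver; here it is a minimal affine map a*g + s rather
# than a neural network.
def loss(a, s):
    return np.mean((low_fidelity(a * geoms + s) - targets) ** 2)

a, s, lr, eps = 1.0, 0.0, 0.1, 1e-5
initial = loss(a, s)
for _ in range(200):
    # End-to-end training via finite-difference gradients (a sketch;
    # the real method uses automatic differentiation).
    ga = (loss(a + eps, s) - loss(a - eps, s)) / (2 * eps)
    gs = (loss(a, s + eps) - loss(a, s - eps)) / (2 * eps)
    a, s = a - lr * ga, s - lr * gs

print(initial, loss(a, s))  # the trained surrogate fits far better
```

    The point of the sketch is the composition: the trainable map is corrected by the physics baked into the low-fidelity solver, so far fewer parameters (and far less data) are needed than for a pure neural surrogate.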

    “My aspiration is to replace the inefficient process of trial and error with systematic, computer-aided simulation and optimization,” says Pestourie. “Recent breakthroughs in AI like the large language model of ChatGPT rely on hundreds of billions of parameters and require vast amounts of resources to train and evaluate. In contrast, PEDS is affordable to all because it is incredibly efficient in computing resources and has a very low barrier in terms of infrastructure needed to use it.”

    In the article, they show that PEDS surrogates can be up to three times more accurate than an ensemble of feedforward neural networks with limited data (approximately 1,000 training points), and reduce the training data needed by at least a factor of 100 to achieve a target error of 5 percent. Developed using the MIT-designed Julia programming language, this scientific machine-learning method is thus efficient in both computing and data.

    The authors also report that PEDS provides a general, data-driven strategy to bridge the gap between a vast array of simplified physical models and the corresponding brute-force numerical solvers that model complex systems. The technique offers accuracy, speed, data efficiency, and physical insights into the process.

    Says Pestourie, “Since the 2000s, as computing capabilities improved, the trend of scientific models has been to increase the number of parameters to fit the data better, sometimes at the cost of a lower predictive accuracy. PEDS does the opposite by choosing its parameters smartly. It leverages the technology of automatic differentiation to train a neural network that makes a model with few parameters accurate.”

    “The main challenge that prevents surrogate models from being used more widely in engineering is the curse of dimensionality — the fact that the needed data to train a model increases exponentially with the number of model variables,” says Pestourie. “PEDS reduces this curse by incorporating information from the data and from the field knowledge in the form of a low-fidelity model solver.”

    The researchers say that PEDS has the potential to revive a whole body of the pre-2000 literature dedicated to minimal models — intuitive models that PEDS could make more accurate while also being predictive for surrogate model applications.

    “The application of the PEDS framework is beyond what we showed in this study,” says Das. “Complex physical systems governed by PDEs are ubiquitous, from climate modeling to seismic modeling and beyond. Our physics-inspired fast and explainable surrogate models will be of great use in those applications, and play a complementary role to other emerging techniques, like foundation models.”

    The research was supported by the MIT-IBM Watson AI Lab and the U.S. Army Research Office through the Institute for Soldier Nanotechnologies.

  • Inclusive research for social change

    Pair a decades-old program dedicated to creating research opportunities for underrepresented minorities and populations with a growing initiative committed to tackling the very issues at the heart of such disparities, and you’ll get a transformative partnership that only MIT can deliver. 

    Since 1986, the MIT Summer Research Program (MSRP) has led an institutional effort to prepare underrepresented students (minorities, women in STEM, or students with low socioeconomic status) for doctoral education by pairing them with MIT labs and research groups. For the past three years, the Initiative on Combatting Systemic Racism (ICSR), a cross-disciplinary research collaboration led by MIT’s Institute for Data, Systems, and Society (IDSS), has joined them in their mission, helping bring the issue full circle by providing MSRP students with the opportunity to use big data and computational tools to create impactful changes toward racial equity.

    “ICSR has further enabled our direct engagement with undergrads, both within and outside of MIT,” says Fotini Christia, the Ford International Professor of the Social Sciences, associate director of IDSS, and co-organizer for the initiative. “We’ve found that this line of research has attracted students interested in examining these topics with the most rigorous methods.”

    The initiative fits well under the IDSS banner, as IDSS research seeks solutions to complex societal issues through a multidisciplinary approach that includes statistics, computation, modeling, social science methodologies, human behavior, and an understanding of complex systems. With the support of faculty and researchers from all five schools and the MIT Schwarzman College of Computing, the objective of ICSR is to work on an array of different societal aspects of systemic racism through a set of verticals including policing, housing, health care, and social media.

    Where passion meets impact

    Grinnell senior Mia Hines has always dreamed of using her love for computer science to support social justice. She has experience working with unhoused people and labor unions, and advocating for Indigenous peoples’ rights. When applying to college, she focused her essay on using technology to help Syrian refugees.

    “As a Black woman, it’s very important to me that we focus on these areas, especially on how we can use technology to help marginalized communities,” Hines says. “And also, how do we stop technology or improve technology that is already hurting marginalized communities?”   

    Through MSRP, Hines was paired with research advisor Ufuoma Ovienmhada, a fourth-year doctoral student in the Department of Aeronautics and Astronautics at MIT. A member of Professor Danielle Wood’s Space Enabled research group at MIT’s Media Lab, Ovienmhada received funding from an ICSR Seed Grant and NASA’s Applied Sciences Program to support her ongoing research measuring environmental injustice and socioeconomic disparities in prison landscapes. 

    “I had been doing satellite remote sensing for environmental challenges and sustainability, starting out looking at coastal ecosystems, when I learned about an issue called ‘prison ecology,’” Ovienmhada explains. “This refers to the intersection of mass incarceration and environmental justice.”

    Ovienmhada’s research uses satellite remote sensing and environmental data to characterize exposures to different environmental hazards such as air pollution, extreme heat, and flooding. “This allows others to use these datasets for real-time advocacy, in addition to creating public awareness,” she says.

    Focused especially on extreme heat, Hines used satellite remote sensing to monitor temperature fluctuations and assess the risk posed to incarcerated people, up to and including death, especially in states like Texas, where 75 percent of prisons either don’t have full air conditioning or have none at all.

    “Before this project I had done little to no work with geospatial data, and as a budding data scientist, getting to work with and understanding different types of data and resources is really helpful,” Hines says. “I was also funded and afforded the flexibility to take advantage of IDSS’s Data Science and Machine Learning online course. It was really great to be able to do that and learn even more.”

    Filling the gap

    Much like Hines, Harvey Mudd senior Megan Li was specifically interested in the IDSS-supported MSRP projects. She was drawn to the interdisciplinary approach, and she seeks in her own work to apply computational methods to societal issues and to make computer science more inclusive, considerate, and ethical. 

    Working with Aurora Zhang, a grad student in IDSS’s Social and Engineering Systems PhD program, Li used county-level data on income and housing prices to quantify and visualize how affordability based on income alone varies across the United States. She then expanded the analysis to include assets and debt to determine the most common barriers to home ownership.
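
    The core of such an affordability analysis can be sketched with hypothetical county-level data. Every figure, column name, and the 3.5 price-to-income threshold below is invented for illustration, not taken from the actual study:

```python
import pandas as pd

# Hypothetical county-level records; all values are illustrative.
counties = pd.DataFrame({
    "county": ["County A", "County B", "County C"],
    "median_income": [52_000, 81_000, 80_000],
    "median_home_price": [210_000, 560_000, 240_000],
})

# A common affordability heuristic: the price-to-income ratio, where
# values above roughly 3-4 are often read as unaffordable on income alone.
counties["price_to_income"] = (
    counties["median_home_price"] / counties["median_income"]
)
counties["affordable_on_income"] = counties["price_to_income"] <= 3.5

print(counties[["county", "price_to_income", "affordable_on_income"]])
```

    Extending the analysis to assets and debt, as Li did, would mean adding those columns and computing, for example, how down-payment savings shift which counties clear the threshold.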

    “I spent my day-to-day looking at census data and writing Python scripts that could work with it,” reports Li. “I also reached out to the Census Bureau directly to learn a little bit more about how they did their data collection, and discussed questions related to some of their previous studies and working papers that I had reviewed.” 

    Outside of actual day-to-day research, Li says she learned a lot in conversations with fellow researchers, particularly changing her “skeptical view” of whether or not mortgage lending algorithms would help or hurt home buyers in the approval process. “I think I have a little bit more faith now, which is a good thing.”

    “Harvey Mudd is undergraduate-only, and while professors do run labs here, my specific research areas are not well represented,” Li says. “This opportunity was enormous in that I got the experience I need to see if this research area is actually something that I want to do long term, and I got more mirrors into what I would be doing in grad school from talking to students and getting to know faculty.”

    Closing the loop

    While participating in MSRP offered crucial research experience to Hines, the ICSR projects enabled her to engage in topics she’s passionate about and work that could drive tangible societal change.

    “The experience felt much more concrete because we were working on these very sophisticated projects, in a supportive environment where people were very excited to work with us,” she says.

    A significant benefit for Li was the chance to steer her research in alignment with her own interests. “I was actually given the opportunity to propose my own research idea, versus supporting a graduate student’s work in progress,” she explains. 

    For Ovienmhada, the pairing of the two initiatives solidifies the efforts of MSRP and closes a crucial loop in diversity, equity, and inclusion advocacy. 

    “I’ve participated in a lot of different DEI-related efforts and advocacy and one thing that always comes up is the fact that it’s not just about bringing people in, it’s also about creating an environment and opportunities that align with people’s values,” Ovienmhada says. “Programs like MSRP and ICSR create opportunities for people who want to do work that’s aligned with certain values by providing the needed mentoring and financial support.”

  • Leveraging language to understand machines

    Natural language conveys ideas, actions, information, and intent through context and syntax; further, there are volumes of it contained in databases. This makes it an excellent source of data to train machine-learning systems on. Two master of engineering students in the 6A MEng Thesis Program at MIT, Irene Terpstra ’23 and Rujul Gandhi ’22, are working with mentors in the MIT-IBM Watson AI Lab to use this power of natural language to build AI systems.

    As computing is becoming more advanced, researchers are looking to improve the hardware that they run on; this means innovating to create new computer chips. And, since there is literature already available on modifications that can be made to achieve certain parameters and performance, Terpstra and her mentors and advisors Anantha Chandrakasan, MIT School of Engineering dean and the Vannevar Bush Professor of Electrical Engineering and Computer Science, and IBM’s researcher Xin Zhang, are developing an AI algorithm that assists in chip design.

    “I’m creating a workflow to systematically analyze how these language models can help the circuit design process. What reasoning powers do they have, and how can it be integrated into the chip design process?” says Terpstra. “And then on the other side, if that proves to be useful enough, [we’ll] see if they can automatically design the chips themselves, attaching it to a reinforcement learning algorithm.”

    To do this, Terpstra’s team is creating an AI system that can iterate on different designs. It means experimenting with various pre-trained large language models (like ChatGPT, Llama 2, and Bard), using an open-source circuit simulator language called NGspice, which has the parameters of the chip in code form, and a reinforcement learning algorithm. With text prompts, researchers will be able to ask the language model how the physical chip should be modified to achieve a certain goal, and the model will produce guidance for adjustments. This guidance is then passed to a reinforcement learning algorithm that updates the circuit design and outputs new physical parameters of the chip.
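
    The loop described above can be sketched schematically. Everything below is a mock standing in for the real components: the guidance function for the language model, the simulator for NGspice, and the toy metric for a real circuit objective; none of these names or behaviors come from the actual system.

```python
# Schematic mock of a propose-simulate-update loop; no real tools are called.

def mock_llm_guidance(goal, params):
    # Stands in for prompting a language model about the circuit.
    scale = 1.1 if goal == "increase_gain" else 0.9
    return {"width_um": params["width_um"] * scale}

def mock_simulate(params):
    # Stands in for a circuit-simulator run returning a toy metric.
    return 10.0 * params["width_um"]

params = {"width_um": 1.0}
history = []
for _ in range(5):
    params = mock_llm_guidance("increase_gain", params)  # model proposes a change
    history.append(mock_simulate(params))                # simulator scores it
    # A real system would feed this score into a reinforcement
    # learning update rather than blindly accepting the proposal.

print(history)  # the toy metric improves each iteration
```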

    “The final goal would be to combine the reasoning powers and the knowledge base that is baked into these large language models and combine that with the optimization power of the reinforcement learning algorithms and have that design the chip itself,” says Terpstra.

    Rujul Gandhi works with the raw language itself. As an undergraduate at MIT, Gandhi explored linguistics and computer science, putting them together in her MEng work. “I’ve been interested in communication, both between humans and between humans and computers,” Gandhi says.

    Robots or other interactive AI systems are one area where communication needs to be understood by both humans and machines. Researchers often write instructions for robots using formal logic. This helps ensure that commands are being followed safely and as intended, but formal logic can be difficult for users to understand, while natural language comes easily. To ensure this smooth communication, Gandhi and her advisors Yang Zhang of IBM and MIT assistant professor Chuchu Fan are building a parser that converts natural language instructions into a machine-friendly form. Leveraging the linguistic structure encoded by the pre-trained encoder-decoder model T5, and a dataset of annotated, basic English commands for performing certain tasks, Gandhi’s system identifies the smallest logical units, or atomic propositions, which are present in a given instruction.

    “Once you’ve given your instruction, the model identifies all the smaller sub-tasks you want it to carry out,” Gandhi says. “Then, using a large language model, each sub-task can be compared against the available actions and objects in the robot’s world, and if any sub-task can’t be carried out because a certain object is not recognized, or an action is not possible, the system can stop right there to ask the user for help.”

    This approach of breaking instructions into sub-tasks also allows her system to understand logical dependencies expressed in English, like, “do task X until event Y happens.” Gandhi uses a dataset of step-by-step instructions across robot task domains like navigation and manipulation, with a focus on household tasks. Using data that are written just the way humans would talk to each other has many advantages, she says, because it means a user can be more flexible about how they phrase their instructions.
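
    A toy illustration of this kind of decomposition, using simple pattern matching as a stand-in for the T5-based parser (this is not Gandhi's system, just the flavor of splitting an instruction into sub-tasks plus a temporal condition):

```python
import re

# Toy decomposition of an English instruction into atomic sub-tasks and
# an "until" condition. A learned parser handles far more variation; this
# regex version only illustrates the output structure.
def parse_instruction(text):
    parts = re.split(r"\buntil\b", text, maxsplit=1)
    body = parts[0]
    condition = parts[1].strip() if len(parts) > 1 else None
    subtasks = [s.strip()
                for s in re.split(r",|\band\b|\bthen\b", body)
                if s.strip()]
    return {"subtasks": subtasks, "until": condition}

parsed = parse_instruction(
    "pick up the cup and walk to the sink until the timer rings"
)
print(parsed)
```

    Each extracted sub-task could then be checked against the robot's known actions and objects, which is where the system described above stops and asks the user for help.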

    Another of Gandhi’s projects involves developing speech models. In the context of speech recognition, some languages are considered “low resource” since they might not have a lot of transcribed speech available, or might not have a written form at all. “One of the reasons I applied to this internship at the MIT-IBM Watson AI Lab was an interest in language processing for low-resource languages,” she says. “A lot of language models today are very data-driven, and when it’s not that easy to acquire all of that data, that’s when you need to use the limited data efficiently.” 

    Speech is just a stream of sound waves, but humans having a conversation can easily figure out where words and thoughts start and end. In speech processing, both humans and language models use their existing vocabulary to recognize word boundaries and understand the meaning. In low- or no-resource languages, a written vocabulary might not exist at all, so researchers can’t provide one to the model. Instead, the model can make note of what sound sequences occur together more frequently than others, and infer that those might be individual words or concepts. In Gandhi’s research group, these inferred words are then collected into a pseudo-vocabulary that serves as a labeling method for the low-resource language, creating labeled data for further applications.
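
    A byte-pair-encoding-style merge over symbol sequences gives the flavor of this idea: frequently co-occurring units get fused into candidate pseudo-words. This is a generic sketch of frequency-based unit discovery, not the research group's actual segmentation method, and the toy sequences are invented.

```python
from collections import Counter

# BPE-style sketch: repeatedly merge the most frequent adjacent symbol
# pair, so symbols that co-occur often become pseudo-"words".
def learn_units(sequences, num_merges=2):
    seqs = [list(s) for s in sequences]
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for s in seqs:
            pairs.update(zip(s, s[1:]))
        if not pairs:
            break
        (a, b), _ = pairs.most_common(1)[0]
        merges.append(a + b)
        for s in seqs:
            i = 0
            while i < len(s) - 1:
                if s[i] == a and s[i + 1] == b:
                    s[i:i + 2] = [a + b]  # fuse the pair in place
                i += 1
    return merges, seqs

# "ma" co-occurs most often in these toy strings, so it is merged first.
merges, segmented = learn_units(["mamapa", "pamama"])
print(merges, segmented)
```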

    The applications for language technology are “pretty much everywhere,” Gandhi says. “You could imagine people being able to interact with software and devices in their native language, their native dialect. You could imagine improving all the voice assistants that we use. You could imagine it being used for translation or interpretation.”

  • “MIT can give you ‘superpowers’”

    Speaking at the virtual MITx MicroMasters Program Joint Completion Celebration last summer, Diogo da Silva Branco Magalhães described watching a Spider-Man movie with his 8-year-old son and realizing that his son thought MIT was a fictional entity that existed only in the Marvel universe.

    “I had to tell him that MIT also exists in the real world, and that some of the programs are available online for everyone,” says da Silva Branco Magalhães, who earned his credential in the MicroMasters in Statistics and Data Science program. “You don’t need to be a superhero to participate in an MIT program, but MIT can give you ‘superpowers.’ In my case, the superpower that I was looking to acquire was a better understanding of the key technologies that are shaping the future of transportation.”

    Part of MIT Open Learning, the MicroMasters programs have drawn in almost 1.4 million learners, spanning nearly every country in the world. More than 7,500 people have earned their credentials across the MicroMasters programs, including: Statistics and Data Science; Supply Chain Management; Data, Economics, and Design of Policy; Principles of Manufacturing; and Finance. 

    Earning his MicroMasters credential not only gave da Silva Branco Magalhães a strong foundation to tackle more complex transportation problems, but it also opened the door to pursuing an accelerated graduate degree via a Northwestern University online program.

    Learners who earn their MicroMasters credentials gain the opportunity to apply to and continue their studies at a pathway school. The MicroMasters in Statistics and Data Science credential can be applied as credit for a master’s program at more than 30 universities, as well as MIT’s PhD Program in Social and Engineering Systems. Da Silva Branco Magalhães, originally from Portugal and now based in Australia, seized this opportunity and enrolled in Northwestern University’s Master’s in Data Science for MIT MicroMasters Credential Holders. 

    The pathway to an enhanced career

    The pathway model launched in 2016 with the MicroMasters in Supply Chain Management. Now, there are over 50 pathway institutions that offer more than 100 different programs for master’s degrees. With pathway institutions located around the world, MicroMasters credential holders can obtain master’s degrees from local residential or virtual programs, at a location convenient to them. They can receive credit for their MicroMasters courses upon acceptance, providing flexibility for online programs and also shortening the time needed on site for residential programs.

    “The pathways expand opportunities for learners, and also help universities attract a broader range of potential students, which can enrich their programs,” says Dana Doyle, senior director for the MicroMasters Program at MIT Open Learning. “This is a tangible way we can achieve our mission of expanding education access.”

    Da Silva Branco Magalhães began the MicroMasters in Statistics and Data Science program in 2020, ultimately completing the program in 2022.

    “After having worked for 20 years in the transportation sector in various roles, I realized I was no longer equipped as a professional to deal with the new technologies that were set to disrupt the mobility sector,” says da Silva Branco Magalhães. “It became clear to me that data and AI were the driving forces behind new products and services such as autonomous vehicles, on-demand transport, or mobility as a service, but I didn’t really understand how data was being used to achieve these outcomes, so I needed to improve my knowledge.”

    Video: July 2023 MicroMasters Program Joint Completion Celebration for SCM, DEDP, PoM, SDS, and Finance (MIT Open Learning)

    The MicroMasters in Statistics and Data Science was developed by the MIT Institute for Data, Systems, and Society and MITx. Credential holders are required to complete four courses equivalent to graduate-level courses in statistics and data science at MIT and a capstone exam comprising four two-hour proctored exams.

    “The content is world-class,” da Silva Branco Magalhães says of the program. “Even the most complex concepts were explained in a very intuitive way. The exercises and the capstone exam are challenging and stimulating — and MIT-level — which makes this credential highly valuable in the market.”

    Da Silva Branco Magalhães also found the discussion forum very useful, and valued conversations with his colleagues, noting that many of these discussions later continued after completion of the program.

    Gaining analysis and leadership skills

    Now in the Northwestern pathway program, da Silva Branco Magalhães finds that the MicroMasters in Statistics and Data Science program prepared him well for this next step in his studies. The nine-course, accelerated, online master’s program is designed to offer the same depth and rigor as Northwestern’s 12-course MS in Data Science program, aiming to help students build essential analysis and leadership skills that can be directly applied in the professional realm. Students learn how to make reliable predictions using traditional statistics and machine learning methods.

    Da Silva Branco Magalhães says he has appreciated the remote nature of the Northwestern program, as he started it in France and then completed the first three courses in Australia. He also values the high number of elective courses, allowing students to design the master’s program according to personal preferences and interests.

    “I want to be prepared to meet the challenges and seize the opportunities that AI and data science technologies will bring to the professional realm,” he says. “With this credential, there are no limits to what you can achieve in the field of data science.”

  • Image recognition accuracy: An unseen challenge confounding today’s AI

    Imagine you are scrolling through the photos on your phone and you come across an image that at first you can’t recognize. It looks like maybe something fuzzy on the couch; could it be a pillow or a coat? After a couple of seconds it clicks — of course! That ball of fluff is your friend’s cat, Mocha. While some of your photos could be understood in an instant, why was this cat photo much more difficult?

    MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) researchers were surprised to find that despite the critical importance of understanding visual data in pivotal areas ranging from health care to transportation to household devices, the notion of an image’s recognition difficulty for humans has been almost entirely ignored. One of the major drivers of progress in deep learning-based AI has been datasets, yet we know little about how data drives progress in large-scale deep learning beyond that bigger is better.

    In real-world applications that require understanding visual data, humans outperform object recognition models despite the fact that models perform well on current datasets, including those explicitly designed to challenge machines with debiased images or distribution shifts. This problem persists, in part, because we have no guidance on the absolute difficulty of an image or dataset. Without controlling for the difficulty of images used for evaluation, it’s hard to objectively assess progress toward human-level performance, to cover the range of human abilities, and to increase the challenge posed by a dataset.

    To fill in this knowledge gap, David Mayo, an MIT PhD student in electrical engineering and computer science and a CSAIL affiliate, delved into the deep world of image datasets, exploring why certain images are more difficult for humans and machines to recognize than others. “Some images inherently take longer to recognize, and it’s essential to understand the brain’s activity during this process and its relation to machine learning models. Perhaps there are complex neural circuits or unique mechanisms missing in our current models, visible only when tested with challenging visual stimuli. This exploration is crucial for comprehending and enhancing machine vision models,” says Mayo, a lead author of a new paper on the work.

    This led to the development of a new metric, the “minimum viewing time” (MVT), which quantifies the difficulty of recognizing an image based on how long a person needs to view it before making a correct identification. Using a subset of ImageNet, a popular dataset in machine learning, and ObjectNet, a dataset designed to test object recognition robustness, the team showed images to participants for varying durations from as short as 17 milliseconds to as long as 10 seconds, and asked them to choose the correct object from a set of 50 options. After over 200,000 image presentation trials, the team found that existing test sets, including ObjectNet, appeared skewed toward easier, shorter MVT images, with the vast majority of benchmark performance derived from images that are easy for humans.
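
    The metric can be illustrated with a toy computation over invented trial records. The image names, durations, and the simple "shortest duration with a correct response" rule below are illustrative only; the actual study aggregates responses across many participants and trials.

```python
# Invented trial records: (image, presentation duration in ms, correct?).
trials = [
    ("img1", 17, False), ("img1", 50, True),   ("img1", 150, True),
    ("img2", 17, True),  ("img2", 50, True),
    ("img3", 150, False), ("img3", 10000, True),
]

def minimum_viewing_time(trials):
    # Simplification: MVT = shortest duration at which the response was
    # correct. The real protocol aggregates over many participants.
    mvt = {}
    for image, duration_ms, correct in sorted(trials, key=lambda t: t[1]):
        if correct and image not in mvt:
            mvt[image] = duration_ms
    return mvt

print(minimum_viewing_time(trials))
# img2 reads as "easy" (17 ms), img1 medium (50 ms), img3 hard (10,000 ms)
```

    Under this framing, a benchmark skewed toward low-MVT images overstates progress, which is exactly the skew the team found in existing test sets.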

    The project identified interesting trends in model performance — particularly in relation to scaling. Larger models showed considerable improvement on simpler images but made less progress on more challenging images. The CLIP models, which incorporate both language and vision, stood out as they moved in the direction of more human-like recognition.

    “Traditionally, object recognition datasets have been skewed towards less-complex images, a practice that has led to an inflation in model performance metrics, not truly reflective of a model’s robustness or its ability to tackle complex visual tasks. Our research reveals that harder images pose a more acute challenge, causing a distribution shift that is often not accounted for in standard evaluations,” says Mayo. “We released image sets tagged by difficulty along with tools to automatically compute MVT, enabling MVT to be added to existing benchmarks and extended to various applications. These include measuring test set difficulty before deploying real-world systems, discovering neural correlates of image difficulty, and advancing object recognition techniques to close the gap between benchmark and real-world performance.”

    “One of my biggest takeaways is that we now have another dimension to evaluate models on. We want models that are able to recognize any image even if — perhaps especially if — it’s hard for a human to recognize. We’re the first to quantify what this would mean. Our results show that not only is this not the case with today’s state of the art, but also that our current evaluation methods don’t have the ability to tell us when it is the case because standard datasets are so skewed toward easy images,” says Jesse Cummings, an MIT graduate student in electrical engineering and computer science and co-first author with Mayo on the paper.

    From ObjectNet to MVT

    A few years ago, the team behind this project identified a significant challenge in the field of machine learning: Models were struggling with out-of-distribution images, or images that were not well-represented in the training data. Enter ObjectNet, a dataset composed of images collected from real-life settings. The dataset helped illuminate the performance gap between machine learning models and human recognition abilities by eliminating spurious correlations present in other benchmarks — for example, between an object and its background. By exposing how model performance on datasets differs from performance in real-world applications, ObjectNet encouraged adoption by many researchers and developers, which subsequently improved model performance.

    Fast forward to the present, and the team has taken their research a step further with MVT. Unlike traditional methods that focus on absolute performance, this new approach assesses how models perform by contrasting their responses to the easiest and hardest images. The study further explored how image difficulty could be explained and tested for similarity to human visual processing. Using metrics like c-score, prediction depth, and adversarial robustness, the team found that harder images are processed differently by networks. “While there are observable trends, such as easier images being more prototypical, a comprehensive semantic explanation of image difficulty continues to elude the scientific community,” says Mayo.

    In the realm of health care, for example, the pertinence of understanding visual complexity becomes even more pronounced. The ability of AI models to interpret medical images, such as X-rays, is subject to the diversity and difficulty distribution of the images. The researchers advocate for a meticulous analysis of difficulty distribution tailored for professionals, ensuring AI systems are evaluated based on expert standards, rather than layperson interpretations.

    Mayo and Cummings are currently looking at neurological underpinnings of visual recognition as well, probing into whether the brain exhibits differential activity when processing easy versus challenging images. The study aims to unravel whether complex images recruit additional brain areas not typically associated with visual processing, hopefully helping demystify how our brains accurately and efficiently decode the visual world.

    Toward human-level performance

    Looking ahead, the researchers are focused not only on exploring ways to enhance AI’s predictive capabilities regarding image difficulty, but also on identifying correlations with viewing-time difficulty in order to generate harder or easier versions of images.

    Despite the study’s significant strides, the researchers acknowledge limitations, particularly the separation of object recognition from visual search tasks: the current methodology concentrates on recognizing objects, leaving out the complexities introduced by cluttered images.

    “This comprehensive approach addresses the long-standing challenge of objectively assessing progress towards human-level performance in object recognition and opens new avenues for understanding and advancing the field,” says Mayo. “With the potential to adapt the Minimum Viewing Time difficulty metric for a variety of visual tasks, this work paves the way for more robust, human-like performance in object recognition, ensuring that models are truly put to the test and are ready for the complexities of real-world visual understanding.”

    “This is a fascinating study of how human perception can be used to identify weaknesses in the ways AI vision models are typically benchmarked, which overestimate AI performance by concentrating on easy images,” says Alan L. Yuille, Bloomberg Distinguished Professor of Cognitive Science and Computer Science at Johns Hopkins University, who was not involved in the paper. “This will help develop more realistic benchmarks, leading not only to improvements to AI but also to fairer comparisons between AI and human perception.”

    “It’s widely claimed that computer vision systems now outperform humans, and on some benchmark datasets, that’s true,” says Anthropic technical staff member Simon Kornblith PhD ’17, who was also not involved in this work. “However, a lot of the difficulty in those benchmarks comes from the obscurity of what’s in the images; the average person just doesn’t know enough to classify different breeds of dogs. This work instead focuses on images that people can only get right if given enough time. These images are generally much harder for computer vision systems, but the best systems are only a bit worse than humans.”

    Mayo, Cummings, and Xinyu Lin MEng ’22 wrote the paper alongside CSAIL Research Scientist Andrei Barbu, CSAIL Principal Research Scientist Boris Katz, and MIT-IBM Watson AI Lab Principal Researcher Dan Gutfreund. The researchers are affiliates of the MIT Center for Brains, Minds, and Machines.

    The team is presenting their work at the 2023 Conference on Neural Information Processing Systems (NeurIPS).

  • in

    Three MIT students selected as inaugural MIT-Pillar AI Collective Fellows

    MIT-Pillar AI Collective has announced three inaugural fellows for the fall 2023 semester. With support from the program, the graduate students, who are in their final year of a master’s or PhD program, will conduct research in the areas of artificial intelligence, machine learning, and data science with the aim of commercializing their innovations.

    Launched by MIT’s School of Engineering and Pillar VC in 2022, the MIT-Pillar AI Collective supports faculty, postdocs, and students conducting research on AI, machine learning, and data science. Supported by a gift from Pillar VC and administered by the MIT Deshpande Center for Technological Innovation, the mission of the program is to advance research toward commercialization.

    The fall 2023 MIT-Pillar AI Collective Fellows are:

    Alexander Andonian SM ’21 is a PhD candidate in electrical engineering and computer science whose research interests lie in computer vision, deep learning, and artificial intelligence. More specifically, he is focused on building a generalist, multimodal AI scientist driven by generative vision-language model agents capable of proposing scientific hypotheses, running computational experiments, evaluating supporting evidence, and verifying conclusions in the same way as a human researcher or reviewer. Such an agent could be trained to optimally distill and communicate its findings for human consumption and comprehension. Andonian’s work holds the promise of creating a concrete foundation for rigorously building and holistically testing the next-generation autonomous AI agent for science. In addition to his research, Andonian is the CEO and co-founder of Reelize, a startup that offers a generative AI video tool that effortlessly turns long videos into short clips — and originated from his business coursework and was supported by MIT Sandbox. Andonian is also a founding AI researcher at Poly AI, an early-stage YC-backed startup building AI design tools. Andonian earned an SM from MIT and a BS in neuroscience, physics, and mathematics from Bates College.

    Daniel Magley is a PhD candidate in the Harvard-MIT Program in Health Sciences and Technology who is passionate about making a healthy, fully functioning mind and body a reality for all. His leading-edge research is focused on developing a swallowable wireless thermal imaging capsule that could be used in treating and monitoring inflammatory bowel diseases and their manifestations, such as Crohn’s disease. Providing increased sensitivity and eliminating the need for bowel preparation, the capsule has the potential to vastly improve treatment efficacy and overall patient experience in routine monitoring. The capsule has completed animal studies and is entering human studies at Mass General Brigham, where Magley leads a team of engineers in the hospital’s largest translational research lab, the Tearney Lab. Following the human pilot studies, the largest technological and regulatory risks will be cleared for translation. Magley will then begin focusing on a multi-site study to get the device into clinics, with the promise of benefiting patients across the country. Magley earned a BS in electrical engineering from Caltech.

    Madhumitha Ravichandran is a PhD candidate interested in advancing heat transfer and surface engineering techniques to enhance the safety and performance of nuclear energy systems and reduce their environmental impacts. She seeks to transform the development of radiation-hardened (rad-hard) sensors, which could potentially withstand and function amidst radiation levels that would render conventional sensors useless. By integrating explainable AI with high-throughput autonomous experimentation, she aims to rapidly iterate designs, test under varied conditions, and ensure that the final product is both robust and transparent in its operations. Her work in this space could shift the paradigm in rad-hard sensor development, addressing a glaring void in the market and redefining standards, ensuring that nuclear and space applications are safer, more efficient, and at the cutting edge of technological progress. Ravichandran earned a BTech in mechanical engineering from SASTRA University, India.

  • in

    MIT campus goals in food, water, waste support decarbonization efforts

    With the launch of Fast Forward: MIT’s Climate Action Plan for the Decade, the Institute committed to decarbonize campus operations by 2050 — an effort that touches on every corner of MIT, from building energy use to procurement and waste. At the operational level, the plan called for establishing a set of quantitative climate impact goals in the areas of food, water, and waste to inform the campus decarbonization roadmap. After an 18-month process that engaged staff, faculty, and researchers, the goals — as well as high-level strategies to reach them — were finalized in spring 2023.

    The goal development process was managed by a team representing the areas of campus food, water, and waste: Director of Campus Dining Mark Hayes and Senior Sustainability Project Manager Susy Jones co-led the food effort, Director of Utilities Janine Helwig the water effort, and Assistant Director of Campus Services Marty O’Brien and Assistant Director of Sustainability Brian Goldberg the waste effort. The group worked together to set goals that leverage ongoing campus sustainability efforts. “It was important for us to collaborate in order to identify the strategies and goals,” explains Goldberg. “It allowed us to set goals that not only align, but build off of one another, enabling us to work more strategically.”

    In setting the goals, each team relied on data, community insight, and best practices. The co-leads are sharing their process to help others at the Institute understand the roles they can play in supporting these objectives.  

    Sustainable food systems

    The primary food impact goal aims for a 25 percent overall reduction in the greenhouse gas footprint of food purchases starting with academic year 2021-22 as a baseline, acknowledging that beef purchases make up a significant share of those emissions. Additionally, the co-leads established a goal to recover all edible food waste in dining hall and retail operations where feasible, as that reduces MIT’s waste impact and acknowledges that redistributing surplus food to feed people is critically important.

    The work to develop the food goal was uniquely challenging, as MIT works with nine different vendors — including main vendor Bon Appetit — to provide food on campus, with many vendors having their own sustainability targets. The goal-setting process began by understanding vendor strategies and leveraging their climate commitments. “A lot of this work is not about reinventing the wheel, but about gathering data,” says Hayes. “We are trying to connect the dots of what is currently happening on campus and to better understand food consumption and waste, ensuring that we are reaching these targets.”

    In identifying ways to reach and exceed these targets, Jones conducted listening sessions around campus, balancing input with industry trends, best-available science, and institutional insight from Hayes. “Before we set these goals and possible strategies, we wanted to get a grounding from the community and understand what would work on our campus,” says Jones, who recently began a joint role that bridges the Office of Sustainability and MIT Dining in part to support the goal work.

    By establishing the 25 percent reduction in the greenhouse gas footprint of food purchases across MIT residential dining menus, Jones and Hayes saw goal-setting as an opportunity to add more sustainable, local, and culturally diverse foods to the menu. “If beef is the most carbon-intensive food on the menu, this enables us to explore and expand so many recipes and menus from around the globe that incorporate alternatives,” Jones says.

    Strategies to reach the climate food goals focus on local suppliers, more plant-forward meals, food recovery, and food security. In 2019, MIT was a co-recipient of the New England Food Vision Prize provided by the Kendall Foundation to increase the amount of local food served on campus in partnership with CommonWealth Kitchen in Dorchester. While implementation of that program was put on pause due to the pandemic, work resumed this year. Currently, the prize is funding a collaborative effort to introduce falafel-like, locally manufactured fritters made from Maine-grown yellow field peas to dining halls at MIT and other university campuses, exemplifying the efforts to meet the climate impact goal, serve as a model for others, and provide demonstrable ways of strengthening the regional food system.

    “This sort of innovation is where we’re a leader,” says Hayes. “In addition to the Kendall Prize, we are looking to focus on food justice, growing our BIPOC [Black, Indigenous, and people of color] vendors, and exploring ideas such as local hydroponic and container vegetable growing companies, and how to scale these types of products into institutional settings.”

    Reduce and reuse for campus water

    The 2030 water impact goal aims to achieve a 10 percent reduction in water use compared to the 2019 baseline and to update the water reduction goal to align with the new metering program and proposed campus decarbonization plans as they evolve.

    When people think of campus water use, they may think of sprinklers, lab sinks, or personal use like drinking water and showers. And while those uses make up around 60 percent of campus water use, the Central Utilities Plant (CUP) accounts for the remaining 40 percent. “The CUP generates electricity and delivers heating and cooling to the campus through steam and chilled water — all using what amounts to a large percentage of water use on campus,” says Helwig. As such, the water goal focuses as much on reuse as reduction, with one approach being to expand water capture from campus cooling towers for reuse in CUP operations. “People often think of water use and energy separately, but they often go hand-in-hand,” Helwig explains.

    Data also play a central part in the water impact goal — that’s why a new metering program is called for in the implementation strategy. “We have access to a lot of data at MIT, but in reviewing the water data to inform the goal, we learned that it wasn’t quite where we needed it,” explains Helwig. “By ensuring we have the right meter and submeters set up, we can better set boundaries to understand where there is the potential to reduce water use.” Irrigation on campus is one such target with plans to soon release new campuswide landscaping standards that minimize water use.

    Reducing campus waste

    The waste impact goal aims to reduce campus trash by 30 percent compared to 2019 baseline totals. Additionally, the goal outlines efforts to improve the accuracy of indicators tracking campus waste; reduce the percentage of food scraps and of recyclables in the trash in select locations; reduce the percentage of trash and recycling made up of single-use items; and increase the percentage of residence halls and other campus spaces where food is consumed at scale that implement an MIT food scrap collection program.

    In setting the waste goals, Goldberg and O’Brien studied available campus waste data from past waste audits, pilot programs, and MIT’s waste haulers. They factored in state and city policies that regulate things like the type and amount of waste large institutions can transport. “Looking at all the data, it became clear that a 30 percent trash reduction goal will make a tremendous impact on campus and help us drive toward the goal of completely designing out waste from campus,” Goldberg says. The strategies to reach the goals include reducing the amount of materials that come into campus, increasing recycling rates, and expanding food waste collection on campus.

    While reducing the waste created from material sources is outlined in the goals, food waste is a special focus on campus because it comprises approximately 40 percent of campus trash, it can be easily collected separately from trash and recycled locally, and decomposing food waste is one of the largest sources of greenhouse gas emissions found in landfills. “There are a lot of greenhouse gas emissions that result from production, distribution, transportation, packaging, processing, and disposal of food,” explains Goldberg. “When food travels to campus, is removed from campus as waste, and then breaks down in a landfill, there are emissions every step of the way.”

    To reduce food waste, Goldberg and O’Brien outlined strategies that include working with campus suppliers to identify ordering volumes and practices that limit waste. Once materials are on campus, another strategy kicks in, with a new third stream of waste collection — food waste — joining recycling and trash. By collecting the food waste separately, in bins that are currently rolling out across campus, the waste can be reprocessed into fertilizer, compost, and/or energy without the byproduct of greenhouse gases. The waste impact goal also relies on behavioral changes, with educational materials helping the community reduce waste and keep the reprocessing streams free of contamination.

    Tracking progress

    As work toward the goals advances, community members can monitor progress in the Sustainability DataPool Material Matters and Campus Water Use dashboards, or explore the Impact Goals in depth.

    “From food to water to waste, everyone on campus interacts with these systems and can grapple with their impact either from a material they need to dispose of, to water they’re using in a lab, or leftover food from an event,” says Goldberg. “By setting these goals we as an institution can lead the way and help our campus community understand how they can play a role, plug in, and make an impact.”

  • in

    Automated system teaches users when to collaborate with an AI assistant

    Artificial intelligence models that pick out patterns in images can often do so better than human eyes — but not always. If a radiologist is using an AI model to help her determine whether a patient’s X-rays show signs of pneumonia, when should she trust the model’s advice and when should she ignore it?

    A customized onboarding process could help this radiologist answer that question, according to researchers at MIT and the MIT-IBM Watson AI Lab. They designed a system that teaches a user when to collaborate with an AI assistant.

    In this case, the training method might find situations where the radiologist trusts the model’s advice — except she shouldn’t because the model is wrong. The system automatically learns rules for how she should collaborate with the AI, and describes them with natural language.

    During onboarding, the radiologist practices collaborating with the AI using training exercises based on these rules, receiving feedback about her performance and the AI’s performance.

    The researchers found that this onboarding procedure led to about a 5 percent improvement in accuracy when humans and AI collaborated on an image prediction task. Their results also show that just telling the user when to trust the AI, without training, led to worse performance.

    Importantly, the researchers’ system is fully automated, so it learns to create the onboarding process based on data from the human and AI performing a specific task. It can also adapt to different tasks, so it can be scaled up and used in many situations where humans and AI models work together, such as in social media content moderation, writing, and programming.

    “So often, people are given these AI tools to use without any training to help them figure out when it is going to be helpful. That’s not what we do with nearly every other tool that people use — there is almost always some kind of tutorial that comes with it. But for AI, this seems to be missing. We are trying to tackle this problem from a methodological and behavioral perspective,” says Hussein Mozannar, a graduate student in the Social and Engineering Systems doctoral program within the Institute for Data, Systems, and Society (IDSS) and lead author of a paper about this training process.

    The researchers envision that such onboarding will be a crucial part of training for medical professionals.

    “One could imagine, for example, that doctors making treatment decisions with the help of AI will first have to do training similar to what we propose. We may need to rethink everything from continuing medical education to the way clinical trials are designed,” says senior author David Sontag, a professor of EECS, a member of the MIT-IBM Watson AI Lab and the MIT Jameel Clinic, and the leader of the Clinical Machine Learning Group of the Computer Science and Artificial Intelligence Laboratory (CSAIL).

    Mozannar, who is also a researcher with the Clinical Machine Learning Group, is joined on the paper by Jimin J. Lee, an undergraduate in electrical engineering and computer science; Dennis Wei, a senior research scientist at IBM Research; and Prasanna Sattigeri and Subhro Das, research staff members at the MIT-IBM Watson AI Lab. The paper will be presented at the Conference on Neural Information Processing Systems.

    Training that evolves

    Existing onboarding methods for human-AI collaboration are often composed of training materials produced by human experts for specific use cases, making them difficult to scale up. Some related techniques rely on explanations, where the AI tells the user its confidence in each decision, but research has shown that explanations are rarely helpful, Mozannar says.

    “The AI model’s capabilities are constantly evolving, so the use cases where the human could potentially benefit from it are growing over time. At the same time, the user’s perception of the model continues changing. So, we need a training procedure that also evolves over time,” he adds.

    To accomplish this, their onboarding method is automatically learned from data. It is built from a dataset that contains many instances of a task, such as detecting the presence of a traffic light from a blurry image.

    The system’s first step is to collect data on the human and AI performing this task. In this case, the human would try to predict, with the help of AI, whether blurry images contain traffic lights.

    The system embeds these data points onto a latent space, which is a representation of data in which similar data points are closer together. It uses an algorithm to discover regions of this space where the human collaborates incorrectly with the AI. These regions capture instances where the human trusted the AI’s prediction but the prediction was wrong, and vice versa.

    Perhaps the human mistakenly trusts the AI when images show a highway at night.
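    A minimal sketch of this region-discovery stage, using a plain k-means over a synthetic two-dimensional stand-in for the latent space. All names and data here are hypothetical — this illustrates the idea, not the authors’ implementation:

```python
import numpy as np

def discover_error_regions(embeddings, human_followed_ai, ai_correct,
                           n_regions=5, error_threshold=0.5, seed=0):
    """Cluster a latent embedding with plain k-means, then flag clusters
    where the human's reliance decision was usually wrong."""
    rng = np.random.default_rng(seed)
    centers = embeddings[rng.choice(len(embeddings), n_regions, replace=False)]
    for _ in range(20):  # Lloyd's iterations; plenty for this toy data
        dists = np.linalg.norm(embeddings[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        for k in range(n_regions):
            if (labels == k).any():
                centers[k] = embeddings[labels == k].mean(axis=0)
    # A collaboration error: following a wrong AI, or overriding a correct one.
    collab_error = human_followed_ai != ai_correct
    flagged = [k for k in range(n_regions)
               if collab_error[labels == k].mean() > error_threshold]
    return labels, flagged

# Synthetic "latent space": one cloud of ordinary scenes, one cloud standing
# in for the night-highway images, where the AI is wrong but still trusted.
rng = np.random.default_rng(1)
ordinary = rng.normal(0.0, 0.3, size=(200, 2))
night = rng.normal(5.0, 0.3, size=(200, 2))
X = np.vstack([ordinary, night])
followed = np.ones(400, dtype=bool)                  # user always trusts the AI
ai_ok = np.concatenate([np.ones(200, dtype=bool), np.zeros(200, dtype=bool)])

labels, flagged = discover_error_regions(X, followed, ai_ok, n_regions=2, seed=1)
print("flagged region(s):", flagged)
```

    On this toy data, the cluster containing the night-highway points is the one flagged, since the user followed the AI there even though the AI was wrong.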

    After discovering the regions, a second algorithm utilizes a large language model to describe each region as a rule, using natural language. The algorithm iteratively fine-tunes that rule by finding contrasting examples. It might describe this region as “ignore AI when it is a highway during the night.”

    These rules are used to build training exercises. The onboarding system shows an example to the human, in this case a blurry highway scene at night, as well as the AI’s prediction, and asks the user if the image shows traffic lights. The user can answer yes, no, or use the AI’s prediction.

    If the human is wrong, they are shown the correct answer and performance statistics for the human and AI on these instances of the task. The system does this for each region, and at the end of the training process, repeats the exercises the human got wrong.
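    The exercise-and-repeat loop just described can be sketched in a few lines. Everything here — function names, rule text, and example data — is hypothetical scaffolding, not the authors’ implementation:

```python
def run_onboarding(regions, ask_user):
    """regions: list of (rule_text, [(example, ai_prediction, truth), ...]).
    ask_user(example, ai_prediction) -> the user's answer for that example."""
    missed = []
    for rule, examples in regions:
        for example, ai_pred, truth in examples:
            if ask_user(example, ai_pred) != truth:
                # A real system would show feedback here (the correct answer
                # plus human/AI performance stats); we just queue a repeat.
                missed.append((rule, example, ai_pred, truth))
    for rule, example, ai_pred, truth in missed:  # final review pass
        ask_user(example, ai_pred)
    return missed

# A scripted "user" who always echoes the AI, so the night-highway exercise
# (where the AI is wrong) is the one they miss.
regions = [
    ("ignore AI when it is a highway during the night",
     [("night_highway.jpg", "no traffic light", "traffic light")]),
    ("trust AI on clear daytime scenes",
     [("day_street.jpg", "traffic light", "traffic light")]),
]
missed = run_onboarding(regions, lambda ex, ai_pred: ai_pred)
print(len(missed))  # prints 1: only the night-highway exercise was wrong
```

    A real onboarding session would swap the scripted lambda for an interactive prompt and show the per-region performance statistics after each mistake.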

    “After that, the human has learned something about these regions that we hope they will take away in the future to make more accurate predictions,” Mozannar says.

    Onboarding boosts accuracy

    The researchers tested this system with users on two tasks — detecting traffic lights in blurry images and answering multiple choice questions from many domains (such as biology, philosophy, and computer science).

    They first showed users a card with information about the AI model, how it was trained, and a breakdown of its performance on broad categories. Users were split into five groups: Some were only shown the card, some went through the researchers’ onboarding procedure, some went through a baseline onboarding procedure, some went through the researchers’ onboarding procedure and were given recommendations of when they should or should not trust the AI, and others were only given the recommendations.

    Only the researchers’ onboarding procedure without recommendations improved users’ accuracy significantly, boosting their performance on the traffic light prediction task by about 5 percent without slowing them down. However, onboarding was not as effective for the question-answering task. The researchers believe this is because the AI model, ChatGPT, provided explanations with each answer that conveyed whether it should be trusted.

    But providing recommendations without onboarding had the opposite effect — users not only performed worse, they took more time to make predictions.

    “When you only give someone recommendations, it seems like they get confused and don’t know what to do. It derails their process. People also don’t like being told what to do, so that is a factor as well,” Mozannar says.

    Providing recommendations alone could harm the user if those recommendations are wrong, he adds. With onboarding, on the other hand, the biggest limitation is the amount of available data. If there aren’t enough data, the onboarding stage won’t be as effective, he says.

    In the future, he and his collaborators want to conduct larger studies to evaluate the short- and long-term effects of onboarding. They also want to leverage unlabeled data for the onboarding process, and find methods to effectively reduce the number of regions without omitting important examples.

    “People are adopting AI systems willy-nilly, and indeed AI offers great potential, but these AI agents still sometimes make mistakes. Thus, it’s crucial for AI developers to devise methods that help humans know when it’s safe to rely on the AI’s suggestions,” says Dan Weld, professor emeritus at the Paul G. Allen School of Computer Science and Engineering at the University of Washington, who was not involved with this research. “Mozannar et al. have created an innovative method for identifying situations where the AI is trustworthy, and (importantly) for describing them to people in a way that leads to better human-AI team interactions.”

    This work is funded, in part, by the MIT-IBM Watson AI Lab.