More stories

  • Learning the language of molecules to predict their properties

    Discovering new materials and drugs typically involves a manual, trial-and-error process that can take decades and cost millions of dollars. To streamline this process, scientists often use machine learning to predict molecular properties and narrow down the molecules they need to synthesize and test in the lab.

    Researchers from MIT and the MIT-IBM Watson AI Lab have developed a new, unified framework that can simultaneously predict molecular properties and generate new molecules much more efficiently than popular deep-learning approaches.

    To teach a machine-learning model to predict a molecule’s biological or mechanical properties, researchers must show it millions of labeled molecular structures — a process known as training. Due to the expense of discovering molecules and the challenges of hand-labeling millions of structures, large training datasets are often hard to come by, which limits the effectiveness of machine-learning approaches.

    By contrast, the system created by the MIT researchers can effectively predict molecular properties using only a small amount of data. Their system has an underlying understanding of the rules that dictate how building blocks combine to produce valid molecules. These rules capture the similarities between molecular structures, which helps the system generate new molecules and predict their properties in a data-efficient manner.

    This method outperformed other machine-learning approaches on both small and large datasets, and was able to accurately predict molecular properties and generate viable molecules when given a dataset with fewer than 100 samples.

    “Our goal with this project is to use some data-driven methods to speed up the discovery of new molecules, so you can train a model to do the prediction without all of these cost-heavy experiments,” says lead author Minghao Guo, a computer science and electrical engineering (EECS) graduate student.

    Guo’s co-authors include MIT-IBM Watson AI Lab research staff members Veronika Thost, Payel Das, and Jie Chen; recent MIT graduates Samuel Song ’23 and Adithya Balachandran ’23; and senior author Wojciech Matusik, a professor of electrical engineering and computer science and a member of the MIT-IBM Watson AI Lab, who leads the Computational Design and Fabrication Group within the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). The research will be presented at the International Conference on Machine Learning.

    Learning the language of molecules

    To achieve the best results with machine-learning models, scientists need training datasets with millions of molecules that have similar properties to those they hope to discover. In reality, these domain-specific datasets are usually very small. So, researchers use models that have been pretrained on large datasets of general molecules, which they apply to a much smaller, targeted dataset. However, because these models haven’t acquired much domain-specific knowledge, they tend to perform poorly.

    The MIT team took a different approach. They created a machine-learning system that automatically learns the “language” of molecules — what is known as a molecular grammar — using only a small, domain-specific dataset. It uses this grammar to construct viable molecules and predict their properties.

    In language theory, one generates words, sentences, or paragraphs based on a set of grammar rules. You can think of a molecular grammar the same way. It is a set of production rules that dictate how to generate molecules or polymers by combining atoms and substructures.
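
    To make the idea concrete, here is a toy grammar in Python. The production rules below are illustrative stand-ins written for this article, not the rules the MIT system learns, and the fragments are SMILES-like shorthand.

    import random

    # Toy molecular grammar: each nonterminal (in angle brackets) maps to
    # candidate expansions built from SMILES-like fragments.
    RULES = {
        "<mol>": ["<chain>", "<chain><ring>"],
        "<chain>": ["C", "C<chain>", "CO", "CN"],
        "<ring>": ["c1ccccc1"],  # a benzene ring
    }

    def expand(symbol):
        """Recursively rewrite a symbol until only terminal fragments remain."""
        if symbol not in RULES:
            return symbol  # terminal fragment, emit as-is
        production = random.choice(RULES[symbol])
        out, i = [], 0
        while i < len(production):
            if production[i] == "<":  # expand an embedded nonterminal
                j = production.index(">", i) + 1
                out.append(expand(production[i:j]))
                i = j
            else:
                out.append(production[i])
                i += 1
        return "".join(out)

    print(expand("<mol>"))  # e.g., "CCOc1ccccc1"

    Because every string the grammar produces follows the same rules, structurally related molecules share derivations, which is the property the researchers exploit for data-efficient prediction.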

    Just like a language grammar, which can generate a plethora of sentences using the same rules, one molecular grammar can represent a vast number of molecules. Molecules with similar structures use the same grammar production rules, and the system learns to understand these similarities.

    Since structurally similar molecules often have similar properties, the system uses its underlying knowledge of molecular similarity to predict properties of new molecules more efficiently. 

    “Once we have this grammar as a representation for all the different molecules, we can use it to boost the process of property prediction,” Guo says.

    The system learns the production rules for a molecular grammar using reinforcement learning — a trial-and-error process where the model is rewarded for behavior that gets it closer to achieving a goal.

    But because there could be billions of ways to combine atoms and substructures, the process to learn grammar production rules would be too computationally expensive for anything but the tiniest dataset.

    The researchers decoupled the molecular grammar into two parts. The first part, called a metagrammar, is a general, widely applicable grammar they design manually and give the system at the outset. Then it only needs to learn a much smaller, molecule-specific grammar from the domain dataset. This hierarchical approach speeds up the learning process.

    Big results, small datasets

    In experiments, the researchers’ new system simultaneously generated viable molecules and polymers, and predicted their properties more accurately than several popular machine-learning approaches, even when the domain-specific datasets had only a few hundred samples. Some other methods also required a costly pretraining step that the new system avoids.

    The technique was especially effective at predicting physical properties of polymers, such as the glass transition temperature, the temperature at which a material transitions from a hard, glassy state to a soft, rubbery one. Obtaining this information manually is often extremely costly because the experiments require extremely high temperatures and pressures.

    To push their approach further, the researchers cut one training set down by more than half — to just 94 samples. Their model still achieved results that were on par with methods trained using the entire dataset.

    “This grammar-based representation is very powerful. And because the grammar itself is a very general representation, it can be deployed to different kinds of graph-form data. We are trying to identify other applications beyond chemistry or material science,” Guo says.

    In the future, they also want to extend their current molecular grammar to include the 3D geometry of molecules and polymers, which is key to understanding the interactions between polymer chains. They are also developing an interface that would show a user the learned grammar production rules and solicit feedback to correct rules that may be wrong, boosting the accuracy of the system.

    This work is funded, in part, by the MIT-IBM Watson AI Lab and its member company, Evonik.

  • Educating national security leaders on artificial intelligence

    Understanding artificial intelligence and how it relates to matters of national security has become a top priority for military and government leaders in recent years. A new three-day custom program entitled “Artificial Intelligence for National Security Leaders” — AI4NSL for short — aims to educate leaders who may not have a technical background on the basics of AI, machine learning, and data science, and how these topics intersect with national security.

    “National security fundamentally is about two things: getting information out of sensors and processing that information. These are two things that AI excels at. The AI4NSL class engages national security leaders in understanding how to navigate the benefits and opportunities that AI affords, while also understanding its potential negative consequences,” says Aleksander Madry, the Cadence Design Systems Professor at MIT and one of the course’s faculty directors.

    Organized jointly by MIT’s School of Engineering, MIT Stephen A. Schwarzman College of Computing, and MIT Sloan Executive Education, AI4NSL wrapped up its fifth cohort in April. The course brings leaders from every branch of the U.S. military, as well as some foreign military leaders from NATO, to MIT’s campus, where they learn from faculty experts on a variety of technical topics in AI, as well as how to navigate organizational challenges that arise in this context.

    Video: AI for National Security Leaders | MIT Sloan Executive Education

    “We set out to put together a real executive education class on AI for senior national security leaders,” says Madry. “For three days, we are teaching these leaders not only an understanding of what this technology is about, but also how to best adopt these technologies organizationally.”

    The original idea sprang from discussions with senior U.S. Air Force (USAF) leaders and members of the Department of the Air Force (DAF)-MIT AI Accelerator in 2019.

    According to Major John Radovan, deputy director of the DAF-MIT AI Accelerator, in recent years it has become clear that national security leaders needed a deeper understanding of AI technologies and their implications for security, warfare, and military operations. In February 2020, Radovan and his team at the DAF-MIT AI Accelerator started building a custom course to help guide senior leaders in their discussions about AI.

    “This is the only course out there that is focused on AI specifically for national security,” says Radovan. “We didn’t want to make this course just for members of the Air Force — it had to be for all branches of the military. If we are going to operate as a joint force, we need to have the same vocabulary and the same mental models about how to use this technology.”

    After a pilot program in collaboration with MIT Open Learning and the MIT Computer Science and Artificial Intelligence Laboratory, Radovan connected with faculty at the School of Engineering and MIT Schwarzman College of Computing, including Madry, to refine the course’s curriculum. They enlisted colleagues and faculty at MIT Sloan Executive Education to help tailor the content to its audience. The result of this cross-school collaboration was a new iteration of AI4NSL, which launched last summer.

    In addition to providing participants with a basic overview of AI technologies, the course places a heavy emphasis on organizational planning and implementation.

    “What we wanted to do was to create smart consumers at the command level. The idea was to present this content at a higher level so that people could understand the key frameworks, which will guide their thinking around the use and adoption of this material,” says Roberto Fernandez, the William F. Pounds Professor of Management, one of the AI4NSL instructors, and the course’s other faculty director.

    During the three-day course, instructors from MIT’s Department of Electrical Engineering and Computer Science, Department of Aeronautics and Astronautics, and MIT Sloan School of Management cover a wide range of topics.

    The first half of the course starts with a basic overview of concepts including AI, machine learning, deep learning, and the role of data. Instructors also present the problems and pitfalls of using AI technologies, including the potential for adversarial manipulation of machine learning systems, privacy challenges, and ethical considerations.

    In the middle of day two, the course shifts to examine the organizational perspective, encouraging participants to consider how to effectively implement these technologies in their own units.

    “What’s exciting about this course is the way it is formatted first in terms of understanding AI, machine learning, what data is, and how data feeds AI, and then giving participants a framework to go back to their units and build a strategy to make this work,” says Colonel Michelle Goyette, director of the Army Strategic Education Program at the Army War College and an AI4NSL participant.

    Throughout the course, breakout sessions provide participants with an opportunity to collaborate and problem-solve on an exercise together. These breakout sessions build upon one another as the participants are exposed to new concepts related to AI.

    “The breakout sessions have been distinctive because they force you to establish relationships with people you don’t know, so the networking aspect is key. Any time you can do more than receive information and actually get into the application of what you were taught, that really enhances the learning environment,” says Lieutenant General Brian Robinson, the commander of Air Education and Training Command for the USAF and an AI4NSL participant.

    This spirit of teamwork, collaboration, and bringing together individuals from different backgrounds permeates the three-day program. The AI4NSL classroom not only brings together national security leaders from all branches of the military, it also brings together faculty from three schools across MIT.

    “One of the things that’s most exciting about this program is the kind of overarching theme of collaboration,” says Rob Dietel, director of executive programs at the MIT Sloan School of Management. “We’re not drawing just from the MIT Sloan faculty, we’re bringing in top faculty from the Schwarzman College of Computing and the School of Engineering. It’s wonderful to be able to tap into those resources that are here on MIT’s campus to really make it the most impactful program that we can.”

    As new developments in generative AI, such as ChatGPT, and machine learning alter the national security landscape, the organizers at AI4NSL will continue to update the curriculum to ensure it is preparing leaders to understand the implications for their respective units.

    “The rate of change for AI and national security is so fast right now that it’s challenging to keep up, and that’s part of the reason we’ve designed this program. We’ve brought in some of our world-class faculty from different parts of MIT to really address the changing dynamic of AI,” adds Dietel.

  • Researchers teach an AI to write better chart captions

    Chart captions that explain complex trends and patterns are important for improving a reader’s ability to comprehend and retain the data being presented. And for people with visual disabilities, the information in a caption often provides their only means of understanding the chart.

    But writing effective, detailed captions is a labor-intensive process. While autocaptioning techniques can alleviate this burden, they often struggle to describe cognitive features that provide additional context.

    To help people author high-quality chart captions, MIT researchers have developed a dataset to improve automatic captioning systems. Using this tool, researchers could teach a machine-learning model to vary the level of complexity and type of content included in a chart caption based on the needs of users.

    The MIT researchers found that machine-learning models trained for autocaptioning with their dataset consistently generated precise, semantically rich captions that described data trends and complex patterns. Quantitative and qualitative analyses revealed that their models captioned charts more effectively than other autocaptioning systems.

    The team’s goal is to provide the dataset, called VisText, as a tool researchers can use as they work on the thorny problem of chart autocaptioning. These automatic systems could help provide captions for uncaptioned online charts and improve accessibility for people with visual disabilities, says co-lead author Angie Boggust, a graduate student in electrical engineering and computer science at MIT and member of the Visualization Group in the Computer Science and Artificial Intelligence Laboratory (CSAIL).

    “We’ve tried to embed a lot of human values into our dataset so that when we and other researchers are building automatic chart-captioning systems, we don’t end up with models that aren’t what people want or need,” she says.

    Boggust is joined on the paper by co-lead author and fellow graduate student Benny J. Tang and senior author Arvind Satyanarayan, associate professor of computer science at MIT who leads the Visualization Group in CSAIL. The research will be presented at the Annual Meeting of the Association for Computational Linguistics.

    Human-centered analysis

    The researchers were inspired to develop VisText from prior work in the Visualization Group that explored what makes a good chart caption. In that study, researchers found that sighted users and blind or low-vision users had different preferences for the complexity of semantic content in a caption. 

    The group wanted to bring that human-centered analysis into autocaptioning research. To do that, they developed VisText, a dataset of charts and associated captions that could be used to train machine-learning models to generate accurate, semantically rich, customizable captions.

    Developing effective autocaptioning systems is no easy task. Existing machine-learning methods often try to caption charts the way they would caption an image, but people interpret natural images differently from how they read charts. Other techniques skip the visual content entirely and caption a chart using its underlying data table. However, such data tables are often unavailable after charts are published.

    Given the shortfalls of using images and data tables, VisText also represents charts as scene graphs. Scene graphs, which can be extracted from a chart image, contain all the chart data but also include additional image context.

    “A scene graph is like the best of both worlds — it contains almost all the information present in an image while being easier to extract from images than data tables. As it’s also text, we can leverage advances in modern large language models for captioning,” Tang explains.

    They compiled a dataset that contains more than 12,000 charts — each represented as a data table, image, and scene graph — as well as associated captions. Each chart has two separate captions: a low-level caption that describes the chart’s construction (like its axis ranges) and a higher-level caption that describes statistics, relationships in the data, and complex trends.
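
    As a sketch, a single entry might look like the following; the field names are hypothetical, not the dataset’s actual schema.

    # Hypothetical VisText-style record; field names are illustrative only.
    record = {
        "chart_id": "bar_0421",
        "image_path": "charts/bar_0421.png",
        "data_table": [
            {"year": 2018, "sales": 1.2},
            {"year": 2019, "sales": 1.8},
            {"year": 2020, "sales": 0.9},
        ],
        # Textual scene graph extracted from the chart image.
        "scene_graph": "chart { marks: bar*3; xaxis: year; yaxis: sales }",
        # Low-level caption: the chart's construction (axes, ranges).
        "caption_low": "A bar chart of sales by year; the y-axis ranges from 0 to 2.",
        # Higher-level caption: statistics, relationships, trends.
        "caption_high": "Sales rose in 2019, then fell in 2020 to the lowest value in the period.",
    }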

    The researchers generated low-level captions using an automated system and crowdsourced higher-level captions from human workers.

    “Our captions were informed by two key pieces of prior research: existing guidelines on accessible descriptions of visual media and a conceptual model from our group for categorizing semantic content. This ensured that our captions featured important low-level chart elements like axes, scales, and units for readers with visual disabilities, while retaining human variability in how captions can be written,” says Tang.

    Translating charts

    Once they had gathered chart images and captions, the researchers used VisText to train five machine-learning models for autocaptioning. They wanted to see how each representation — image, data table, and scene graph — and combinations of the representations affected the quality of the caption.

    “You can think about a chart captioning model like a model for language translation. But instead of saying, translate this German text to English, we are saying translate this ‘chart language’ to English,” Boggust says.

    Their results showed that models trained with scene graphs performed as well or better than those trained using data tables. Since scene graphs are easier to extract from existing charts, the researchers argue that they might be a more useful representation.

    They also trained models with low-level and high-level captions separately. This technique, known as semantic prefix tuning, enabled them to teach the model to vary the complexity of the caption’s content.
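
    A minimal sketch of that idea, assuming a text-to-text captioning model, is shown below; the prefix strings are illustrative, not the paper’s exact tokens.

    # Semantic prefix tuning, sketched: prepend a control prefix naming the
    # desired caption level, then fine-tune one model on both kinds of pairs.
    def make_example(scene_graph, caption, level):
        assert level in ("L1", "L2L3")  # construction vs. trends/statistics
        return {"input": f"caption {level}: {scene_graph}", "target": caption}

    low = make_example("chart { marks: bar*3; xaxis: year; yaxis: sales }",
                       "A bar chart of sales by year.", "L1")
    high = make_example("chart { marks: bar*3; xaxis: year; yaxis: sales }",
                        "Sales rose in 2019, then fell in 2020.", "L2L3")
    # At inference time, the same prefix selects how complex the caption is.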

    In addition, they conducted a qualitative examination of captions produced by their best-performing method and categorized six types of common errors. For instance, a directional error occurs if a model says a trend is decreasing when it is actually increasing.

    This fine-grained, robust qualitative evaluation was important for understanding how the model was making its errors. For example, using quantitative methods, a directional error might incur the same penalty as a repetition error, where the model repeats the same word or phrase. But a directional error could be more misleading to a user than a repetition error. The qualitative analysis helped them understand these types of subtleties, Boggust says.

    These sorts of errors also expose limitations of current models and raise ethical considerations that researchers must consider as they work to develop autocaptioning systems, she adds.

    Generative machine-learning models, such as those that power ChatGPT, have been shown to hallucinate or give incorrect information that can be misleading. While there is a clear benefit to using these models for autocaptioning existing charts, it could lead to the spread of misinformation if charts are captioned incorrectly.

    “Maybe this means that we don’t just caption everything in sight with AI. Instead, perhaps we provide these autocaptioning systems as authorship tools for people to edit. It is important to think about these ethical implications throughout the research process, not just at the end when we have a model to deploy,” she says.

    Boggust, Tang, and their colleagues want to continue optimizing the models to reduce some common errors. They also want to expand the VisText dataset to include more charts, and more complex charts, such as those with stacked bars or multiple lines. Finally, they would like to gain insights into what these autocaptioning models are actually learning about chart data.

    This research was supported, in part, by a Google Research Scholar Award, the National Science Foundation, the MLA@CSAIL Initiative, and the United States Air Force Research Laboratory.

  • MIT-Pillar AI Collective announces first seed grant recipients

    The MIT-Pillar AI Collective has announced its first six grant recipients. Students, alumni, and postdocs working on a broad range of topics in artificial intelligence, machine learning, and data science will receive funding and support for research projects that could translate into commercially viable products or companies. These grants are intended to help students explore commercial applications for their research, and eventually drive that commercialization through the creation of a startup.

    “These tremendous students and postdocs are working on projects that have the potential to be truly transformative across a diverse range of industries. It’s thrilling to think that the novel research these teams are conducting could lead to the founding of startups that revolutionize everything from drug delivery to video conferencing,” says Anantha Chandrakasan, dean of the School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science.

    Launched in September 2022, the MIT-Pillar AI Collective is a pilot program funded by a $1 million gift from Pillar VC that aims to cultivate prospective entrepreneurs and drive innovation in areas related to AI. Administered by the MIT Deshpande Center for Technological Innovation, the AI Collective centers on the market discovery process, advancing projects through market research, customer discovery, and prototyping. Graduate students and postdocs supported by the program work toward the development of minimum viable products.

    “In addition to funding, the MIT-Pillar AI Collective provides grant recipients with mentorship and guidance. With the rapid advancement of AI technologies, this type of support is critical to ensure students and postdocs are able to access the resources required to move quickly in this fast-paced environment,” says Jinane Abounadi, managing director of the MIT-Pillar AI Collective.

    The six inaugural recipients will receive support in identifying key milestones and advice from experienced entrepreneurs. The AI Collective assists seed grant recipients in gathering feedback from potential end-users, as well as getting insights from early-stage investors. The program also organizes community events, including a “Founder Talks” speaker series, and other team-building activities.   

    “Each one of these grant recipients exhibits an entrepreneurial spirit. It is exciting to provide support and guidance as they start a journey that could one day see them as founders and leaders of successful companies,” adds Jamie Goldstein ’89, founder of Pillar VC.

    The first cohort of grant recipients includes the following projects:

    Predictive query interface

    Abdullah Alomar SM ’21, a PhD candidate studying electrical engineering and computer science, is building a predictive query interface for time series databases to better forecast demand and financial data. This user-friendly interface can help alleviate some of the bottlenecks and issues related to unwieldy data engineering processes while providing state-of-the-art statistical accuracy. Alomar is advised by Devavrat Shah, the Andrew (1956) and Erna Viterbi Professor at MIT.

    Design of light-activated drugs

    Simon Axelrod, a PhD candidate studying chemical physics at Harvard University, is combining AI with physics simulations to design light-activated drugs that could reduce side effects and improve effectiveness. Patients would receive an inactive form of a drug, which is then activated by light in a specific area of the body containing diseased tissue. This localized use of photoactive drugs would minimize the side effects from drugs targeting healthy cells. Axelrod is developing novel computational models that predict properties of photoactive drugs with high speed and accuracy, allowing researchers to focus on only the highest-quality drug candidates. He is advised by Rafael Gomez-Bombarelli, the Jeffrey Cheah Career Development Chair in Engineering in the MIT Department of Materials Science and Engineering. 

    Low-cost 3D perception

    Arjun Balasingam, a PhD student in electrical engineering and computer science and a member of the Computer Science and Artificial Intelligence Laboratory’s (CSAIL) Networks and Mobile Systems group, is developing a technology, called MobiSee, that enables real-time 3D reconstruction in challenging dynamic environments. MobiSee uses self-supervised AI methods along with video and lidar to provide low-cost, state-of-the-art 3D perception on consumer mobile devices like smartphones. This technology could have far-reaching applications across mixed reality, navigation, safety, and sports streaming, in addition to unlocking opportunities for new real-time and immersive experiences. He is advised by Hari Balakrishnan, the Fujitsu Professor of Computer Science and Artificial Intelligence at MIT and member of CSAIL.

    Sleep therapeutics

    Guillermo Bernal SM ’14, PhD ’23, a recent PhD graduate in media arts and sciences, is developing a sleep therapeutic platform that would enable sleep specialists and researchers to conduct robust sleep studies and develop therapy plans remotely, while the patient is comfortable in their home. Called Fascia, the three-part system consists of a polysomnogram with a sleep-mask form factor that collects data; a hub that enables researchers to provide stimulation and feedback via olfactory, auditory, and visual stimuli; and a web portal that enables researchers to read a patient’s signals in real time with machine-learning analysis. Bernal was advised by Pattie Maes, professor of media arts and sciences at the MIT Media Lab.

    Autonomous manufacturing assembly with human-like tactile perception

    Michael Foshey, a mechanical engineer and project manager with MIT CSAIL’s Computational Design and Fabrication Group, is developing an AI-enabled tactile perception system that can be used to give robots human-like dexterity. With this new technology platform, Foshey and his team hope to enable industry-changing applications in manufacturing. Currently, assembly tasks in manufacturing are largely done by hand and are typically repetitive and tedious, so these jobs largely go unfilled. The resulting labor shortages can disrupt supply chains and increase the cost of production. Foshey’s new technology platform aims to address this by automating assembly tasks to reduce reliance on manual labor. Foshey is supervised by Wojciech Matusik, MIT professor of electrical engineering and computer science and member of CSAIL.

    Generative AI for video conferencing

    Vibhaalakshmi Sivaraman SM ’19, a PhD candidate in electrical engineering and computer science who is a member of CSAIL’s Networks and Mobile Systems group, is developing a generative technology, Gemino, to facilitate video conferencing in high-latency and low-bandwidth network environments. Gemino is a neural compression system for video conferencing that overcomes the robustness concerns and compute complexity challenges that limit current face-image-synthesis models. This technology could enable sustained video conferencing calls in regions and scenarios that cannot reliably support video calls today. Sivaraman is advised by Mohammad Alizadeh, MIT associate professor of electrical engineering and computer science and member of CSAIL.

  • Novo Nordisk to support MIT postdocs working at the intersection of AI and life sciences

    MIT’s School of Engineering and global health care company Novo Nordisk have announced the launch of a multi-year program to support postdoctoral fellows conducting research at the intersection of artificial intelligence, data science, and the life sciences. The MIT-Novo Nordisk Artificial Intelligence Postdoctoral Fellows Program will welcome its first cohort of up to 10 postdocs this fall, providing up to $10 million to support annual cohorts of up to 10 postdocs for two-year terms.

    “The research being conducted at the intersection of AI and life sciences has the potential to transform health care as we know it,” says Anantha Chandrakasan, dean of the School of Engineering and Vannevar Bush Professor of Electrical Engineering and Computer Science. “I am thrilled that the MIT-Novo Nordisk Program will support early-career researchers who work in this space.”

    The launch of the MIT-Novo Nordisk Program coincides with the 100th anniversary celebration of Novo Nordisk. The company was founded in 1923 and treated its first patients with insulin, which had recently been discovered, in March of that year.

    “The use of AI in the health care industry presents a massive opportunity to improve the lives of people living with chronic diseases,” says Thomas Senderovitz, senior vice president for data science at Novo Nordisk. “Novo Nordisk is committed to the development of new, innovative solutions, and MIT hosts some of the most outstanding researchers in the field. We are therefore excited to support postdocs working on the cutting edge of AI and life sciences.”

    The MIT-Novo Nordisk Program will support postdocs advancing the use of AI in life science and health. Postdocs will join an annual cohort that participates in frequent events and gatherings. The cohort will meet regularly to exchange ideas about their work and discuss ways to amplify their impact.

    “We are excited to welcome postdocs working on AI, data science, health, and life sciences — research areas of strategic importance across MIT,” adds Chandrakasan.

    A central focus of the program will be offering postdocs professional development and mentorship opportunities. Fellows will be invited to entrepreneurship-focused workshops that enable them to learn from company founders, venture capitalists, and other entrepreneurial leaders. Fellows will also have the opportunity to receive mentorship from experts in life sciences and data science.

    “MIT is always exploring opportunities to innovate and enhance the postdoctoral experience,” adds MIT Provost Cynthia Barnhart. “The MIT-Novo Nordisk Program has been thoughtfully designed to introduce fellows to a wealth of experiences, skill sets, and perspectives that support their professional growth while prioritizing a sense of community with their cohort.”

    Angela Belcher, head of the Department of Biological Engineering, the James Mason Crafts Professor of Biological Engineering and Materials Science, and member of the Koch Institute for Integrative Cancer Research, and Asu Ozdaglar, deputy dean of academics for the MIT Schwarzman College of Computing and head of the Department of Electrical Engineering and Computer Science, will serve as co-faculty leads for the program.

    The new program complements a separate postdoctoral fellowship program at MIT supported by the Novo Nordisk Foundation that focuses on enabling interdisciplinary research.

  • Q&A: Are far-reaching fires the new normal?

    Where there’s smoke, there is fire. But with climate change, larger and longer-burning wildfires are sending smoke farther from their source, often to places that are unaccustomed to the exposure. That’s been the case this week, as smoke continues to drift south from massive wildfires in Canada, prompting warnings of hazardous air quality and poor visibility in states across New England, the mid-Atlantic, and the Midwest.

    With wildfire season just getting underway, many may be wondering: Are the air-polluting effects of wildfires a new normal?

    MIT News spoke with Professor Colette Heald of the Department of Civil and Environmental Engineering and the Department of Earth, Atmospheric and Planetary Sciences, and Professor Noelle Selin of the Institute for Data, Systems and Society and the Department of Earth, Atmospheric and Planetary Sciences. Heald specializes in atmospheric chemistry and has studied the climate and health effects associated with recent wildfires, while Selin works with atmospheric models to track air pollutants around the world, which she uses to inform policy decisions on mitigating pollution and climate change. The researchers shared some of their insights on the immediate impacts of Canada’s current wildfires and what downwind regions may expect in the coming months, as the wildfire season stretches into summer.

    Q: What role has climate change and human activity played in the wildfires we’ve seen so far this year?

    Heald: Unusually warm and dry conditions have dramatically increased fire susceptibility in Canada this year. Human-induced climate change makes such dry and warm conditions more likely. Smoke from fires in Alberta and Nova Scotia in May, and Quebec in early June, has led to some of the worst air quality conditions measured locally in Canada. This same smoke has been transported into the United States and degraded air quality here as well. Local officials have determined that ignitions have been associated with lightning strikes, but human activity has also played a role igniting some of the fires in Alberta.

    Q: What can we expect for the coming months in terms of the pattern of wildfires and their associated air pollution across the United States?

    Heald: The Government of Canada is projecting higher-than-normal fire activity throughout the 2023 fire season. Fire susceptibility will continue to respond to changing weather conditions, and whether the U.S. is impacted will depend on the winds and how air is transported across those regions. So far, the fire season in the United States has been below average, but fire risk is expected to increase modestly through the summer, so we may see local smoke influences as well.

    Q: How has air pollution from wildfires affected human health in the U.S. this year so far?

    Selin: The pollutant of most concern in wildfire smoke is fine particulate matter (PM2.5) – fine particles in the atmosphere that can be inhaled deep into the lungs, harming health. Exposure to PM2.5 causes respiratory and cardiovascular damage, including heart attacks and premature deaths. It can also cause symptoms like coughing and difficulty breathing. In New England this week, people have been breathing much higher concentrations of PM2.5 than usual. People who are particularly vulnerable to the effects, such as older people and people with underlying conditions, are likely experiencing more severe impacts. But PM2.5 affects everyone. While the number and impact of wildfires vary from year to year, the associated air pollution generally leads to tens of thousands of premature deaths in the U.S. annually. There is also some evidence that PM2.5 from fires could be particularly damaging to health.

    While we in New England usually have relatively lower levels of pollution, it’s also important to note that some cities around the globe experience very high PM2.5 on a regular basis, not only from wildfires but from other sources such as power plants and industry. So, while we’re feeling the effects over the past few days, we should remember the broader importance of reducing PM2.5 levels overall for human health everywhere.

    Q: While firefighters battle fires directly this wildfire season, what can we do to reduce the effects of associated air pollution? And what can we do in the long-term, to prevent or reduce wildfire impacts?

    Selin: In the short term, protecting yourself from the impacts of PM2.5 is important. Limiting time outdoors, avoiding outdoor exercise, and wearing a high-quality mask are some strategies that can minimize exposure. Air filters can help reduce the concentrations of particles in indoor air. Taking measures to avoid exposure is particularly important for vulnerable groups. It’s also important to note that these strategies aren’t equally possible for everyone (for example, people who work outside) — stressing the importance of developing new strategies to address the underlying causes of increasing wildfires.

    Over the long term, mitigating climate change is important: because warm and dry conditions lead to wildfires, warming increases fire risk. Preventing fires ignited by human activity can help. Exploring land-management strategies that could help manage fire intensity is another way to mitigate damages in the longer term.

  • Bringing the social and ethical responsibilities of computing to the forefront

    There has been a remarkable surge in the use of algorithms and artificial intelligence to address a wide range of problems and challenges. While their adoption, particularly with the rise of AI, is reshaping nearly every industry sector, discipline, and area of research, such innovations often expose unexpected consequences that involve new norms, new expectations, and new rules and laws.

    To facilitate deeper understanding, the Social and Ethical Responsibilities of Computing (SERC), a cross-cutting initiative in the MIT Schwarzman College of Computing, recently brought together social scientists and humanists with computer scientists, engineers, and other computing faculty for an exploration of the ways in which the broad applicability of algorithms and AI has presented both opportunities and challenges in many aspects of society.

    “The very nature of our reality is changing. AI has the ability to do things that until recently were solely the realm of human intelligence — things that can challenge our understanding of what it means to be human,” remarked Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing, in his opening address at the inaugural SERC Symposium. “This poses philosophical, conceptual, and practical questions on a scale not experienced since the start of the Enlightenment. In the face of such profound change, we need new conceptual maps for navigating the change.”

    The symposium offered a glimpse into the vision and activities of SERC in both research and education. “We believe our responsibility with SERC is to educate and equip our students and enable our faculty to contribute to responsible technology development and deployment,” said Georgia Perakis, the William F. Pounds Professor of Management in the MIT Sloan School of Management, co-associate dean of SERC, and the lead organizer of the symposium. “We’re drawing from the many strengths and diversity of disciplines across MIT and beyond and bringing them together to gain multiple viewpoints.”

    Through a succession of panels and sessions, the symposium delved into a variety of topics related to the societal and ethical dimensions of computing. In addition, 37 undergraduate and graduate students from a range of majors, including urban studies and planning, political science, mathematics, biology, electrical engineering and computer science, and brain and cognitive sciences, participated in a poster session to exhibit their research in this space, covering such topics as quantum ethics, AI collusion in storage markets, computing waste, and empowering users on social platforms for better content credibility.

    Showcasing a diversity of work

    In three sessions devoted to themes of beneficent and fair computing, equitable and personalized health, and algorithms and humans, the SERC Symposium showcased work by 12 faculty members across these domains.

    One such project from a multidisciplinary team of archaeologists, architects, digital artists, and computational social scientists aimed to preserve endangered heritage sites in Afghanistan with digital twins. The project team produced highly detailed interrogable 3D models of the heritage sites, in addition to extended reality and virtual reality experiences, as learning resources for audiences that cannot access these sites.

    In a project for the United Network for Organ Sharing, researchers showed how they used applied analytics to optimize various facets of an organ allocation system in the United States that is currently undergoing a major overhaul in order to make it more efficient, equitable, and inclusive for different racial, age, and gender groups, among others.

    Another talk discussed an area that has not yet received adequate public attention: the broader implications for equity that biased sensor data holds for the next generation of models in computing and health care.

    A talk on bias in algorithms considered both human bias and algorithmic bias, and the potential for improving results by taking into account differences in the nature of the two kinds of bias.

    Other highlighted research included the interaction between online platforms and human psychology; a study on whether decision-makers make systematic prediction mistakes based on the available information; and an illustration of how advanced analytics and computation can be leveraged to inform supply chain management, operations, and regulatory work in the food and pharmaceutical industries.

    Improving the algorithms of tomorrow

    “Algorithms are, without question, impacting every aspect of our lives,” said Asu Ozdaglar, deputy dean of academics for the MIT Schwarzman College of Computing and head of the Department of Electrical Engineering and Computer Science, in kicking off a panel she moderated on the implications of data and algorithms.

    “Whether it’s in the context of social media, online commerce, automated tasks, and now a much wider range of creative interactions with the advent of generative AI tools and large language models, there’s little doubt that much more is to come,” Ozdaglar said. “While the promise is evident to all of us, there’s a lot to be concerned about as well. This is very much a time for imaginative thinking and careful deliberation to improve the algorithms of tomorrow.”

    Turning to the panel, Ozdaglar asked experts from computing, social science, and data science for insights on how to understand what is to come and shape it to enrich outcomes for the majority of humanity.

    Sarah Williams, associate professor of technology and urban planning at MIT, emphasized the critical importance of comprehending how datasets are assembled, as data are the foundation for all models. She also stressed the need for research to address the potential implications of biases in algorithms, which often find their way in through their creators and the data used in their development. “It’s up to us to think about our own ethical solutions to these problems,” she said. “Just as it’s important to progress with the technology, we need to start asking these questions: What biases are in the algorithms? What biases are in the data, or in that data’s journey?”

    Shifting focus to generative models and whether the development and use of these technologies should be regulated, the panelists — who also included MIT’s Srini Devadas, professor of electrical engineering and computer science; John Horton, professor of information technology; and Simon Johnson, professor of entrepreneurship — concurred that regulating open-source algorithms, which are publicly accessible, would be difficult, given that regulators are still catching up and struggling to even set guardrails for technology that is now 20 years old.

    Returning to the question of how to effectively regulate the use of these technologies, Johnson proposed a progressive corporate tax system as a potential solution. He recommends basing companies’ tax payments on their profits, especially for large corporations whose massive earnings go largely untaxed due to offshore banking. Johnson said this approach could serve as a regulatory mechanism, imposing disincentives that discourage companies from trying to “own the entire world.”

    The role of ethics in computing education

    As computing continues to advance with no signs of slowing down, it is critical to educate students to be intentional about the social impact of the technologies they will be developing and deploying into the world. But can one actually be taught such things? If so, how?

    Caspar Hare, professor of philosophy at MIT and co-associate dean of SERC, posed this looming question to faculty on a panel he moderated on the role of ethics in computing education. All experienced in teaching ethics and thinking about the social implications of computing, each panelist shared their perspective and approach.

    A strong advocate for the importance of learning from history, Eden Medina, associate professor of science, technology, and society at MIT, said that “often the way we frame computing is that everything is new. One of the things that I do in my teaching is look at how people have confronted these issues in the past and try to draw from them as a way to think about possible ways forward.” Medina regularly uses case studies in her classes, and she referred to a paper by Yale University science historian Joanna Radin on the Pima Indian Diabetes Dataset, which raised ethical issues about the history of that particular data collection that many don’t consider, as an example of how decisions around technology and data can grow out of very specific contexts.

    Milo Phillips-Brown, associate professor of philosophy at Oxford University, talked about the Ethical Computing Protocol that he co-created while he was a SERC postdoc at MIT. The protocol, a four-step approach to building technology responsibly, is designed to train computer science students to think in a better and more accurate way about the social implications of technology by breaking the process down into more manageable steps. “The basic approach that we take very much draws on the fields of value-sensitive design, responsible research and innovation, participatory design as guiding insights, and then is also fundamentally interdisciplinary,” he said.

    Fields such as biomedicine and law have an ethics ecosystem that distributes the function of ethical reasoning in these areas. Oversight and regulation are provided to guide front-line stakeholders and decision-makers when issues arise, as are training programs and access to interdisciplinary expertise that they can draw from. “In this space, we have none of that,” said John Basl, associate professor of philosophy at Northeastern University. “For current generations of computer scientists and other decision-makers, we’re actually making them do the ethical reasoning on their own.” Basl commented further that teaching core ethical reasoning skills across the curriculum, not just in philosophy classes, is essential, and that the goal shouldn’t be for every computer scientist to be a professional ethicist, but for them to know enough of the landscape to be able to ask the right questions and seek out the relevant expertise and resources that exist.

    After the final session, interdisciplinary groups of faculty, students, and researchers engaged in animated discussions related to the issues covered throughout the day during a reception that marked the conclusion of the symposium.

  • MIT researchers make language models scalable self-learners

    Socrates once said: “It is not the size of a thing, but the quality that truly matters. For it is in the nature of substance, not its volume, that true value is found.”

    Does size always matter for large language models (LLMs)? In a technological landscape bedazzled by LLMs taking center stage, a team of MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) researchers think smaller models shouldn’t be overlooked, especially for natural language understanding products widely deployed in the industry.

    To that end, the researchers cooked up an approach to long-standing problems of inefficiency and privacy associated with big, text-based AI models — a logic-aware model that outperforms counterparts 500 times its size on some language-understanding tasks, all without human-generated annotations and while preserving privacy and robustness.

    LLMs, which have shown some promising skills in generating language, art, and code, are computationally expensive, and their data requirements can risk privacy leaks when using application programming interfaces for data upload. Smaller models have been historically less capable, particularly in multitasking and weakly supervised tasks, compared to their larger counterparts.

    So what’s helping these smaller models act so mighty? Something called “textual entailment,” a way to help models understand a variety of language tasks: if one sentence (the premise) is true, then another sentence (the hypothesis) is likely to be true as well. For example, if the premise is “all cats have tails,” then the hypothesis “a tabby cat has a tail” would be entailed by the premise. In the team’s previous research, this concept was used to train an “entailment model” that proved less biased than other language models. The researchers then created “prompts” that the models can use to figure out whether certain information is entailed by a given sentence or phrase, according to the task at hand. This method improved the model’s ability to adapt to different tasks without any additional training, a capability known as zero-shot adaptation.

    In the realm of “natural language understanding,” there are various applications that hinge on determining the relationship between two pieces of text. For example, in sentiment classification, a statement like “I think the movie is good” can be inferred or entailed from a movie review that says, “I like the story and the acting is great,” indicating a positive sentiment. Another is news classification, where the topic of a news article can be inferred from its content. For example, a statement like “the news article is about sports” can be entailed if the main content of the article reports on an NBA game. The key insight was that many existing natural language understanding tasks could be recast as an entailment (i.e., logical inference in natural language) task. 
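
    The same recipe can be tried with any off-the-shelf natural language inference model. The sketch below uses the Hugging Face zero-shot-classification pipeline with a public entailment model, not the team’s self-trained one, to show how a classification label becomes a hypothesis.

    from transformers import pipeline

    # Zero-shot sentiment classification via entailment: the review is the
    # premise, and each candidate label is slotted into a hypothesis.
    classifier = pipeline("zero-shot-classification",
                          model="facebook/bart-large-mnli")

    result = classifier(
        "I like the story and the acting is great.",
        candidate_labels=["positive", "negative"],
        hypothesis_template="The sentiment of this review is {}.",
    )
    print(result["labels"][0])  # the most-entailed label; here, "positive"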

    “Our research is about improving the ability of computer programs to understand and process natural language — the way humans speak and write. Our self-trained, 350-million-parameter entailment models, without human-generated labels, outperform supervised language models with 137 to 175 billion parameters,” says MIT CSAIL postdoc Hongyin Luo, lead author on a new paper about the study. “This has potential to reshape the landscape of AI and machine learning, providing a more scalable, trustworthy, and cost-effective solution to language modeling,” says Luo. “By proving that smaller models can perform at the same level as larger ones for language understanding, this work paves the way for more sustainable and privacy-preserving AI technologies.” 

    The team discovered that they could improve the model’s performance even more by using a technique called “self-training,” where the model uses its own predictions to teach itself, effectively learning without human supervision or additional annotated training data. The self-training method significantly improved performance on a range of downstream tasks, including sentiment analysis, question-answering, and news classification. It outperformed Google’s LaMDA and FLAN in zero-shot capabilities, as well as GPT models and other supervised algorithms.

    However, one challenge with self-training is that the model can sometimes generate incorrect or noisy labels that harm performance. To overcome this, they developed a new algorithm called ‘SimPLE’ (Simple Pseudo-Label Editing), a process to review and modify the pseudo-labels made in initial rounds of learning. By correcting any mislabeled instances, it improved the overall quality of the self-generated labels. This not only made the models more effective at understanding language, but more robust when faced with adversarial data. 
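
    In outline, one round of such self-training might look like the sketch below. This is a simplified illustration in the spirit of SimPLE, not the authors’ exact algorithm; the model interface (predict_proba, fit) is an assumption.

    # One round of self-training with confidence-based pseudo-label filtering.
    # Assumes model.predict_proba returns an (n_samples, n_classes) NumPy
    # array and model.fit retrains on the given examples.
    def self_train_round(model, unlabeled_texts, threshold=0.9):
        probs = model.predict_proba(unlabeled_texts)
        pseudo_labels = probs.argmax(axis=1)   # the model's own predictions
        confidence = probs.max(axis=1)
        keep = confidence >= threshold         # drop noisy, low-confidence labels
        kept_texts = [t for t, k in zip(unlabeled_texts, keep) if k]
        model.fit(kept_texts, pseudo_labels[keep])
        return model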

    As with most research, there are some limitations. The self-training on multi-class classification tasks didn’t perform as well as on binary natural language understanding tasks, indicating the challenge of applying entailment models to multi-choice tasks.

    “This research presents an efficient and effective way to train large language models (LLMs) by formulating natural language understanding tasks as contextual entailment problems and employing a pseudo-labeling self-training mechanism to incorporate large quantities of unlabelled text data in the training process,” adds CSAIL Senior Research Scientist James Glass, who is also an author on the paper. “While the field of LLMs is undergoing rapid and dramatic changes, this research shows that it is possible to produce relatively compact language models that perform very well on benchmark understanding tasks compared to their peers of roughly the same size, or even much larger language models.”

    “The entailment task is a popular proxy to evaluate the ‘understanding’ of a given context by an AI model,” says Leonid Karlinsky, research staff member at the MIT-IBM Watson AI Lab. “It is used in many areas analyzing models with unimodal inputs, like LLMs, and multimodal inputs, like VLMs [visual language models], simplifying the task of question-answering about a given input context to a binary classification problem — does this context entail a certain (e.g., text) conclusion or not? This paper makes two contributions in this space. First, it proposes a way to improve the zero-shot (without additional tuning) NLU performance and robustness to adversarial attacks via tuning with synthesized (specialized) entailment tasks generated for the primal NLU task. Second, it offers a self-supervised SimPLE method including pseudo-labeling and confidence-based filtering to further improve large LLMs’ NLU performance.”

    Luo and Glass wrote the paper with Yoon Kim, a CSAIL member and assistant professor in MIT’s Department of Electrical Engineering and Computer Science, and Jiaxin Ge of Peking University. Their work will be presented at the meeting of the Association for Computational Linguistics in Toronto, Ontario this July. This research was supported by a grant from the Hong Kong Innovation AI program.