More stories

  • Researchers use large language models to help robots navigate

    Someday, you may want your home robot to carry a load of dirty clothes downstairs and deposit them in the washing machine in the far-left corner of the basement. The robot will need to combine your instructions with its visual observations to determine the steps it should take to complete this task.

    For an AI agent, this is easier said than done. Current approaches often utilize multiple hand-crafted machine-learning models to tackle different parts of the task, which require a great deal of human effort and expertise to build. These methods, which use visual representations to directly make navigation decisions, demand massive amounts of visual data for training, which are often hard to come by.

    To overcome these challenges, researchers from MIT and the MIT-IBM Watson AI Lab devised a navigation method that converts visual representations into pieces of language, which are then fed into one large language model that handles all parts of the multistep navigation task.

    Rather than encoding visual features from images of a robot’s surroundings as visual representations, which is computationally intensive, their method creates text captions that describe the robot’s point of view. A large language model uses the captions to predict the actions a robot should take to fulfill a user’s language-based instructions.

    Because their method utilizes purely language-based representations, they can use a large language model to efficiently generate a huge amount of synthetic training data.

    While this approach does not outperform techniques that use visual features, it performs well in situations that lack enough visual data for training. The researchers found that combining their language-based inputs with visual signals leads to better navigation performance.

    “By purely using language as the perceptual representation, ours is a more straightforward approach. Since all the inputs can be encoded as language, we can generate a human-understandable trajectory,” says Bowen Pan, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on this approach.

    Pan’s co-authors include his advisor, Aude Oliva, director of strategic industry engagement at the MIT Schwarzman College of Computing, MIT director of the MIT-IBM Watson AI Lab, and a senior research scientist in the Computer Science and Artificial Intelligence Laboratory (CSAIL); Philip Isola, an associate professor of EECS and a member of CSAIL; senior author Yoon Kim, an assistant professor of EECS and a member of CSAIL; and others at the MIT-IBM Watson AI Lab and Dartmouth College. The research will be presented at the Conference of the North American Chapter of the Association for Computational Linguistics.

    Solving a vision problem with language

    Since large language models are the most powerful machine-learning models available, the researchers sought to incorporate them into the complex task known as vision-and-language navigation, Pan says.

    But such models take text-based inputs and can’t process visual data from a robot’s camera. So, the team needed to find a way to use language instead.

    Their technique utilizes a simple captioning model to obtain text descriptions of a robot’s visual observations. These captions are combined with language-based instructions and fed into a large language model, which decides what navigation step the robot should take next.

    The large language model outputs a caption of the scene the robot should see after completing that step. This is used to update the trajectory history so the robot can keep track of where it has been.

    The model repeats these processes to generate a trajectory that guides the robot to its goal, one step at a time.

    To streamline the process, the researchers designed templates so observation information is presented to the model in a standard form — as a series of choices the robot can make based on its surroundings.

    For instance, a caption might say “to your 30-degree left is a door with a potted plant beside it, to your back is a small office with a desk and a computer,” etc. The model chooses whether the robot should move toward the door or the office.

    “One of the biggest challenges was figuring out how to encode this kind of information into language in a proper way to make the agent understand what the task is and how they should respond,” Pan says.

    Advantages of language

    When they tested this approach, they found that while it could not outperform vision-based techniques, it offered several advantages.

    First, because text requires fewer computational resources to synthesize than complex image data, their method can be used to rapidly generate synthetic training data. In one test, they generated 10,000 synthetic trajectories based on 10 real-world, visual trajectories.

    The technique can also bridge the gap that can prevent an agent trained with a simulated environment from performing well in the real world. This gap often occurs because computer-generated images can appear quite different from real-world scenes due to elements like lighting or color. But language that describes a synthetic versus a real image would be much harder to tell apart, Pan says.

    Also, the representations their model uses are easier for a human to understand because they are written in natural language.

    “If the agent fails to reach its goal, we can more easily determine where it failed and why it failed. Maybe the history information is not clear enough or the observation ignores some important details,” Pan says.

    In addition, their method could be applied more easily to varied tasks and environments because it uses only one type of input. As long as data can be encoded as language, they can use the same model without making any modifications.

    But one disadvantage is that their method naturally loses some information that would be captured by vision-based models, such as depth information.

    However, the researchers were surprised to see that combining language-based representations with vision-based methods improves an agent’s ability to navigate.

    “Maybe this means that language can capture some higher-level information that cannot be captured with pure vision features,” he says.

    This is one area the researchers want to continue exploring. They also want to develop a navigation-oriented captioner that could boost the method’s performance. In addition, they want to probe the ability of large language models to exhibit spatial awareness and see how this could aid language-based navigation.

    This research is funded, in part, by the MIT-IBM Watson AI Lab.
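    To make the caption-then-prompt loop concrete, here is a rough Python sketch of the idea described above. The helpers caption_view() and llm() are hypothetical stand-ins for a captioning model and a chat-style language model; the prompt template and function names are illustrative assumptions, not the team's actual code.

```python
# Illustrative sketch of a caption-based navigation loop (not the authors' code).
# caption_view() is assumed to return text descriptions of the directions the
# robot can move toward; llm() is assumed to wrap any chat-completion model and
# return its text reply.

def navigate(instruction, caption_view, llm, max_steps=20):
    """Ask a language model to pick the next move from captioned observations."""
    history = []  # the text trajectory so far, kept human-readable

    for _ in range(max_steps):
        # 1. Convert the current visual observation into language.
        options = caption_view()  # e.g. ["to your 30-degree left is a door ...",
                                  #       "to your back is a small office ..."]

        # 2. Fill a fixed template so the model always sees the same structure.
        prompt = (
            f"Instruction: {instruction}\n"
            f"Trajectory so far: {' '.join(history) or 'none'}\n"
            "Current observations:\n"
            + "\n".join(f"{i}: {o}" for i, o in enumerate(options))
            + "\nReply with the number of the option to move toward, or 'stop'."
        )

        # 3. Let the language model choose the next action.
        reply = llm(prompt).strip().lower()
        if reply.startswith("stop"):
            break
        choice = int(reply.split()[0])

        # 4. Update the language-based trajectory history.
        history.append(options[choice])

    return history
```

    Because every step of the loop is plain text, the trajectory it returns can be read directly by a person, which is what makes failures easier to diagnose.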

  • Making climate models relevant for local decision-makers

    Climate models are a key technology in predicting the impacts of climate change. By running simulations of the Earth’s climate, scientists and policymakers can estimate conditions like sea level rise, flooding, and rising temperatures, and make decisions about how to appropriately respond. But current climate models struggle to provide this information quickly or affordably enough to be useful on smaller scales, such as the size of a city.

    Now, authors of a new open-access paper published in the Journal of Advances in Modeling Earth Systems have found a method to leverage machine learning to utilize the benefits of current climate models, while reducing the computational costs needed to run them.

    “It turns the traditional wisdom on its head,” says Sai Ravela, a principal research scientist in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS) who wrote the paper with EAPS postdoc Anamitra Saha.

    Traditional wisdom

    In climate modeling, downscaling is the process of using a global climate model with coarse resolution to generate finer details over smaller regions. Imagine a digital picture: A global model is a large picture of the world with a low number of pixels. To downscale, you zoom in on just the section of the photo you want to look at — for example, Boston. But because the original picture was low resolution, the new version is blurry; it doesn’t give enough detail to be particularly useful.

    “If you go from coarse resolution to fine resolution, you have to add information somehow,” explains Saha. Downscaling attempts to add that information back in by filling in the missing pixels. “That addition of information can happen two ways: Either it can come from theory, or it can come from data.”

    Conventional downscaling often involves using models built on physics (such as the process of air rising, cooling, and condensing, or the landscape of the area), and supplementing it with statistical data taken from historical observations. But this method is computationally taxing: It takes a lot of time and computing power to run, while also being expensive.

    A little bit of both

    In their new paper, Saha and Ravela have figured out a way to add the data another way. They’ve employed a technique in machine learning called adversarial learning. It uses two machines: One generates data to go into our photo, while the other judges the sample by comparing it to actual data. If it thinks the image is fake, then the first machine has to try again until it convinces the second machine. The end-goal of the process is to create super-resolution data.

    Using machine learning techniques like adversarial learning is not a new idea in climate modeling; where it currently struggles is its inability to handle large amounts of basic physics, like conservation laws. The researchers discovered that simplifying the physics going in and supplementing it with statistics from the historical data was enough to generate the results they needed.

    “If you augment machine learning with some information from the statistics and simplified physics both, then suddenly, it’s magical,” says Ravela. He and Saha started with estimating extreme rainfall amounts by removing more complex physics equations and focusing on water vapor and land topography. They then generated general rainfall patterns for mountainous Denver and flat Chicago alike, applying historical accounts to correct the output. “It’s giving us extremes, like the physics does, at a much lower cost. And it’s giving us similar speeds to statistics, but at much higher resolution.”

    Another unexpected benefit of the results was how little training data was needed. “The fact that only a little bit of physics and a little bit of statistics was enough to improve the performance of the ML [machine learning] model … was actually not obvious from the beginning,” says Saha. It only takes a few hours to train, and can produce results in minutes, an improvement over the months other models take to run.

    Quantifying risk quickly

    Being able to run the models quickly and often is a key requirement for stakeholders such as insurance companies and local policymakers. Ravela gives the example of Bangladesh: By seeing how extreme weather events will impact the country, decisions about what crops should be grown or where populations should migrate to can be made considering a very broad range of conditions and uncertainties as soon as possible.

    “We can’t wait months or years to be able to quantify this risk,” he says. “You need to look out way into the future and at a large number of uncertainties to be able to say what might be a good decision.”

    While the current model only looks at extreme precipitation, training it to examine other critical events, such as tropical storms, winds, and temperature, is the next step of the project. With a more robust model, Ravela is hoping to apply it to other places like Boston and Puerto Rico as part of a Climate Grand Challenges project.

    “We’re very excited both by the methodology that we put together, as well as the potential applications that it could lead to,” he says.
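    The adversarial setup described above pairs a generator with a discriminator. The PyTorch sketch below is purely illustrative of that structure, assuming placeholder tensor sizes and a toy architecture; it is not the authors' model, and the physics and statistics terms they add are only noted in a comment.

```python
# Minimal sketch of adversarial downscaling in PyTorch (illustrative only).
# A generator upsamples a coarse precipitation field to a finer grid; a
# discriminator judges whether a fine field looks like real high-resolution data.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, scale=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Upsample(scale_factor=scale, mode="bilinear", align_corners=False),
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )
    def forward(self, coarse):          # coarse: (B, 1, H, W)
        return self.net(coarse)         # fine:   (B, 1, 4H, 4W)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )
    def forward(self, fine):
        return self.net(fine)           # real/fake logit

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(coarse, fine_real):
    # Discriminator: real fields should score 1, generated fields 0.
    fake = G(coarse).detach()
    loss_d = bce(D(fine_real), torch.ones(len(fine_real), 1)) + \
             bce(D(fake), torch.zeros(len(fake), 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: try to fool the discriminator. (The paper's approach would add
    # simplified-physics and statistical terms here to keep rainfall plausible.)
    fake = G(coarse)
    loss_g = bce(D(fake), torch.ones(len(fake), 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# Stand-in data: 8 coarse 16x16 fields and matching 64x64 high-resolution fields.
coarse = torch.rand(8, 1, 16, 16)
fine_real = torch.rand(8, 1, 64, 64)
print(train_step(coarse, fine_real))
```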

  • A technique for more effective multipurpose robots

    Let’s say you want to train a robot so it understands how to use tools and can then quickly learn to make repairs around your house with a hammer, wrench, and screwdriver. To do that, you would need an enormous amount of data demonstrating tool use.

    Existing robotic datasets vary widely in modality — some include color images while others are composed of tactile imprints, for instance. Data could also be collected in different domains, like simulation or human demos. And each dataset may capture a unique task and environment.

    It is difficult to efficiently incorporate data from so many sources in one machine-learning model, so many methods use just one type of data to train a robot. But robots trained this way, with a relatively small amount of task-specific data, are often unable to perform new tasks in unfamiliar environments.

    In an effort to train better multipurpose robots, MIT researchers developed a technique to combine multiple sources of data across domains, modalities, and tasks using a type of generative AI known as diffusion models.

    They train a separate diffusion model to learn a strategy, or policy, for completing one task using one specific dataset. Then they combine the policies learned by the diffusion models into a general policy that enables a robot to perform multiple tasks in various settings.

    In simulations and real-world experiments, this training approach enabled a robot to perform multiple tool-use tasks and adapt to new tasks it did not see during training. The method, known as Policy Composition (PoCo), led to a 20 percent improvement in task performance when compared to baseline techniques.

    “Addressing heterogeneity in robotic datasets is like a chicken-egg problem. If we want to use a lot of data to train general robot policies, then we first need deployable robots to get all this data. I think that leveraging all the heterogeneous data available, similar to what researchers have done with ChatGPT, is an important step for the robotics field,” says Lirui Wang, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on PoCo.

    Wang’s coauthors include Jialiang Zhao, a mechanical engineering graduate student; Yilun Du, an EECS graduate student; Edward Adelson, the John and Dorothy Wilson Professor of Vision Science in the Department of Brain and Cognitive Sciences and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); and senior author Russ Tedrake, the Toyota Professor of EECS, Aeronautics and Astronautics, and Mechanical Engineering, and a member of CSAIL. The research will be presented at the Robotics: Science and Systems Conference.

    Combining disparate datasets

    A robotic policy is a machine-learning model that takes inputs and uses them to perform an action. One way to think about a policy is as a strategy. In the case of a robotic arm, that strategy might be a trajectory, or a series of poses that move the arm so it picks up a hammer and uses it to pound a nail.

    Datasets used to learn robotic policies are typically small and focused on one particular task and environment, like packing items into boxes in a warehouse.

    “Every single robotic warehouse is generating terabytes of data, but it only belongs to that specific robot installation working on those packages. It is not ideal if you want to use all of these data to train a general machine,” Wang says.

    The MIT researchers developed a technique that can take a series of smaller datasets, like those gathered from many robotic warehouses, learn separate policies from each one, and combine the policies in a way that enables a robot to generalize to many tasks.

    They represent each policy using a type of generative AI model known as a diffusion model. Diffusion models, often used for image generation, learn to create new data samples that resemble samples in a training dataset by iteratively refining their output.

    But rather than teaching a diffusion model to generate images, the researchers teach it to generate a trajectory for a robot. They do this by adding noise to the trajectories in a training dataset. The diffusion model gradually removes the noise and refines its output into a trajectory.

    This technique, known as Diffusion Policy, was previously introduced by researchers at MIT, Columbia University, and the Toyota Research Institute. PoCo builds off this Diffusion Policy work.

    The team trains each diffusion model with a different type of dataset, such as one with human video demonstrations and another gleaned from teleoperation of a robotic arm.

    Then the researchers perform a weighted combination of the individual policies learned by all the diffusion models, iteratively refining the output so the combined policy satisfies the objectives of each individual policy.

    Greater than the sum of its parts

    “One of the benefits of this approach is that we can combine policies to get the best of both worlds. For instance, a policy trained on real-world data might be able to achieve more dexterity, while a policy trained on simulation might be able to achieve more generalization,” Wang says.

    With policy composition, researchers are able to combine datasets from multiple sources so they can teach a robot to effectively use a wide range of tools, like a hammer, screwdriver, or this spatula. (Image: Courtesy of the researchers)

    Because the policies are trained separately, one could mix and match diffusion policies to achieve better results for a certain task. A user could also add data in a new modality or domain by training an additional Diffusion Policy with that dataset, rather than starting the entire process from scratch.

    The policy composition technique the researchers developed can be used to effectively teach a robot to use tools even when objects are placed around it to try and distract it from its task, as seen here. (Image: Courtesy of the researchers)

    The researchers tested PoCo in simulation and on real robotic arms that performed a variety of tool-use tasks, such as using a hammer to pound a nail and flipping an object with a spatula. PoCo led to a 20 percent improvement in task performance compared to baseline methods.

    “The striking thing was that when we finished tuning and visualized it, we can clearly see that the composed trajectory looks much better than either one of them individually,” Wang says.

    In the future, the researchers want to apply this technique to long-horizon tasks where a robot would pick up one tool, use it, then switch to another tool. They also want to incorporate larger robotics datasets to improve performance.

    “We will need all three kinds of data to succeed for robotics: internet data, simulation data, and real robot data. How to combine them effectively will be the million-dollar question. PoCo is a solid step on the right track,” says Jim Fan, senior research scientist at NVIDIA and leader of the AI Agents Initiative, who was not involved with this work.

    This research is funded, in part, by Amazon, the Singapore Defense Science and Technology Agency, the U.S. National Science Foundation, and the Toyota Research Institute.
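    To illustrate what a weighted combination of diffusion policies might look like at sampling time, here is a rough Python sketch. It assumes each pre-trained policy exposes a hypothetical predict_noise(trajectory, t) method and uses a textbook DDPM-style update; it is a simplified illustration of the idea, not the published PoCo code.

```python
# Illustrative composition of diffusion policies during sampling (not PoCo's code).
import torch

def compose_and_sample(policies, weights, traj_shape, n_steps=50):
    """Denoise a random trajectory using a weighted mix of policy noise estimates."""
    weights = torch.tensor(weights) / sum(weights)   # normalize the mixing weights
    traj = torch.randn(traj_shape)                   # start from pure noise

    # A basic linear noise schedule; real implementations tune this carefully.
    betas = torch.linspace(1e-4, 0.02, n_steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    for t in reversed(range(n_steps)):
        # Weighted combination of each policy's noise prediction at this step.
        eps = sum(w * p.predict_noise(traj, t) for w, p in zip(weights, policies))

        # Standard DDPM update toward a cleaner trajectory.
        traj = (traj - betas[t] / torch.sqrt(1 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
        if t > 0:
            traj = traj + torch.sqrt(betas[t]) * torch.randn_like(traj)
    return traj  # e.g. a (horizon, action_dim) robot trajectory

class DummyPolicy:
    """Stand-in for a trained diffusion policy; predicts zero noise."""
    def predict_noise(self, traj, t):
        return torch.zeros_like(traj)

sample = compose_and_sample([DummyPolicy(), DummyPolicy()], [0.7, 0.3], (16, 7))
print(sample.shape)  # torch.Size([16, 7])
```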

  • Looking for a specific action in a video? This AI-based method can find it for you

    The internet is awash in instructional videos that can teach curious viewers everything from cooking the perfect pancake to performing a life-saving Heimlich maneuver.

    But pinpointing when and where a particular action happens in a long video can be tedious. To streamline the process, scientists are trying to teach computers to perform this task. Ideally, a user could just describe the action they’re looking for, and an AI model would skip to its location in the video.

    However, teaching machine-learning models to do this usually requires a great deal of expensive video data that have been painstakingly hand-labeled.

    A new, more efficient approach from researchers at MIT and the MIT-IBM Watson AI Lab trains a model to perform this task, known as spatio-temporal grounding, using only videos and their automatically generated transcripts.

    The researchers teach a model to understand an unlabeled video in two distinct ways: by looking at small details to figure out where objects are located (spatial information) and looking at the bigger picture to understand when the action occurs (temporal information).

    Compared to other AI approaches, their method more accurately identifies actions in longer videos with multiple activities. Interestingly, they found that simultaneously training on spatial and temporal information makes a model better at identifying each individually.

    In addition to streamlining online learning and virtual training processes, this technique could also be useful in health care settings by rapidly finding key moments in videos of diagnostic procedures, for example.

    “We disentangle the challenge of trying to encode spatial and temporal information all at once and instead think about it like two experts working on their own, which turns out to be a more explicit way to encode the information. Our model, which combines these two separate branches, leads to the best performance,” says Brian Chen, lead author of a paper on this technique.

    Chen, a 2023 graduate of Columbia University who conducted this research while a visiting student at the MIT-IBM Watson AI Lab, is joined on the paper by James Glass, senior research scientist, member of the MIT-IBM Watson AI Lab, and head of the Spoken Language Systems Group in the Computer Science and Artificial Intelligence Laboratory (CSAIL); Hilde Kuehne, a member of the MIT-IBM Watson AI Lab who is also affiliated with Goethe University Frankfurt; and others at MIT, Goethe University, the MIT-IBM Watson AI Lab, and Quality Match GmbH. The research will be presented at the Conference on Computer Vision and Pattern Recognition.

    Global and local learning

    Researchers usually teach models to perform spatio-temporal grounding using videos in which humans have annotated the start and end times of particular tasks.

    Not only is generating these data expensive, but it can be difficult for humans to figure out exactly what to label. If the action is “cooking a pancake,” does that action start when the chef begins mixing the batter or when she pours it into the pan?

    “This time, the task may be about cooking, but next time, it might be about fixing a car. There are so many different domains for people to annotate. But if we can learn everything without labels, it is a more general solution,” Chen says.

    For their approach, the researchers use unlabeled instructional videos and accompanying text transcripts from a website like YouTube as training data. These don’t need any special preparation.

    They split the training process into two pieces. For one, they teach a machine-learning model to look at the entire video to understand what actions happen at certain times. This high-level information is called a global representation.

    For the second, they teach the model to focus on a specific region in parts of the video where action is happening. In a large kitchen, for instance, the model might only need to focus on the wooden spoon a chef is using to mix pancake batter, rather than the entire counter. This fine-grained information is called a local representation.

    The researchers incorporate an additional component into their framework to mitigate misalignments that occur between narration and video. Perhaps the chef talks about cooking the pancake first and performs the action later.

    To develop a more realistic solution, the researchers focused on uncut videos that are several minutes long. In contrast, most AI techniques train using few-second clips that someone trimmed to show only one action.

    A new benchmark

    But when they came to evaluate their approach, the researchers couldn’t find an effective benchmark for testing a model on these longer, uncut videos — so they created one.

    To build their benchmark dataset, the researchers devised a new annotation technique that works well for identifying multistep actions. They had users mark the intersection of objects, like the point where a knife edge cuts a tomato, rather than drawing a box around important objects.

    “This is more clearly defined and speeds up the annotation process, which reduces the human labor and cost,” Chen says.

    Plus, having multiple people do point annotation on the same video can better capture actions that occur over time, like the flow of milk being poured. Not all annotators will mark the exact same point in the flow of liquid.

    When they used this benchmark to test their approach, the researchers found that it was more accurate at pinpointing actions than other AI techniques.

    Their method was also better at focusing on human-object interactions. For instance, if the action is “serving a pancake,” many other approaches might focus only on key objects, like a stack of pancakes sitting on a counter. Instead, their method focuses on the actual moment when the chef flips a pancake onto a plate.

    Next, the researchers plan to enhance their approach so models can automatically detect when text and narration are not aligned, and switch focus from one modality to the other. They also want to extend their framework to audio data, since there are usually strong correlations between actions and the sounds objects make.

    “AI research has made incredible progress towards creating models like ChatGPT that understand images. But our progress on understanding video is far behind. This work represents a significant step forward in that direction,” says Kate Saenko, a professor in the Department of Computer Science at Boston University who was not involved with this work.

    This research is funded, in part, by the MIT-IBM Watson AI Lab.
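    As a rough picture of the global and local branches described above, the PyTorch sketch below scores each frame (when) and each patch within a frame (where) against a narration embedding. The feature extractors, tensor shapes, and class names are placeholder assumptions for illustration, not the authors' architecture.

```python
# Illustrative two-branch grounding sketch (not the published model).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Grounder(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.frame_proj = nn.Linear(dim, dim)   # global (temporal) branch
        self.patch_proj = nn.Linear(dim, dim)   # local (spatial) branch
        self.text_proj = nn.Linear(dim, dim)

    def forward(self, frame_feats, patch_feats, text_feat):
        # frame_feats: (T, D), patch_feats: (T, P, D), text_feat: (D,)
        q = F.normalize(self.text_proj(text_feat), dim=-1)

        g = F.normalize(self.frame_proj(frame_feats), dim=-1)
        temporal_scores = g @ q                  # (T,): when the action happens

        l = F.normalize(self.patch_proj(patch_feats), dim=-1)
        spatial_scores = l @ q                   # (T, P): where it happens per frame

        return temporal_scores, spatial_scores

model = Grounder()
frames = torch.randn(120, 256)        # 120 frames of a long, uncut video
patches = torch.randn(120, 49, 256)   # a 7x7 patch grid per frame
query = torch.randn(256)              # embedding of "the chef flips the pancake"

t_scores, s_scores = model(frames, patches, query)
best_frame = t_scores.argmax().item()                  # most likely moment
best_patch = s_scores[best_frame].argmax().item()      # most likely region there
print(best_frame, best_patch)
```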

  • Turning up the heat on next-generation semiconductors

    The scorching surface of Venus, where temperatures can climb to 480 degrees Celsius (hot enough to melt lead), is an inhospitable place for humans and machines alike. One reason scientists have not yet been able to send a rover to the planet’s surface is that silicon-based electronics can’t operate in such extreme temperatures for an extended period of time.

    For high-temperature applications like Venus exploration, researchers have recently turned to gallium nitride, a unique material that can withstand temperatures of 500 degrees or more.

    The material is already used in some terrestrial electronics, like phone chargers and cell phone towers, but scientists don’t have a good grasp of how gallium nitride devices would behave at temperatures beyond 300 degrees, which is the operational limit of conventional silicon electronics.

    In a new paper published in Applied Physics Letters, which is part of a multiyear research effort, a team of scientists from MIT and elsewhere sought to answer key questions about the material’s properties and performance at extremely high temperatures.

    They studied the impact of temperature on the ohmic contacts in a gallium nitride device. Ohmic contacts are key components that connect a semiconductor device with the outside world.

    The researchers found that extreme temperatures didn’t cause significant degradation to the gallium nitride material or contacts. They were surprised to see that the contacts remained structurally intact even when held at 500 degrees Celsius for 48 hours.

    Understanding how contacts perform at extreme temperatures is an important step toward the group’s next goal of developing high-performance transistors that could operate on the surface of Venus. Such transistors could also be used on Earth in electronics for applications like extracting geothermal energy or monitoring the inside of jet engines.

    “Transistors are the heart of most modern electronics, but we didn’t want to jump straight to making a gallium nitride transistor because so much could go wrong. We first wanted to make sure the material and contacts could survive, and figure out how much they change as you increase the temperature. We’ll design our transistor from these basic material building blocks,” says John Niroula, an electrical engineering and computer science (EECS) graduate student and lead author of the paper.

    His co-authors include Qingyun Xie PhD ’24; Mengyang Yuan PhD ’22; EECS graduate students Patrick K. Darmawi-Iskandar and Pradyot Yadav; Gillian K. Micale, a graduate student in the Department of Materials Science and Engineering; senior author Tomás Palacios, the Clarence J. LeBel Professor of EECS, director of the Microsystems Technology Laboratories, and a member of the Research Laboratory of Electronics; as well as collaborators Nitul S. Rajput of the Technology Innovation Institute of the United Arab Emirates; Siddharth Rajan of Ohio State University; Yuji Zhao of Rice University; and Nadim Chowdhury of Bangladesh University of Engineering and Technology.

    Turning up the heat

    While gallium nitride has recently attracted much attention, the material is still decades behind silicon when it comes to scientists’ understanding of how its properties change under different conditions. One such property is resistance to the flow of electrical current through the material.

    A device’s overall resistance is inversely proportional to its size. But devices like semiconductors have contacts that connect them to other electronics. Contact resistance, which is caused by these electrical connections, remains fixed no matter the size of the device. Too much contact resistance can lead to higher power dissipation and slower operating frequencies for electronic circuits.

    “Especially when you go to smaller dimensions, a device’s performance often ends up being limited by contact resistance. People have a relatively good understanding of contact resistance at room temperature, but no one has really studied what happens when you go all the way up to 500 degrees,” Niroula says.

    For their study, the researchers used facilities at MIT.nano to build gallium nitride devices known as transfer length method structures, which are composed of a series of resistors. These devices enable them to measure the resistance of both the material and the contacts.

    They added ohmic contacts to these devices using the two most common methods. The first involves depositing metal onto gallium nitride and heating it to 825 degrees Celsius for about 30 seconds, a process called annealing.

    The second method involves removing chunks of gallium nitride and using a high-temperature technology to regrow highly doped gallium nitride in its place, a process led by Rajan and his team at Ohio State. The highly doped material contains extra electrons that can contribute to current conduction.

    “The regrowth method typically leads to lower contact resistance at room temperature, but we wanted to see if these methods still work well at high temperatures,” Niroula says.

    A comprehensive approach

    They tested devices in two ways. Their collaborators at Rice University, led by Zhao, conducted short-term tests by placing devices on a hot chuck that reached 500 degrees Celsius and taking immediate resistance measurements.

    At MIT, they conducted longer-term experiments by placing devices into a specialized furnace the group previously developed. They left devices inside for up to 72 hours to measure how resistance changes as a function of temperature and time.

    Microscopy experts at MIT.nano (Aubrey N. Penn) and the Technology Innovation Institute (Nitul S. Rajput) used state-of-the-art transmission electron microscopes to see how such high temperatures affect gallium nitride and the ohmic contacts at the atomic level.

    “We went in thinking the contacts or the gallium nitride material itself would degrade significantly, but we found the opposite. Contacts made with both methods seemed to be remarkably stable,” says Niroula.

    While it is difficult to measure resistance at such high temperatures, their results indicate that contact resistance seems to remain constant even at temperatures of 500 degrees, for around 48 hours. And just like at room temperature, the regrowth process led to better performance.

    The material did start to degrade after being in the furnace for 48 hours, but the researchers are already working to boost long-term performance. One strategy involves adding protective insulators to keep the material from being directly exposed to the high-temperature environment.

    Moving forward, the researchers plan to use what they learned in these experiments to develop high-temperature gallium nitride transistors.

    “In our group, we focus on innovative, device-level research to advance the frontiers of microelectronics, while adopting a systematic approach across the hierarchy, from the material level to the circuit level. Here, we have gone all the way down to the material level to understand things in depth. In other words, we have translated device-level advancements to circuit-level impact for high-temperature electronics, through design, modeling and complex fabrication. We are also immensely fortunate to have forged close partnerships with our longtime collaborators in this journey,” Xie says.

    This work was funded, in part, by the U.S. Air Force Office of Scientific Research, Lockheed Martin Corporation, the Semiconductor Research Corporation through the U.S. Defense Advanced Research Projects Agency, the U.S. Department of Energy, Intel Corporation, and the Bangladesh University of Engineering and Technology.

    Fabrication and microscopy were conducted at MIT.nano, the Semiconductor Epitaxy and Analysis Laboratory at Ohio State University, the Center for Advanced Materials Characterization at the University of Oregon, and the Technology Innovation Institute of the United Arab Emirates.
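    The transfer length method structures mentioned above yield contact resistance from a simple straight-line fit: total resistance between pads grows linearly with the gap between them, and the intercept reveals the contribution of the two contacts. The Python sketch below shows that standard extraction with made-up numbers; the contact width and resistance values are illustrative, not data from the paper.

```python
# Illustrative transfer-length-method (TLM) extraction (not the authors' analysis).
# R_total = 2*R_c + R_sheet * L / W, so a linear fit of resistance vs. gap length
# gives the contact resistance (intercept / 2) and sheet resistance (slope * W).
import numpy as np

W = 100e-6                                              # contact width, meters (assumed)
spacings = np.array([5, 10, 15, 20, 25]) * 1e-6         # gap lengths, meters
resistances = np.array([12.1, 19.0, 26.2, 32.9, 40.1])  # ohms, made-up measurements

slope, intercept = np.polyfit(spacings, resistances, 1)
r_contact = intercept / 2                               # ohms per contact
r_sheet = slope * W                                     # ohms per square

print(f"Contact resistance ~ {r_contact:.2f} ohm")
print(f"Sheet resistance   ~ {r_sheet:.1f} ohm/sq")
```

    Repeating the same fit at each furnace temperature is what would show whether the contact resistance drifts as the devices bake.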

  • Exploring the mysterious alphabet of sperm whales

    The allure of whales has stoked human consciousness for millennia, casting these ocean giants as enigmatic residents of the deep seas. From the biblical Leviathan to Herman Melville’s formidable Moby Dick, whales have been central to mythologies and folklore. And while cetology, or whale science, has improved our knowledge of these marine mammals in the past century in particular, studying whales has remained a formidable challenge.

    Now, thanks to machine learning, we’re a little closer to understanding these gentle giants. Researchers from the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and Project CETI (Cetacean Translation Initiative) recently used algorithms to decode the “sperm whale phonetic alphabet,” revealing sophisticated structures in sperm whale communication akin to human phonetics and communication systems in other animal species.

    In a new open-access study published in Nature Communications, the research shows that sperm whale codas, or short bursts of clicks that they use to communicate, vary significantly in structure depending on the conversational context, revealing a communication system far more intricate than previously understood.

    The Secret Language of Sperm Whales, Decoded (Video: MIT CSAIL)

    Nine thousand codas, collected from Eastern Caribbean sperm whale families observed by the Dominica Sperm Whale Project, proved an instrumental starting point in uncovering the creatures’ complex communication system. Alongside the data gold mine, the team used a mix of algorithms for pattern recognition and classification, as well as on-body recording equipment. It turned out that sperm whale communications were indeed not random or simplistic, but rather structured in a complex, combinatorial manner.

    The researchers identified something of a “sperm whale phonetic alphabet,” where various elements that researchers call “rhythm,” “tempo,” “rubato,” and “ornamentation” interplay to form a vast array of distinguishable codas. For example, the whales would systematically modulate certain aspects of their codas based on the conversational context, such as smoothly varying the duration of the calls — rubato — or adding extra ornamental clicks. But even more remarkably, they found that the basic building blocks of these codas could be combined in a combinatorial fashion, allowing the whales to construct a vast repertoire of distinct vocalizations.

    The experiments were conducted using acoustic bio-logging tags (specifically something called “D-tags”) deployed on whales from the Eastern Caribbean clan. These tags captured the intricate details of the whales’ vocal patterns. By developing new visualization and data analysis techniques, the CSAIL researchers found that individual sperm whales could emit various coda patterns in long exchanges, not just repeats of the same coda. These patterns, they say, are nuanced, and include fine-grained variations that other whales also produce and recognize.

    “We are venturing into the unknown, to decipher the mysteries of sperm whale communication without any pre-existing ground truth data,” says Daniela Rus, CSAIL director and professor of electrical engineering and computer science (EECS) at MIT. “Using machine learning is important for identifying the features of their communications and predicting what they say next. Our findings indicate the presence of structured information content and also challenge the prevailing belief among many linguists that complex communication is unique to humans. This is a step toward showing that other species have levels of communication complexity that have not been identified so far, deeply connected to behavior. Our next steps aim to decipher the meaning behind these communications and explore the societal-level correlations between what is being said and group actions.”

    Whaling around

    Sperm whales have the largest brains among all known animals. This is accompanied by very complex social behaviors between families and cultural groups, necessitating strong communication for coordination, especially in pressurized environments like deep sea hunting.

    Whales owe much to Roger Payne, former Project CETI advisor, whale biologist, conservationist, and MacArthur Fellow who was a major figure in elucidating their musical careers. In the noted 1971 Science article “Songs of Humpback Whales,” Payne documented how whales can sing. His work later catalyzed the “Save the Whales” movement, a successful and timely conservation initiative.

    “Roger’s research highlights the impact science can have on society. His finding that whales sing led to the Marine Mammal Protection Act and helped save several whale species from extinction. This interdisciplinary research now brings us one step closer to knowing what sperm whales are saying,” says David Gruber, lead and founder of Project CETI and distinguished professor of biology at the City University of New York.

    Today, CETI’s upcoming research aims to discern whether elements like rhythm, tempo, ornamentation, and rubato carry specific communicative intents, potentially providing insights into the “duality of patterning” — a linguistic phenomenon where simple elements combine to convey complex meanings previously thought unique to human language.

    Aliens among us

    “One of the intriguing aspects of our research is that it parallels the hypothetical scenario of contacting alien species. It’s about understanding a species with a completely different environment and communication protocols, where their interactions are distinctly different from human norms,” says Pratyusha Sharma, an MIT PhD student in EECS, CSAIL affiliate, and the study’s lead author. “We’re exploring how to interpret the basic units of meaning in their communication. This isn’t just about teaching animals a subset of human language, but decoding a naturally evolved communication system within their unique biological and environmental constraints. Essentially, our work could lay the groundwork for deciphering how an ‘alien civilization’ might communicate, providing insights into creating algorithms or systems to understand entirely unfamiliar forms of communication.”

    “Many animal species have repertoires of several distinct signals, but we are only beginning to uncover the extent to which they combine these signals to create new messages,” says Robert Seyfarth, a University of Pennsylvania professor emeritus of psychology who was not involved in the research. “Scientists are particularly interested in whether signal combinations vary according to the social or ecological context in which they are given, and the extent to which signal combinations follow discernible ‘rules’ that are recognized by listeners. The problem is particularly challenging in the case of marine mammals, because scientists usually cannot see their subjects or identify in complete detail the context of communication. Nonetheless, this paper offers new, tantalizing details of call combinations and the rules that underlie them in sperm whales.”

    Joining Sharma, Rus, and Gruber are two others from MIT, both CSAIL principal investigators and professors in EECS: Jacob Andreas and Antonio Torralba. They join Shane Gero, biology lead at CETI, founder of the Dominica Sperm Whale Project, and scientist-in-residence at Carleton University. The paper was funded by Project CETI via Dalio Philanthropies and Ocean X, Sea Grape Foundation, Rosamund Zander/Hansjorg Wyss, and Chris Anderson/Jacqueline Novogratz through The Audacious Project: a collaborative funding initiative housed at TED, with further support from the J.H. and E.V. Wade Fund at MIT.
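    To give a flavor of what features like tempo, rhythm, and ornamentation look like computationally, here is a toy Python sketch that turns one coda (a short burst of clicks) into such descriptors. The thresholds and the click times are made up for illustration; this is not Project CETI's analysis code.

```python
# Toy coda featurization: tempo, rhythm shape, and a crude ornamentation check.
import numpy as np

def coda_features(click_times):
    """click_times: sorted click onset times in seconds for one coda."""
    clicks = np.asarray(click_times, dtype=float)
    icis = np.diff(clicks)                     # inter-click intervals
    duration = clicks[-1] - clicks[0]          # overall length, related to tempo
    rhythm = icis / icis.sum()                 # normalized rhythm "shape"
    # Call the final click ornamental if it trails far behind the typical gap.
    ornament = bool(icis[-1] > 2.0 * np.median(icis))
    return {
        "n_clicks": len(clicks),
        "duration_s": round(float(duration), 3),
        "rhythm": np.round(rhythm, 3).tolist(),
        "ornament": ornament,
    }

# A made-up 5-click coda with a straggling final click.
print(coda_features([0.00, 0.15, 0.30, 0.45, 0.95]))
```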

  • Fostering research, careers, and community in materials science

    Gabrielle Wood, a junior at Howard University majoring in chemical engineering, is on a mission to improve the sustainability and life cycles of natural resources and materials. Her work in the Materials Initiative for Comprehensive Research Opportunity (MICRO) program has given her hands-on experience with many different aspects of research, including MATLAB programming, experimental design, data analysis, figure-making, and scientific writing.

    Wood is also one of 10 undergraduates from 10 universities around the United States to participate in the first MICRO Summit earlier this year. The internship program, developed by the MIT Department of Materials Science and Engineering (DMSE), first launched in fall 2021. Now in its third year, the program continues to grow, providing even more opportunities for non-MIT undergraduate students — including the MICRO Summit and the program’s expansion to include Northwestern University.

    “I think one of the most valuable aspects of the MICRO program is the ability to do research long term with an experienced professor in materials science and engineering,” says Wood. “My school has limited opportunities for undergraduate research in sustainable polymers, so the MICRO program allowed me to gain valuable experience in this field, which I would not otherwise have.”

    Like Wood, Griheydi Garcia, a senior chemistry major at Manhattan College, values the exposure to materials science, especially since she is not able to learn as much about it at her home institution.

    “I learned a lot about crystallography and defects in materials through the MICRO curriculum, especially through videos,” says Garcia. “The research itself is very valuable, as well, because we get to apply what we’ve learned through the videos in the research we do remotely.”

    Expanding research opportunities

    From the beginning, the MICRO program was designed as a fully remote, rigorous education and mentoring program targeted toward students from underserved backgrounds interested in pursuing graduate school in materials science or related fields. Interns are matched with faculty to work on their specific research interests.

    Jessica Sandland ’99, PhD ’05, principal lecturer in DMSE and co-founder of MICRO, says that research projects for the interns are designed to be work that they can do remotely, such as developing a machine-learning algorithm or a data analysis approach.

    “It’s important to note that it’s not just about what the program and faculty are bringing to the student interns,” says Sandland, a member of the MIT Digital Learning Lab, a joint program between MIT Open Learning and the Institute’s academic departments. “The students are doing real research and work, and creating things of real value. It’s very much an exchange.”

    Cécile Chazot PhD ’22, now an assistant professor of materials science and engineering at Northwestern University, had helped to establish MICRO at MIT from the very beginning. Once at Northwestern, she quickly realized that expanding MICRO to Northwestern would offer even more research opportunities to interns than by relying on MIT alone — leveraging the university’s strong materials science and engineering department, as well as offering resources for biomaterials research through Northwestern’s medical school. The program received funding from 3M and officially launched at Northwestern in fall 2023. Approximately half of the MICRO interns are now in the program with MIT and half are with Northwestern. Wood and Garcia both participate in the program via Northwestern.

    “By expanding to another school, we’ve been able to have interns work with a much broader range of research projects,” says Chazot. “It has become easier for us to place students with faculty and research that match their interests.”

    Building community

    The MICRO program received a Higher Education Innovation grant from the Abdul Latif Jameel World Education Lab, part of MIT Open Learning, to develop an in-person summit. In January 2024, interns visited MIT for three days of presentations, workshops, and campus tours — including a tour of the MIT.nano building — as well as various community-building activities.

    “A big part of MICRO is the community,” says Chazot. “A highlight of the summit was just seeing the students come together.”

    The summit also included panel discussions that allowed interns to gain insights and advice from graduate students and professionals. The graduate panel discussion included MIT graduate students Sam Figueroa (mechanical engineering), Isabella Caruso (DMSE), and Eliana Feygin (DMSE). The career panel was led by Chazot and included Jatin Patil PhD ’23, head of product at SiTration; Maureen Reitman ’90, ScD ’93, group vice president and principal engineer at Exponent; Lucas Caretta PhD ’19, assistant professor of engineering at Brown University; Raquel D’Oyen ’90, who holds a PhD from Northwestern University and is a senior engineer at Raytheon; and Ashley Kaiser MS ’19, PhD ’21, senior process engineer at 6K.

    Students also had an opportunity to share their work with each other through research presentations. Their presentations covered a wide range of topics, including: developing a computer program to calculate solubility parameters for polymers used in textile manufacturing; performing a life-cycle analysis of a photonic chip and evaluating its environmental impact in comparison to a standard silicon microchip; and applying machine learning algorithms to scanning transmission electron microscopy images of CrSBr, a two-dimensional magnetic material.

    “The summit was wonderful and the best academic experience I have had as a first-year college student,” says MICRO intern Gabriella La Cour, who is pursuing a major in chemistry and dual degree biomedical engineering at Spelman College and participates in MICRO through MIT. “I got to meet so many students who were all in grades above me … and I learned a little about how to navigate college as an upperclassman.”

    “I actually have an extremely close friendship with one of the students, and we keep in touch regularly,” adds La Cour. “Professor Chazot gave valuable advice about applications and recommendation letters that will be useful when I apply to REUs [Research Experiences for Undergraduates] and graduate schools.”

    Looking to the future, MICRO organizers hope to continue to grow the program’s reach.

    “We would love to see other schools taking on this model,” says Sandland. “There are a lot of opportunities out there. The more departments, research groups, and mentors that get involved with this program, the more impact it can have.”

  • An AI dataset carves new paths to tornado detection

    The return of spring in the Northern Hemisphere touches off tornado season. A tornado’s twisting funnel of dust and debris seems an unmistakable sight. But that sight can be obscured to radar, the tool of meteorologists. It’s hard to know exactly when a tornado has formed, or even why.

    A new dataset could hold answers. It contains radar returns from thousands of tornadoes that have hit the United States in the past 10 years. Storms that spawned tornadoes are flanked by other severe storms, some with nearly identical conditions, that never did. MIT Lincoln Laboratory researchers who curated the dataset, called TorNet, have now released it open source. They hope to enable breakthroughs in detecting one of nature’s most mysterious and violent phenomena.

    “A lot of progress is driven by easily available, benchmark datasets. We hope TorNet will lay a foundation for machine learning algorithms to both detect and predict tornadoes,” says Mark Veillette, the project’s co-principal investigator with James Kurdzo. Both researchers work in the Air Traffic Control Systems Group. 

    Along with the dataset, the team is releasing models trained on it. The models show promise for machine learning’s ability to spot a twister. Building on this work could open new frontiers for forecasters, helping them provide more accurate warnings that might save lives. 

    Swirling uncertainty

    About 1,200 tornadoes occur in the United States every year, causing millions to billions of dollars in economic damage and claiming 71 lives on average. Last year, one unusually long-lasting tornado killed 17 people and injured at least 165 others along a 59-mile path in Mississippi.  

    Yet tornadoes are notoriously difficult to forecast because scientists don’t have a clear picture of why they form. “We can see two storms that look identical, and one will produce a tornado and one won’t. We don’t fully understand it,” Kurdzo says.

    A tornado’s basic ingredients are thunderstorms with instability caused by rapidly rising warm air and wind shear that causes rotation. Weather radar is the primary tool used to monitor these conditions. But tornadoes lie too low to be detected, even when moderately close to the radar. As the radar beam with a given tilt angle travels farther from the antenna, it gets higher above the ground, mostly seeing reflections from rain and hail carried in the “mesocyclone,” the storm’s broad, rotating updraft. A mesocyclone doesn’t always produce a tornado.
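    A quick calculation shows the scale of the problem. The sketch below uses the standard 4/3-effective-earth-radius beam-height model; the antenna height and tilt angle are illustrative numbers, not tied to any particular radar site.

```python
# Back-of-the-envelope radar beam height, illustrating why low-level rotation
# escapes the radar at long range (standard 4/3 effective-earth-radius model).
import math

def beam_height_m(range_km, elev_deg, antenna_m=10.0):
    """Approximate height of the beam center above ground at a given range."""
    r = range_km * 1000.0
    re = 4.0 / 3.0 * 6_371_000.0          # effective earth radius, meters
    theta = math.radians(elev_deg)
    return math.sqrt(r**2 + re**2 + 2 * r * re * math.sin(theta)) - re + antenna_m

# At the lowest standard tilt (0.5 degrees), the beam is already well over a
# kilometer above the ground by 100 km from the radar.
for rng in (25, 50, 100, 150):
    print(rng, "km ->", round(beam_height_m(rng, 0.5)), "m above ground")
```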

    With this limited view, forecasters must decide whether or not to issue a tornado warning. They often err on the side of caution. As a result, the rate of false alarms for tornado warnings is more than 70 percent. “That can lead to boy-who-cried-wolf syndrome,” Kurdzo says.  

    In recent years, researchers have turned to machine learning to better detect and predict tornadoes. However, raw datasets and models have not always been accessible to the broader community, stifling progress. TorNet is filling this gap.

    The dataset contains more than 200,000 radar images, 13,587 of which depict tornadoes. The rest of the images are non-tornadic, taken from storms in one of two categories: randomly selected severe storms or false-alarm storms (those that led a forecaster to issue a warning but that didn’t produce a tornado).

    Each sample of a storm or tornado comprises two sets of six radar images. The two sets correspond to different radar sweep angles. The six images portray different radar data products, such as reflectivity (showing precipitation intensity) or radial velocity (indicating if winds are moving toward or away from the radar).
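    The layout described above amounts to a small stack of image channels per sample. The Python sketch below only illustrates that shape; the product names, patch size, and dictionary keys are placeholders, not the actual TorNet file format or API.

```python
# Sketch of one sample: two sweep angles x six radar products, plus a label.
import numpy as np

PRODUCTS = ["reflectivity", "radial_velocity", "spectrum_width",
            "differential_reflectivity", "correlation_coefficient",
            "specific_differential_phase"]
SWEEPS = 2          # two elevation angles
H = W = 120         # image patch size (placeholder)

sample = {
    "radar": np.zeros((SWEEPS, len(PRODUCTS), H, W), dtype=np.float32),
    "label": 1,     # 1 = tornado observed, 0 = severe but non-tornadic
}

# Flatten sweeps and products into channels for a standard image model: (12, H, W).
x = sample["radar"].reshape(-1, H, W)
print(x.shape)      # (12, 120, 120)
```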

    A challenge in curating the dataset was first finding tornadoes. Within the corpus of weather radar data, tornadoes are extremely rare events. The team then had to balance those tornado samples with difficult non-tornado samples. If the dataset were too easy, say by comparing tornadoes to snowstorms, an algorithm trained on the data would likely over-classify storms as tornadic.

    “What’s beautiful about a true benchmark dataset is that we’re all working with the same data, with the same level of difficulty, and can compare results,” Veillette says. “It also makes meteorology more accessible to data scientists, and vice versa. It becomes easier for these two parties to work on a common problem.”

    Both researchers represent the progress that can come from cross-collaboration. Veillette is a mathematician and algorithm developer who has long been fascinated by tornadoes. Kurdzo is a meteorologist by training and a signal processing expert. In grad school, he chased tornadoes with custom-built mobile radars, collecting data to analyze in new ways.

    “This dataset also means that a grad student doesn’t have to spend a year or two building a dataset. They can jump right into their research,” Kurdzo says.

    This project was funded by Lincoln Laboratory’s Climate Change Initiative, which aims to leverage the laboratory’s diverse technical strengths to help address climate problems threatening human health and global security.

    Chasing answers with deep learning

    Using the dataset, the researchers developed baseline artificial intelligence (AI) models. They were particularly eager to apply deep learning, a form of machine learning that excels at processing visual data. On its own, deep learning can extract features (key observations that an algorithm uses to make a decision) from images across a dataset. Other machine learning approaches require humans to first manually label features. 
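    As a rough illustration of what a deep learning baseline for this task can look like, here is a minimal convolutional classifier in PyTorch that maps the stacked radar channels to a tornado/no-tornado probability. It is a simplified stand-in, not one of the released TorNet baseline models.

```python
# Minimal CNN baseline sketch for tornado detection (illustrative only).
import torch
import torch.nn as nn

class TornadoCNN(nn.Module):
    def __init__(self, in_channels=12):        # 2 sweeps x 6 products
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(128, 1)           # tornado / no-tornado logit

    def forward(self, x):                       # x: (B, 12, H, W)
        return self.head(self.features(x))

model = TornadoCNN()
logits = model(torch.randn(4, 12, 120, 120))
probs = torch.sigmoid(logits)                   # probability each sample is tornadic
print(probs.shape)                              # torch.Size([4, 1])
```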

    “We wanted to see if deep learning could rediscover what people normally look for in tornadoes and even identify new things that typically aren’t searched for by forecasters,” Veillette says.

    The results are promising. Their deep learning model performed similarly to or better than all tornado-detecting algorithms known in the literature. The trained algorithm correctly classified 50 percent of weaker EF-1 tornadoes and over 85 percent of tornadoes rated EF-2 or higher, which make up the most devastating and costly occurrences of these storms.

    They also evaluated two other types of machine-learning models, and one traditional model to compare against. The source code and parameters of all these models are freely available. The models and dataset are also described in a paper submitted to a journal of the American Meteorological Society (AMS). Veillette presented this work at the AMS Annual Meeting in January.

    “The biggest reason for putting our models out there is for the community to improve upon them and do other great things,” Kurdzo says. “The best solution could be a deep learning model, or someone might find that a non-deep learning model is actually better.”

    TorNet could be useful in the weather community for other uses too, such as conducting large-scale case studies on storms. It could also be augmented with other data sources, like satellite imagery or lightning maps. Fusing multiple types of data could improve the accuracy of machine learning models.

    Taking steps toward operations

    On top of detecting tornadoes, Kurdzo hopes that models might help unravel the science of why they form.

    “As scientists, we see all these precursors to tornadoes — an increase in low-level rotation, a hook echo in reflectivity data, specific differential phase (KDP) foot and differential reflectivity (ZDR) arcs. But how do they all go together? And are there physical manifestations we don’t know about?” he asks.

    Teasing out those answers might be possible with explainable AI. Explainable AI refers to methods that allow a model to provide its reasoning, in a format understandable to humans, of why it came to a certain decision. In this case, these explanations might reveal physical processes that happen before tornadoes. This knowledge could help train forecasters, and models, to recognize the signs sooner. 

    “None of this technology is ever meant to replace a forecaster. But perhaps someday it could guide forecasters’ eyes in complex situations, and give a visual warning to an area predicted to have tornadic activity,” Kurdzo says.

    Such assistance could be especially useful as radar technology improves and future networks potentially grow denser. Data refresh rates in a next-generation radar network are expected to increase from every five minutes to approximately one minute, perhaps faster than forecasters can interpret the new information. Because deep learning can process huge amounts of data quickly, it could be well-suited for monitoring radar returns in real time, alongside humans. Tornadoes can form and disappear in minutes.

    But the path to an operational algorithm is a long road, especially in safety-critical situations, Veillette says. “I think the forecaster community is still, understandably, skeptical of machine learning. One way to establish trust and transparency is to have public benchmark datasets like this one. It’s a first step.”

    The next steps, the team hopes, will be taken by researchers across the world who are inspired by the dataset and energized to build their own algorithms. Those algorithms will in turn go into test beds, where they’ll eventually be shown to forecasters, to start a process of transitioning into operations.

    In the end, the path could circle back to trust.

    “We may never get more than a 10- to 15-minute tornado warning using these tools. But if we could lower the false-alarm rate, we could start to make headway with public perception,” Kurdzo says. “People are going to use those warnings to take the action they need to save their lives.”