More stories

  • Scientists preserve DNA in an amber-like polymer

    In the movie “Jurassic Park,” scientists extracted DNA that had been preserved in amber for millions of years, and used it to create a population of long-extinct dinosaurs.

    Inspired partly by that film, MIT researchers have developed a glassy, amber-like polymer that can be used for long-term storage of DNA, whether entire human genomes or digital files such as photos.

    Most current methods for storing DNA require freezing temperatures, so they consume a great deal of energy and are not feasible in many parts of the world. In contrast, the new amber-like polymer can store DNA at room temperature while protecting the molecules from damage caused by heat or water.

    The researchers showed that they could use this polymer to store DNA sequences encoding the theme music from “Jurassic Park,” as well as an entire human genome. They also demonstrated that the DNA can be easily removed from the polymer without damaging it.

    “Freezing DNA is the number one way to preserve it, but it’s very expensive, and it’s not scalable,” says James Banal, a former MIT postdoc. “I think our new preservation method is going to be a technology that may drive the future of storing digital information on DNA.”

    Banal and Jeremiah Johnson, the A. Thomas Geurtin Professor of Chemistry at MIT, are the senior authors of the study, published yesterday in the Journal of the American Chemical Society. Former MIT postdoc Elizabeth Prince and MIT postdoc Ho Fung Cheng are the lead authors of the paper.

    Capturing DNA

    DNA, a very stable molecule, is well-suited for storing massive amounts of information, including digital data. Digital storage systems encode text, photos, and other kinds of information as a series of 0s and 1s. This same information can be encoded in DNA using the four nucleotides that make up the genetic code: A, T, G, and C. For example, G and C could be used to represent 0 while A and T represent 1.

    DNA offers a way to store this digital information at very high density: In theory, a coffee mug full of DNA could store all of the world’s data. DNA is also very stable and relatively easy to synthesize and sequence.
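
    To make that concrete, here is a toy Python sketch of one such mapping. The alternating scheme below is purely illustrative (a hypothetical codec, not the researchers' encoding), and a practical system would add error correction and constraints on the synthesized sequences.

    ```python
    # Toy bit-to-nucleotide mapping (hypothetical scheme, for illustration only):
    # 0 -> G or C, 1 -> A or T, alternating to avoid long runs of one base.

    def bits_to_dna(bits: str) -> str:
        """Encode a bit string as nucleotides: 0 -> G/C, 1 -> A/T."""
        zero, one = "GC", "AT"
        return "".join((zero if b == "0" else one)[i % 2] for i, b in enumerate(bits))

    def dna_to_bits(seq: str) -> str:
        """Decode nucleotides back to bits."""
        return "".join("0" if base in "GC" else "1" for base in seq)

    message = "0100100001101001"        # the ASCII bits for "Hi"
    strand = bits_to_dna(message)
    assert dna_to_bits(strand) == message
    print(strand)                        # GTGCACGCGTACACGT
    ```
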
    In 2021, Banal and his postdoc advisor, Mark Bathe, an MIT professor of biological engineering, developed a way to store DNA in particles of silica, which could be labeled with tags that revealed the particles’ contents. That work led to a spinout called Cache DNA.

    One downside to that storage system is that it takes several days to embed DNA into the silica particles. Furthermore, removing the DNA from the particles requires hydrofluoric acid, which can be hazardous to workers handling the DNA.

    To come up with alternative storage materials, Banal began working with Johnson and members of his lab. Their idea was to use a type of polymer known as a degradable thermoset, which consists of polymers that form a solid when heated. The material also includes cleavable links that can be easily broken, allowing the polymer to be degraded in a controlled way.

    “With these deconstructable thermosets, depending on what cleavable bonds we put into them, we can choose how we want to degrade them,” Johnson says.

    For this project, the researchers decided to make their thermoset polymer from styrene and a cross-linker, which together form an amber-like thermoset called cross-linked polystyrene. This thermoset is also very hydrophobic, so it can prevent moisture from getting in and damaging the DNA. To make the thermoset degradable, the styrene monomers and cross-linkers are copolymerized with monomers called thionolactones. These links can be broken by treating them with a molecule called cysteamine.

    Because styrene is so hydrophobic, the researchers had to come up with a way to entice DNA — a hydrophilic, negatively charged molecule — into the styrene. To do that, they identified a combination of three monomers that they could turn into polymers that dissolve DNA by helping it interact with styrene. Each of the monomers has different features that cooperate to get the DNA out of water and into the styrene. There, the DNA forms spherical complexes, with charged DNA in the center and hydrophobic groups forming an outer layer that interacts with styrene. When heated, this solution becomes a solid glass-like block, embedded with DNA complexes.

    The researchers dubbed their method T-REX (Thermoset-REinforced Xeropreservation). The process of embedding DNA into the polymer network takes a few hours, but that could become shorter with further optimization, the researchers say.

    To release the DNA, the researchers first add cysteamine, which cleaves the bonds holding the polystyrene thermoset together, breaking it into smaller pieces. Then, a detergent called SDS can be added to remove the DNA from polystyrene without damaging it.

    Storing information

    Using these polymers, the researchers showed that they could encapsulate DNA of varying length, from tens of nucleotides up to an entire human genome (more than 50,000 base pairs). They were able to store DNA encoding the Emancipation Proclamation and the MIT logo, in addition to the theme music from “Jurassic Park.”

    After storing the DNA and then removing it, the researchers sequenced it and found that no errors had been introduced, which is a critical feature of any digital data storage system.

    The researchers also showed that the thermoset polymer can protect DNA from temperatures up to 75 degrees Celsius (167 degrees Fahrenheit). They are now working on ways to streamline the process of making the polymers and forming them into capsules for long-term storage.

    Cache DNA, a company started by Banal and Bathe, with Johnson as a member of the scientific advisory board, is now working on further developing DNA storage technology. The earliest application they envision is storing genomes for personalized medicine, and they also anticipate that these stored genomes could undergo further analysis as better technology is developed in the future.

    “The idea is, why don’t we preserve the master record of life forever?” Banal says. “Ten years or 20 years from now, when technology has advanced way more than we could ever imagine today, we could learn more and more things. We’re still in the very infancy of understanding the genome and how it relates to disease.”

    The research was funded by the National Science Foundation.

  • Researchers use large language models to help robots navigate

    Someday, you may want your home robot to carry a load of dirty clothes downstairs and deposit them in the washing machine in the far-left corner of the basement. The robot will need to combine your instructions with its visual observations to determine the steps it should take to complete this task.

    For an AI agent, this is easier said than done. Current approaches often utilize multiple hand-crafted machine-learning models to tackle different parts of the task, which require a great deal of human effort and expertise to build. These methods, which use visual representations to directly make navigation decisions, demand massive amounts of visual data for training, which are often hard to come by.

    To overcome these challenges, researchers from MIT and the MIT-IBM Watson AI Lab devised a navigation method that converts visual representations into pieces of language, which are then fed into one large language model that handles all parts of the multistep navigation task.

    Rather than encoding visual features from images of a robot’s surroundings as visual representations, which is computationally intensive, their method creates text captions that describe the robot’s point of view. A large language model uses the captions to predict the actions a robot should take to fulfill a user’s language-based instructions.

    Because their method utilizes purely language-based representations, they can use a large language model to efficiently generate a huge amount of synthetic training data.

    While this approach does not outperform techniques that use visual features, it performs well in situations that lack enough visual data for training. The researchers found that combining their language-based inputs with visual signals leads to better navigation performance.

    “By purely using language as the perceptual representation, ours is a more straightforward approach. Since all the inputs can be encoded as language, we can generate a human-understandable trajectory,” says Bowen Pan, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on this approach.

    Pan’s co-authors include his advisor, Aude Oliva, director of strategic industry engagement at the MIT Schwarzman College of Computing, MIT director of the MIT-IBM Watson AI Lab, and a senior research scientist in the Computer Science and Artificial Intelligence Laboratory (CSAIL); Philip Isola, an associate professor of EECS and a member of CSAIL; senior author Yoon Kim, an assistant professor of EECS and a member of CSAIL; and others at the MIT-IBM Watson AI Lab and Dartmouth College. The research will be presented at the Conference of the North American Chapter of the Association for Computational Linguistics.

    Solving a vision problem with language

    Since large language models are the most powerful machine-learning models available, the researchers sought to incorporate them into the complex task known as vision-and-language navigation, Pan says.

    But such models take text-based inputs and can’t process visual data from a robot’s camera. So, the team needed to find a way to use language instead.

    Their technique utilizes a simple captioning model to obtain text descriptions of a robot’s visual observations. These captions are combined with language-based instructions and fed into a large language model, which decides what navigation step the robot should take next.

    The large language model outputs a caption of the scene the robot should see after completing that step. This is used to update the trajectory history so the robot can keep track of where it has been.

    The model repeats these processes to generate a trajectory that guides the robot to its goal, one step at a time.

    To streamline the process, the researchers designed templates so observation information is presented to the model in a standard form — as a series of choices the robot can make based on its surroundings.

    For instance, a caption might say “to your 30-degree left is a door with a potted plant beside it, to your back is a small office with a desk and a computer,” etc. The model chooses whether the robot should move toward the door or the office.
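
    To make the loop concrete, here is a minimal Python sketch. The helper names (`get_caption`, `query_llm`) and the prompt template are hypothetical stand-ins for the captioning model and the LLM the researchers use, not their actual interfaces.

    ```python
    # Minimal sketch of the captions-as-prompts navigation loop described above.
    # `get_caption` and `query_llm` are hypothetical stubs, not real model calls.

    def get_caption(observation) -> str:
        """Stand-in captioner: returns a text description of the robot's view."""
        return observation  # a real system would run a vision-language captioner

    def query_llm(prompt: str) -> str:
        """Stand-in LLM call that returns the next action as text."""
        return "move toward the door"  # placeholder response

    def navigate(instruction: str, observations) -> list[str]:
        history: list[str] = []
        actions: list[str] = []
        for obs in observations:
            caption = get_caption(obs)
            prompt = (
                f"Instruction: {instruction}\n"
                f"Trajectory so far: {' -> '.join(history) or '(start)'}\n"
                f"Current view: {caption}\n"
                "Choose the next action:"
            )
            action = query_llm(prompt)
            actions.append(action)
            history.append(f"{caption} | {action}")  # update trajectory history
        return actions

    views = ["to your 30-degree left is a door with a potted plant beside it"]
    print(navigate("carry the laundry to the basement", views))
    ```
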
    “One of the biggest challenges was figuring out how to encode this kind of information into language in a proper way to make the agent understand what the task is and how they should respond,” Pan says.

    Advantages of language

    When they tested this approach, while it could not outperform vision-based techniques, they found that it offered several advantages.

    First, because text requires fewer computational resources to synthesize than complex image data, their method can be used to rapidly generate synthetic training data. In one test, they generated 10,000 synthetic trajectories based on 10 real-world, visual trajectories.

    The technique can also bridge the gap that can prevent an agent trained with a simulated environment from performing well in the real world. This gap often occurs because computer-generated images can appear quite different from real-world scenes due to elements like lighting or color. But language that describes a synthetic versus a real image would be much harder to tell apart, Pan says.

    Also, the representations their model uses are easier for a human to understand because they are written in natural language.

    “If the agent fails to reach its goal, we can more easily determine where it failed and why it failed. Maybe the history information is not clear enough or the observation ignores some important details,” Pan says.

    In addition, their method could be applied more easily to varied tasks and environments because it uses only one type of input. As long as data can be encoded as language, they can use the same model without making any modifications.

    But one disadvantage is that their method naturally loses some information that would be captured by vision-based models, such as depth information.

    However, the researchers were surprised to see that combining language-based representations with vision-based methods improves an agent’s ability to navigate.

    “Maybe this means that language can capture some higher-level information that cannot be captured with pure vision features,” he says.

    This is one area the researchers want to continue exploring. They also want to develop a navigation-oriented captioner that could boost the method’s performance. In addition, they want to probe the ability of large language models to exhibit spatial awareness and see how this could aid language-based navigation.

    This research is funded, in part, by the MIT-IBM Watson AI Lab.

  • Making climate models relevant for local decision-makers

    Climate models are a key technology in predicting the impacts of climate change. By running simulations of the Earth’s climate, scientists and policymakers can estimate conditions like sea level rise, flooding, and rising temperatures, and make decisions about how to appropriately respond. But current climate models struggle to provide this information quickly or affordably enough to be useful on smaller scales, such as the size of a city.

    Now, authors of a new open-access paper published in the Journal of Advances in Modeling Earth Systems have found a method to leverage machine learning to utilize the benefits of current climate models, while reducing the computational costs needed to run them.

    “It turns the traditional wisdom on its head,” says Sai Ravela, a principal research scientist in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS) who wrote the paper with EAPS postdoc Anamitra Saha.

    Traditional wisdom

    In climate modeling, downscaling is the process of using a global climate model with coarse resolution to generate finer details over smaller regions. Imagine a digital picture: A global model is a large picture of the world with a low number of pixels. To downscale, you zoom in on just the section of the photo you want to look at — for example, Boston. But because the original picture was low resolution, the new version is blurry; it doesn’t give enough detail to be particularly useful.

    “If you go from coarse resolution to fine resolution, you have to add information somehow,” explains Saha. Downscaling attempts to add that information back in by filling in the missing pixels. “That addition of information can happen two ways: Either it can come from theory, or it can come from data.”

    Conventional downscaling often involves using models built on physics (such as the process of air rising, cooling, and condensing, or the landscape of the area), and supplementing them with statistical data taken from historical observations. But this method is computationally taxing: It takes a lot of time and computing power to run, while also being expensive.

    A little bit of both

    In their new paper, Saha and Ravela have figured out a way to add the data another way. They’ve employed a technique in machine learning called adversarial learning. It uses two machines: One generates data to go into the photo, while the other judges the sample by comparing it to actual data. If it thinks the image is fake, then the first machine has to try again until it convinces the second machine. The end goal of the process is to create super-resolution data.
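
    To illustrate the setup, here is a minimal adversarial-training sketch, assuming PyTorch. The two toy networks and random arrays stand in for real coarse- and fine-resolution climate fields; this is the generic generator-versus-judge loop, not the authors' model.

    ```python
    # Minimal sketch of adversarial downscaling ("super-resolution").
    # Toy networks and random data; illustrative only.
    import torch
    import torch.nn as nn

    generator = nn.Sequential(               # fills in the missing "pixels"
        nn.Upsample(scale_factor=4, mode="bilinear"),
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, 3, padding=1),
    )
    discriminator = nn.Sequential(           # judges generated vs. real fields
        nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Flatten(), nn.Linear(16 * 32 * 32, 1),
    )
    bce = nn.BCEWithLogitsLoss()
    g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-4)

    coarse = torch.randn(8, 1, 16, 16)       # toy coarse-resolution inputs
    fine = torch.randn(8, 1, 64, 64)         # toy high-resolution observations
    real, fake = torch.ones(8, 1), torch.zeros(8, 1)

    for step in range(100):
        # Judge: real fine-scale fields should score 1, generated fields 0.
        generated = generator(coarse).detach()
        d_loss = bce(discriminator(fine), real) + bce(discriminator(generated), fake)
        d_opt.zero_grad(); d_loss.backward(); d_opt.step()

        # Generator: try again until the judge is convinced its output is real.
        g_loss = bce(discriminator(generator(coarse)), real)
        g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    ```
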
    Using machine learning techniques like adversarial learning is not a new idea in climate modeling; where it currently struggles is in its inability to handle large amounts of basic physics, like conservation laws. The researchers discovered that simplifying the physics going in and supplementing it with statistics from the historical data was enough to generate the results they needed.

    “If you augment machine learning with some information from the statistics and simplified physics both, then suddenly, it’s magical,” says Ravela. He and Saha started with estimating extreme rainfall amounts by removing more complex physics equations and focusing on water vapor and land topography. They then generated general rainfall patterns for mountainous Denver and flat Chicago alike, applying historical accounts to correct the output. “It’s giving us extremes, like the physics does, at a much lower cost. And it’s giving us similar speeds to statistics, but at much higher resolution.”

    Another unexpected benefit of the results was how little training data was needed. “The fact that only a little bit of physics and a little bit of statistics was enough to improve the performance of the ML [machine learning] model … was actually not obvious from the beginning,” says Saha. It only takes a few hours to train, and it can produce results in minutes, an improvement over the months other models take to run.

    Quantifying risk quickly

    Being able to run the models quickly and often is a key requirement for stakeholders such as insurance companies and local policymakers. Ravela gives the example of Bangladesh: By seeing how extreme weather events will impact the country, decisions about what crops should be grown or where populations should migrate to can be made considering a very broad range of conditions and uncertainties as soon as possible.

    “We can’t wait months or years to be able to quantify this risk,” he says. “You need to look out way into the future and at a large number of uncertainties to be able to say what might be a good decision.”

    While the current model only looks at extreme precipitation, training it to examine other critical events, such as tropical storms, winds, and temperature, is the next step of the project. With a more robust model, Ravela is hoping to apply it to other places like Boston and Puerto Rico as part of a Climate Grand Challenges project.

    “We’re very excited both by the methodology that we put together, as well as the potential applications that it could lead to,” he says.

  • MIT Faculty Founder Initiative announces three winners of entrepreneurship awards

    Patients with intractable cancers, chronic pain sufferers, and people who depend on battery-powered medical implants may all benefit from the ideas presented at the 2023-24 MIT-Royalty Pharma Prize Competition’s recent awards. This year’s top prizes went to researchers and biotech entrepreneurs Anne Carpenter, Frederike Petzschner, and Betar Gallant ’08, SM ’10, PhD ’13.

    MIT Faculty Founder Initiative Executive Director Kit Hickey MBA ’13 describes the time and hard work the three awardees and other finalists devoted to the initiative and its mission of cultivating female faculty in biotech to cross the chasm between laboratory research and its clinical application.

    “They have taken the first brave step of getting off the bench when they already work seven days a week. They have carved out time from their facilities, from their labs, from their lives in order to put themselves out there and leap into entrepreneurship,” Hickey says. “They’ve done it because they each want to see their innovations out in the world improving patients’ lives.”

    Carpenter, senior director of the Imaging Platform at the Broad Institute of MIT and Harvard, where she is also an institute scientist, won the competition’s $250,000 2023-24 MIT-Royalty Pharma Faculty Founder Prize Competition Grand Prize. Carpenter specializes in using microscopy imaging of cells and computational methods such as machine learning to accelerate the identification of chemical compounds with therapeutic potential to, for instance, shrink tumors. The identified compounds are then tested in biological assays that model the tumor ecosystem to see how the compounds would perform on actual tumors.

    Carpenter’s startup, SyzOnc, launched in April, a feat Carpenter associates with the assistance provided by the MIT Faculty Founder Initiative. Participants in the program receive mentorship, stipends, and advice from industry experts, as well as help with incorporating, assembling a management team, fundraising, and intellectual property strategy.

    “The program offered key insights and input at major decision points that gave us the momentum to open our doors,” Carpenter says, adding that participating “offered validation of our scientific ideas and business plan. That kind of credibility is really helpful to raising funding, particularly for those starting their first company.”

    Carpenter says she and her team will employ “the best biological and computational advancements to develop new therapies to fight tumors such as sarcoma, pancreatic cancer, and glioblastoma, which currently have dismal survival rates.”

    The MIT Faculty Founder Initiative was begun in 2020 by the School of Engineering and the Martin Trust Center for MIT Entrepreneurship, based on research findings by Sangeeta Bhatia, the Wilson Professor of Health Sciences and Technology, professor of electrical engineering and computer science, and faculty director of the MIT Faculty Founder Initiative; Susan Hockfield, MIT Corporation life member, MIT president emerita, and professor of neuroscience; and Nancy Hopkins, professor emerita of biology. An investigation they conducted, presented in a 2021 MIT Faculty Newsletter, showed that only about 9 percent of MIT’s 250 biotech startups were started by women, whereas women made up 22 percent of the faculty.

    That data showed that “technologies from female labs were not getting out in the world, resulting in lost potential,” Hickey says.

    “The MIT Faculty Founder Initiative plays a pivotal role in MIT’s entrepreneurship ecosystem.
    It elevates visionary faculty working on solutions in biotech by providing them with critical mentorship and resources, ensuring these solutions can be rapidly scaled to market,” says Anantha Chandrakasan, MIT’s chief innovation and strategy officer, dean of engineering, and Vannevar Bush Professor of Electrical Engineering and Computer Science.

    The MIT Faculty Founder Initiative Prize Competition was launched in 2021. At this year’s competition, the judges represented academia, health care, biotech, and financial investment. In addition to awarding a grand prize, the competition also distributed two $100,000 prizes, one to a researcher from Brown University, the first university to collaborate with MIT in the entrepreneurship program.

    This year’s winner of the $100,000 2023-24 MIT-Royalty Pharma Faculty Founder Prize Competition Runner-Up Prize was Frederike Petzschner, assistant professor at the Carney Institute for Brain Science at Brown, for her SOMA startup’s digital pain management system, which helps sufferers to manage and relieve chronic pain.

    “We leverage cutting-edge technology to provide precision care, focusing specifically on personalized cognitive interventions tailored to each patient’s unique needs,” she says.

    With her startup on the verge of incorporating, Petzschner says, “without the Faculty Founder Initiative, our startup would still be pursuing commercialization, but undoubtedly at a much earlier and perhaps less structured stage.”

    “The constant support from the program organizers and our mentors was truly transformative,” she says.

    Gallant, associate professor of mechanical engineering at MIT and winner of the $100,000 2023-24 MIT-Royalty Pharma Faculty Founder Prize Competition Breakthrough Prize, is leading the startup Halogen. An expert on advanced battery technologies, Gallant and her team have developed high-density battery storage to improve the lifetime and performance of such medical devices as pacemakers.

    “If you can extend lifetime, you’re talking about longer times between invasive replacement surgeries, which really affects patient quality of life,” Gallant told MIT News in a 2022 interview.

    Jim Reddoch, executive vice president and chief scientific officer of sponsor Royalty Pharma, emphasized his company’s support for both the competition and the MIT Faculty Founder Initiative program.

    “Royalty Pharma is thrilled to support the 2023-2024 MIT-Royalty Pharma Prize Competition and accelerate life sciences innovation at leading research institutions such as MIT and Brown,” Reddoch says. “By supporting the amazing female entrepreneurs in this program, we hope to catalyze more ideas from the lab to biotech companies and eventually into the hands of patients.”

    Bhatia has referred to the MIT Faculty Founder Initiative as a “playbook” for directing female faculty’s high-impact, uncommercialized technologies into the world of health care.

    “To me, changing the game means that when you have an invention in your lab, you’re connected enough to the ecosystem to know when it should be a company, and to know who to call and how to get your first investors and how to quickly catalyze your team — and you’re off to the races,” Bhatia says. “Every one of those inventions can be a medicine as quickly as possible.
    That’s the future I imagine.”

    Co-founder Hockfield referred to MIT’s role in promoting entrepreneurship in remarks at the award ceremony, alluding to Brown University’s having joined the effort.

    “MIT has always been a leader in entrepreneurship,” Hockfield says. “Part of leading is sharing with the world. The collaboration with Brown University for this cohort shows that MIT can share our approach with the world, allowing other universities to follow our model of supporting academic entrepreneurship.”

    Hickey says that when she and Bhatia asked 30 female faculty members three years ago why they were not commercializing their technologies, many said they had no access to the appropriate networks of mentors, investors, role models, and business partners necessary to begin the journey.

    “We encourage you to become this network that has been missing,” Hickey told the awards event audience, which included an array of leaders in the biotech world. “Get to know our amazing faculty members and continue to support them. Become a part of this movement.”

  • A data-driven approach to making better choices

    Imagine a world in which some important decision — a judge’s sentencing recommendation, a child’s treatment protocol, which person or business should receive a loan — was made more reliable because a well-designed algorithm helped a key decision-maker arrive at a better choice. A new MIT economics course is investigating these interesting possibilities.

    Class 14.163 (Algorithms and Behavioral Science) is a new cross-disciplinary course focused on behavioral economics, which studies the cognitive capacities and limitations of human beings. The course was co-taught this past spring by assistant professor of economics Ashesh Rambachan and visiting lecturer Sendhil Mullainathan.

    Rambachan studies the economic applications of machine learning, focusing on algorithmic tools that drive decision-making in the criminal justice system and consumer lending markets. He also develops methods for determining causation using cross-sectional and dynamic data.

    Mullainathan will soon join the MIT departments of Electrical Engineering and Computer Science and Economics as a professor. His research uses machine learning to understand complex problems in human behavior, social policy, and medicine. Mullainathan co-founded the Abdul Latif Jameel Poverty Action Lab (J-PAL) in 2003.

    The new course’s goals are both scientific (to understand people) and policy-driven (to improve society by improving decisions). Rambachan believes that machine-learning algorithms provide new tools for both the scientific and applied goals of behavioral economics.

    “The course investigates the deployment of computer science, artificial intelligence (AI), economics, and machine learning in service of improved outcomes and reduced instances of bias in decision-making,” Rambachan says.

    There are opportunities, Rambachan believes, for constantly evolving digital tools like AI, machine learning, and large language models (LLMs) to help reshape everything from discriminatory practices in criminal sentencing to health-care outcomes among underserved populations.

    Students learn how to use machine learning tools with three main objectives: to understand what they do and how they do it, to formalize behavioral economics insights so they compose well within machine learning tools, and to understand areas and topics where the integration of behavioral economics and algorithmic tools might be most fruitful.

    Students also produce ideas, develop associated research, and see the bigger picture. They’re led to understand where an insight fits and see where the broader research agenda is leading. Participants learn to think critically about what supervised LLMs can (and cannot) do, to understand how to integrate those capacities with the models and insights of behavioral economics, and to recognize the most fruitful areas for the application of what investigations uncover.

    The dangers of subjectivity and bias

    According to Rambachan, behavioral economics acknowledges that biases and mistakes exist throughout our choices, even absent algorithms. “The data used by our algorithms exist outside computer science and machine learning, and instead are often produced by people,” he continues. “Understanding behavioral economics is therefore essential to understanding the effects of algorithms and how to better build them.”

    Rambachan sought to make the course accessible regardless of attendees’ academic backgrounds.
    The class included advanced degree students from a variety of disciplines.

    By offering students a cross-disciplinary, data-driven approach to investigating and discovering ways in which algorithms might improve problem-solving and decision-making, Rambachan hopes to build a foundation on which to redesign existing systems of jurisprudence, health care, consumer lending, and industry, to name a few areas.

    “Understanding how data are generated can help us understand bias,” Rambachan says. “We can ask questions about producing a better outcome than what currently exists.”

    Useful tools for re-imagining social operations

    Economics doctoral student Jimmy Lin was skeptical about the claims Rambachan and Mullainathan made when the class began, but changed his mind as the course continued.

    “Ashesh and Sendhil started with two provocative claims: The future of behavioral science research will not exist without AI, and the future of AI research will not exist without behavioral science,” Lin says. “Over the course of the semester, they deepened my understanding of both fields and walked us through numerous examples of how economics informed AI research and vice versa.”

    Lin, who’d previously done research in computational biology, praised the instructors’ emphasis on the importance of a “producer mindset,” thinking about the next decade of research rather than the previous decade. “That’s especially important in an area as interdisciplinary and fast-moving as the intersection of AI and economics — there isn’t an old established literature, so you’re forced to ask new questions, invent new methods, and create new bridges,” he says.

    The speed of change to which Lin alludes is a draw for him, too. “We’re seeing black-box AI methods facilitate breakthroughs in math, biology, physics, and other scientific disciplines,” Lin says. “AI can change the way we approach intellectual discovery as researchers.”

    An interdisciplinary future for economics and social systems

    Studying traditional economic tools and enhancing their value with AI may yield game-changing shifts in how institutions and organizations teach and empower leaders to make choices.

    “We’re learning to track shifts, to adjust frameworks and better understand how to deploy tools in service of a common language,” Rambachan says. “We must continually interrogate the intersection of human judgment, algorithms, AI, machine learning, and LLMs.”

    Lin enthusiastically recommended the course regardless of students’ backgrounds. “Anyone broadly interested in algorithms in society, applications of AI across academic disciplines, or AI as a paradigm for scientific discovery should take this class,” he says. “Every lecture felt like a goldmine of perspectives on research, novel application areas, and inspiration on how to produce new, exciting ideas.”

    The course, Rambachan says, argues that better-built algorithms can improve decision-making across disciplines. “By building connections between economics, computer science, and machine learning, perhaps we can automate the best of human choices to improve outcomes while minimizing or eliminating the worst,” he says.

    Lin remains excited about the course’s as-yet unexplored possibilities. “It’s a class that makes you excited about the future of research and your own role in it,” he says.

  • A technique for more effective multipurpose robots

    Let’s say you want to train a robot so it understands how to use tools and can then quickly learn to make repairs around your house with a hammer, wrench, and screwdriver. To do that, you would need an enormous amount of data demonstrating tool use.

    Existing robotic datasets vary widely in modality — some include color images while others are composed of tactile imprints, for instance. Data could also be collected in different domains, like simulation or human demonstrations. And each dataset may capture a unique task and environment.

    It is difficult to efficiently incorporate data from so many sources in one machine-learning model, so many methods use just one type of data to train a robot. But robots trained this way, with a relatively small amount of task-specific data, are often unable to perform new tasks in unfamiliar environments.

    In an effort to train better multipurpose robots, MIT researchers developed a technique to combine multiple sources of data across domains, modalities, and tasks using a type of generative AI known as diffusion models.

    They train a separate diffusion model to learn a strategy, or policy, for completing one task using one specific dataset. Then they combine the policies learned by the diffusion models into a general policy that enables a robot to perform multiple tasks in various settings.

    In simulations and real-world experiments, this training approach enabled a robot to perform multiple tool-use tasks and adapt to new tasks it did not see during training. The method, known as Policy Composition (PoCo), led to a 20 percent improvement in task performance when compared to baseline techniques.

    “Addressing heterogeneity in robotic datasets is like a chicken-egg problem. If we want to use a lot of data to train general robot policies, then we first need deployable robots to get all this data. I think that leveraging all the heterogeneous data available, similar to what researchers have done with ChatGPT, is an important step for the robotics field,” says Lirui Wang, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on PoCo.

    Wang’s coauthors include Jialiang Zhao, a mechanical engineering graduate student; Yilun Du, an EECS graduate student; Edward Adelson, the John and Dorothy Wilson Professor of Vision Science in the Department of Brain and Cognitive Sciences and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); and senior author Russ Tedrake, the Toyota Professor of EECS, Aeronautics and Astronautics, and Mechanical Engineering, and a member of CSAIL. The research will be presented at the Robotics: Science and Systems Conference.

    Combining disparate datasets

    A robotic policy is a machine-learning model that takes inputs and uses them to perform an action. One way to think about a policy is as a strategy. In the case of a robotic arm, that strategy might be a trajectory, or a series of poses that move the arm so it picks up a hammer and uses it to pound a nail.

    Datasets used to learn robotic policies are typically small and focused on one particular task and environment, like packing items into boxes in a warehouse.

    “Every single robotic warehouse is generating terabytes of data, but it only belongs to that specific robot installation working on those packages.
    It is not ideal if you want to use all of these data to train a general machine,” Wang says.

    The MIT researchers developed a technique that can take a series of smaller datasets, like those gathered from many robotic warehouses, learn separate policies from each one, and combine the policies in a way that enables a robot to generalize to many tasks.

    They represent each policy using a type of generative AI model known as a diffusion model. Diffusion models, often used for image generation, learn to create new data samples that resemble samples in a training dataset by iteratively refining their output.

    But rather than teaching a diffusion model to generate images, the researchers teach it to generate a trajectory for a robot. They do this by adding noise to the trajectories in a training dataset. The diffusion model gradually removes the noise and refines its output into a trajectory.

    This technique, known as Diffusion Policy, was previously introduced by researchers at MIT, Columbia University, and the Toyota Research Institute. PoCo builds off this Diffusion Policy work. The team trains each diffusion model with a different type of dataset, such as one with human video demonstrations and another gleaned from teleoperation of a robotic arm.

    Then the researchers perform a weighted combination of the individual policies learned by all the diffusion models, iteratively refining the output so the combined policy satisfies the objectives of each individual policy.

    Greater than the sum of its parts

    “One of the benefits of this approach is that we can combine policies to get the best of both worlds. For instance, a policy trained on real-world data might be able to achieve more dexterity, while a policy trained on simulation might be able to achieve more generalization,” Wang says.
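
    To illustrate the idea, here is a minimal sketch, assuming PyTorch, of how a weighted combination of policies can steer each denoising step. The toy policies and the simplified update rule are illustrative stand-ins, not the actual PoCo implementation.

    ```python
    # Minimal sketch of composing diffusion policies via weighted noise predictions.
    # The per-dataset "policies" below are toy functions, not learned networks.
    import torch

    def make_policy(bias: float):
        """Toy per-dataset policy: predicts the noise to remove from a noisy
        trajectory (a real Diffusion Policy would be a trained network)."""
        def predict_noise(traj: torch.Tensor, t: int) -> torch.Tensor:
            return 0.1 * traj + bias
        return predict_noise

    policies = [make_policy(0.0), make_policy(0.05)]  # e.g. sim data, real data
    weights = [0.5, 0.5]                              # task-dependent weighting

    def composed_denoise(traj: torch.Tensor, steps: int = 50) -> torch.Tensor:
        """Iteratively refine a noisy trajectory using a weighted combination
        of every policy's noise prediction at each denoising step."""
        for t in reversed(range(steps)):
            noise = sum(w * p(traj, t) for w, p in zip(weights, policies))
            traj = traj - 0.1 * noise  # simplified denoising update
        return traj

    trajectory = torch.randn(16, 7)  # 16 arm poses x 7 joint values, pure noise
    print(composed_denoise(trajectory)[:2])
    ```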

    With policy composition, researchers are able to combine datasets from multiple sources so they can teach a robot to effectively use a wide range of tools, like a hammer, screwdriver, or this spatula. Image: Courtesy of the researchers

    Because the policies are trained separately, one could mix and match diffusion policies to achieve better results for a certain task. A user could also add data in a new modality or domain by training an additional Diffusion Policy with that dataset, rather than starting the entire process from scratch.

    The policy composition technique the researchers developed can be used to effectively teach a robot to use tools even when objects are placed around it to try to distract it from its task, as seen here. Image: Courtesy of the researchers

    The researchers tested PoCo in simulation and on real robotic arms that performed a variety of tool-use tasks, such as using a hammer to pound a nail and flipping an object with a spatula. PoCo led to a 20 percent improvement in task performance compared to baseline methods.

    “The striking thing was that when we finished tuning and visualized it, we can clearly see that the composed trajectory looks much better than either one of them individually,” Wang says.

    In the future, the researchers want to apply this technique to long-horizon tasks where a robot would pick up one tool, use it, then switch to another tool. They also want to incorporate larger robotics datasets to improve performance.

    “We will need all three kinds of data to succeed for robotics: internet data, simulation data, and real robot data. How to combine them effectively will be the million-dollar question. PoCo is a solid step on the right track,” says Jim Fan, senior research scientist at NVIDIA and leader of the AI Agents Initiative, who was not involved with this work.

    This research is funded, in part, by Amazon, the Singapore Defense Science and Technology Agency, the U.S. National Science Foundation, and the Toyota Research Institute.

  • Looking for a specific action in a video? This AI-based method can find it for you

    The internet is awash in instructional videos that can teach curious viewers everything from cooking the perfect pancake to performing a life-saving Heimlich maneuver.

    But pinpointing when and where a particular action happens in a long video can be tedious. To streamline the process, scientists are trying to teach computers to perform this task. Ideally, a user could just describe the action they’re looking for, and an AI model would skip to its location in the video.

    However, teaching machine-learning models to do this usually requires a great deal of expensive video data that have been painstakingly hand-labeled.

    A new, more efficient approach from researchers at MIT and the MIT-IBM Watson AI Lab trains a model to perform this task, known as spatio-temporal grounding, using only videos and their automatically generated transcripts.

    The researchers teach a model to understand an unlabeled video in two distinct ways: by looking at small details to figure out where objects are located (spatial information) and by looking at the bigger picture to understand when the action occurs (temporal information).

    Compared to other AI approaches, their method more accurately identifies actions in longer videos with multiple activities. Interestingly, they found that simultaneously training on spatial and temporal information makes a model better at identifying each individually.

    In addition to streamlining online learning and virtual training processes, this technique could also be useful in health care settings by rapidly finding key moments in videos of diagnostic procedures, for example.

    “We disentangle the challenge of trying to encode spatial and temporal information all at once and instead think about it like two experts working on their own, which turns out to be a more explicit way to encode the information. Our model, which combines these two separate branches, leads to the best performance,” says Brian Chen, lead author of a paper on this technique.

    Chen, a 2023 graduate of Columbia University who conducted this research while a visiting student at the MIT-IBM Watson AI Lab, is joined on the paper by James Glass, senior research scientist, member of the MIT-IBM Watson AI Lab, and head of the Spoken Language Systems Group in the Computer Science and Artificial Intelligence Laboratory (CSAIL); Hilde Kuehne, a member of the MIT-IBM Watson AI Lab who is also affiliated with Goethe University Frankfurt; and others at MIT, Goethe University, the MIT-IBM Watson AI Lab, and Quality Match GmbH. The research will be presented at the Conference on Computer Vision and Pattern Recognition.

    Global and local learning

    Researchers usually teach models to perform spatio-temporal grounding using videos in which humans have annotated the start and end times of particular tasks.

    Not only is generating these data expensive, but it can be difficult for humans to figure out exactly what to label. If the action is “cooking a pancake,” does that action start when the chef begins mixing the batter or when she pours it into the pan?

    “This time, the task may be about cooking, but next time, it might be about fixing a car. There are so many different domains for people to annotate. But if we can learn everything without labels, it is a more general solution,” Chen says.

    For their approach, the researchers use unlabeled instructional videos and accompanying text transcripts from a website like YouTube as training data. These don’t need any special preparation.

    They split the training process into two pieces. For one, they teach a machine-learning model to look at the entire video to understand what actions happen at certain times. This high-level information is called a global representation.

    For the second, they teach the model to focus on a specific region in parts of the video where action is happening. In a large kitchen, for instance, the model might only need to focus on the wooden spoon a chef is using to mix pancake batter, rather than the entire counter. This fine-grained information is called a local representation.
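
    To illustrate the two-branch idea, here is a minimal sketch, assuming PyTorch. The random tensors stand in for real video, region, and narration embeddings, and the linear heads and dimensions are hypothetical; the authors' architecture differs.

    ```python
    # Minimal sketch of scoring a narration against a video globally (when?)
    # and locally (where?). All features here are random stand-ins.
    import torch
    import torch.nn as nn

    dim = 64
    video_frames = torch.randn(10, 512)     # 10 frame features (toy)
    region_feats = torch.randn(10, 6, 512)  # 6 region features per frame (toy)
    text_feat = torch.randn(512)            # narration sentence feature (toy)

    global_head = nn.Linear(512, dim)       # temporal branch: when does it happen?
    local_head = nn.Linear(512, dim)        # spatial branch: where in the frame?
    text_head = nn.Linear(512, dim)

    t = text_head(text_feat)

    # Global branch: similarity of each frame to the narration -> temporal scores.
    temporal_scores = global_head(video_frames) @ t     # shape (10,)

    # Local branch: similarity of each region to the narration -> spatial scores.
    spatial_scores = local_head(region_feats) @ t       # shape (10, 6)

    when = temporal_scores.argmax()          # most relevant frame
    where = spatial_scores[when].argmax()    # most relevant region in that frame
    print(int(when), int(where))
    ```
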
    The researchers incorporate an additional component into their framework to mitigate misalignments that occur between narration and video. Perhaps the chef talks about cooking the pancake first and performs the action later.

    To develop a more realistic solution, the researchers focused on uncut videos that are several minutes long. In contrast, most AI techniques train on few-second clips that someone trimmed to show only one action.

    A new benchmark

    But when they came to evaluate their approach, the researchers couldn’t find an effective benchmark for testing a model on these longer, uncut videos — so they created one.

    To build their benchmark dataset, the researchers devised a new annotation technique that works well for identifying multistep actions. They had users mark the intersection of objects, like the point where a knife edge cuts a tomato, rather than drawing a box around important objects.

    “This is more clearly defined and speeds up the annotation process, which reduces the human labor and cost,” Chen says.

    Plus, having multiple people do point annotation on the same video can better capture actions that occur over time, like the flow of milk being poured. Not every annotator will mark the exact same point in the flow of liquid.

    When they used this benchmark to test their approach, the researchers found that it was more accurate at pinpointing actions than other AI techniques.

    Their method was also better at focusing on human-object interactions. For instance, if the action is “serving a pancake,” many other approaches might focus only on key objects, like a stack of pancakes sitting on a counter. Instead, their method focuses on the actual moment when the chef flips a pancake onto a plate.

    Next, the researchers plan to enhance their approach so models can automatically detect when text and narration are not aligned, and switch focus from one modality to the other. They also want to extend their framework to audio data, since there are usually strong correlations between actions and the sounds objects make.

    “AI research has made incredible progress towards creating models like ChatGPT that understand images. But our progress on understanding video is far behind. This work represents a significant step forward in that direction,” says Kate Saenko, a professor in the Department of Computer Science at Boston University who was not involved with this work.

    This research is funded, in part, by the MIT-IBM Watson AI Lab.

  • Turning up the heat on next-generation semiconductors

    The scorching surface of Venus, where temperatures can climb to 480 degrees Celsius (hot enough to melt lead), is an inhospitable place for humans and machines alike. One reason scientists have not yet been able to send a rover to the planet’s surface is that silicon-based electronics can’t operate in such extreme temperatures for an extended period of time.

    For high-temperature applications like Venus exploration, researchers have recently turned to gallium nitride, a unique material that can withstand temperatures of 500 degrees or more.

    The material is already used in some terrestrial electronics, like phone chargers and cell phone towers, but scientists don’t have a good grasp of how gallium nitride devices would behave at temperatures beyond 300 degrees, which is the operational limit of conventional silicon electronics.

    In a new paper published in Applied Physics Letters as part of a multiyear research effort, a team of scientists from MIT and elsewhere sought to answer key questions about the material’s properties and performance at extremely high temperatures. They studied the impact of temperature on the ohmic contacts in a gallium nitride device. Ohmic contacts are key components that connect a semiconductor device with the outside world.

    The researchers found that extreme temperatures didn’t cause significant degradation to the gallium nitride material or contacts. They were surprised to see that the contacts remained structurally intact even when held at 500 degrees Celsius for 48 hours.

    Understanding how contacts perform at extreme temperatures is an important step toward the group’s next goal of developing high-performance transistors that could operate on the surface of Venus. Such transistors could also be used on Earth in electronics for applications like extracting geothermal energy or monitoring the inside of jet engines.

    “Transistors are the heart of most modern electronics, but we didn’t want to jump straight to making a gallium nitride transistor because so much could go wrong. We first wanted to make sure the material and contacts could survive, and figure out how much they change as you increase the temperature. We’ll design our transistor from these basic material building blocks,” says John Niroula, an electrical engineering and computer science (EECS) graduate student and lead author of the paper.

    His co-authors include Qingyun Xie PhD ’24; Mengyang Yuan PhD ’22; EECS graduate students Patrick K. Darmawi-Iskandar and Pradyot Yadav; Gillian K. Micale, a graduate student in the Department of Materials Science and Engineering; senior author Tomás Palacios, the Clarence J. LeBel Professor of EECS, director of the Microsystems Technology Laboratories, and a member of the Research Laboratory of Electronics; as well as collaborators Nitul S. Rajput of the Technology Innovation Institute of the United Arab Emirates; Siddharth Rajan of Ohio State University; Yuji Zhao of Rice University; and Nadim Chowdhury of Bangladesh University of Engineering and Technology.

    Turning up the heat

    While gallium nitride has recently attracted much attention, the material is still decades behind silicon when it comes to scientists’ understanding of how its properties change under different conditions. One such property is resistance, which impedes the flow of electrical current through a material.

    A device’s overall resistance is inversely proportional to its size. But devices like semiconductors have contacts that connect them to other electronics. Contact resistance, which is caused by these electrical connections, remains fixed no matter the size of the device. Too much contact resistance can lead to higher power dissipation and slower operating frequencies for electronic circuits.

    “Especially when you go to smaller dimensions, a device’s performance often ends up being limited by contact resistance. People have a relatively good understanding of contact resistance at room temperature, but no one has really studied what happens when you go all the way up to 500 degrees,” Niroula says.

    For their study, the researchers used facilities at MIT.nano to build gallium nitride devices known as transfer length method structures, which are composed of a series of resistors. These devices enable them to measure the resistance of both the material and the contacts.
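
    The transfer length method separates the two resistances with a straight-line fit: the measured total resistance grows linearly with the spacing between contacts, so the slope gives the material (sheet) resistance and the intercept gives twice the contact resistance. A short sketch with made-up numbers, assuming NumPy:

    ```python
    # Sketch of a transfer-length-method (TLM) fit; all numbers are invented.
    # Total resistance between two contacts spaced a distance L apart:
    #     R_total(L) = 2 * R_contact + R_sheet * L / W
    # so a linear fit of R_total vs. L yields R_sheet/W (slope) and
    # 2 * R_contact (intercept).
    import numpy as np

    W = 100e-6                                        # contact width: 100 um
    spacings = np.array([5, 10, 20, 40]) * 1e-6       # gap lengths in meters
    resistances = np.array([12.1, 16.9, 27.2, 47.0])  # measured ohms (toy data)

    slope, intercept = np.polyfit(spacings, resistances, 1)
    r_contact = intercept / 2       # ohms per contact
    r_sheet = slope * W             # sheet resistance, ohms per square

    print(f"contact resistance ~ {r_contact:.2f} ohm")
    print(f"sheet resistance ~ {r_sheet:.1f} ohm/sq")
    ```
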
    They added ohmic contacts to these devices using the two most common methods. The first involves depositing metal onto gallium nitride and heating it to 825 degrees Celsius for about 30 seconds, a process called annealing.

    The second method involves removing chunks of gallium nitride and using a high-temperature technology to regrow highly doped gallium nitride in its place, a process led by Rajan and his team at Ohio State. The highly doped material contains extra electrons that can contribute to current conduction.

    “The regrowth method typically leads to lower contact resistance at room temperature, but we wanted to see if these methods still work well at high temperatures,” Niroula says.

    A comprehensive approach

    They tested devices in two ways. Their collaborators at Rice University, led by Zhao, conducted short-term tests by placing devices on a hot chuck that reached 500 degrees Celsius and taking immediate resistance measurements.

    At MIT, they conducted longer-term experiments by placing devices into a specialized furnace the group previously developed. They left devices inside for up to 72 hours to measure how resistance changes as a function of temperature and time.

    Microscopy experts at MIT.nano (Aubrey N. Penn) and the Technology Innovation Institute (Nitul S. Rajput) used state-of-the-art transmission electron microscopes to see how such high temperatures affect gallium nitride and the ohmic contacts at the atomic level.

    “We went in thinking the contacts or the gallium nitride material itself would degrade significantly, but we found the opposite. Contacts made with both methods seemed to be remarkably stable,” says Niroula.

    While it is difficult to measure resistance at such high temperatures, their results indicate that contact resistance seems to remain constant even at temperatures of 500 degrees, for around 48 hours. And just like at room temperature, the regrowth process led to better performance.

    The material did start to degrade after being in the furnace for 48 hours, but the researchers are already working to boost long-term performance. One strategy involves adding protective insulators to keep the material from being directly exposed to the high-temperature environment.

    Moving forward, the researchers plan to use what they learned in these experiments to develop high-temperature gallium nitride transistors.

    “In our group, we focus on innovative, device-level research to advance the frontiers of microelectronics, while adopting a systematic approach across the hierarchy, from the material level to the circuit level. Here, we have gone all the way down to the material level to understand things in depth.
    In other words, we have translated device-level advancements to circuit-level impact for high-temperature electronics, through design, modeling and complex fabrication. We are also immensely fortunate to have forged close partnerships with our longtime collaborators in this journey,” Xie says.

    This work was funded, in part, by the U.S. Air Force Office of Scientific Research, Lockheed Martin Corporation, the Semiconductor Research Corporation through the U.S. Defense Advanced Research Projects Agency, the U.S. Department of Energy, Intel Corporation, and the Bangladesh University of Engineering and Technology.

    Fabrication and microscopy were conducted at MIT.nano, the Semiconductor Epitaxy and Analysis Laboratory at Ohio State University, the Center for Advanced Materials Characterization at the University of Oregon, and the Technology Innovation Institute of the United Arab Emirates.