More stories

  • Fotini Christia named director of the Institute for Data, Systems, and Society

    Fotini Christia, the Ford International Professor of Social Sciences in the Department of Political Science, has been named the new director of the Institute for Data, Systems, and Society (IDSS), effective July 1.

    “Fotini is well-positioned to guide IDSS into the next chapter. With her tenure as the director of the Sociotechnical Systems Research Center and as an associate director of IDSS since 2020, she has actively forged connections between the social sciences, data science, and computation,” says Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing and the Henry Ellis Warren Professor of Electrical Engineering and Computer Science. “I eagerly anticipate the ways in which she will advance and champion IDSS in alignment with the spirit and mission of the Schwarzman College of Computing.”

    “Fotini’s profound expertise as a social scientist and her adept use of data science, computational tools, and novel methodologies to grasp the dynamics of societal evolution across diverse fields make her a natural fit to lead IDSS,” says Asu Ozdaglar, deputy dean of the MIT Schwarzman College of Computing and head of the Department of Electrical Engineering and Computer Science.

    Christia’s research has focused on issues of conflict and cooperation in the Muslim world, for which she has conducted fieldwork in Afghanistan, Bosnia, Iraq, the Palestinian Territories, and Yemen, among other places. More recently, her research has examined how to effectively integrate artificial intelligence tools into public policy.

    She was appointed the director of the Sociotechnical Systems Research Center (SSRC) and an associate director of IDSS in October 2020. SSRC, an interdisciplinary center housed within IDSS in the MIT Schwarzman College of Computing, focuses on the study of high-impact, complex societal challenges that shape our world.

    As part of IDSS, she is co-organizer of a cross-disciplinary research effort, the Initiative on Combatting Systemic Racism. Bringing together faculty and researchers from all five of MIT’s schools and the college, the initiative builds on extensive social science literature on systemic racism and uses big data to develop and harness computational tools that can help effect structural and normative change toward racial equity across housing, health care, policing, and social media. Christia is also chair of IDSS’s doctoral program in Social and Engineering Systems.

    Christia is the author of “Alliance Formation in Civil War” (Cambridge University Press, 2012), which was awarded the Luebbert Award for Best Book in Comparative Politics, the Lepgold Prize for Best Book in International Relations, and a Distinguished Book Award from the International Studies Association. She is co-editor, with Graeme Blair (University of California, Los Angeles) and Jeremy Weinstein (incoming dean at Harvard Kennedy School), of “Crime, Insecurity, and Community Policing: Experiments on Building Trust,” forthcoming in August 2024 from Cambridge University Press.

    Her research has also appeared in Science, Nature Human Behaviour, Review of Economic Studies, American Economic Journal: Applied Economics, NeurIPS, Communications Medicine, IEEE Transactions on Network Science and Engineering, American Political Science Review, and Annual Review of Political Science, among other journals. Her opinion pieces have been published in Foreign Affairs, The New York Times, The Washington Post, and The Boston Globe, among other outlets.

    A native of Greece, where she grew up in the port city of Salonika, Christia moved to the United States to attend college at Columbia University. She graduated magna cum laude in 2001 with a joint BA in economics–operations research and an MA in international affairs. She joined the MIT faculty in 2008 after receiving her PhD in public policy from Harvard University.

    Christia succeeds Noelle Selin, a professor in IDSS and the Department of Earth, Atmospheric, and Planetary Sciences. Selin has served as IDSS’s interim director since July 2023, leading the institute through the 2023-24 academic year, following Professor Martin Wainwright.

    “I am incredibly grateful to Noelle for serving as interim director this year. Her contributions in this role, as well as her time leading the Technology and Policy Program, have been invaluable. I’m delighted she will remain part of the IDSS community as a faculty member,” says Huttenlocher.

  • Arvind, longtime MIT professor and prolific computer scientist, dies at 77

    Arvind Mithal, the Charles W. and Jennifer C. Johnson Professor in Computer Science and Engineering at MIT, head of the faculty of computer science in the Department of Electrical Engineering and Computer Science (EECS), and a pillar of the MIT community, died on June 17. Arvind, who went by the mononym, was 77 years old.

    A prolific researcher who led the Computation Structures Group in the Computer Science and Artificial Intelligence Laboratory (CSAIL), Arvind served on the MIT faculty for nearly five decades.

    “He was beloved by countless people across the MIT community and around the world who were inspired by his intellectual brilliance and zest for life,” President Sally Kornbluth wrote in a letter to the MIT community today.

    As a scientist, Arvind was well known for important contributions to dataflow computing, which seeks to optimize the flow of data to take advantage of parallelism, achieving faster and more efficient computation. Over the last 25 years, his research interests broadened to include developing techniques and tools for formal modeling, high-level synthesis, and formal verification of complex digital devices like microprocessors and hardware accelerators, as well as memory models and cache coherence protocols for parallel computing architectures and programming languages.

    Those who knew Arvind describe him as a rare individual whose interests and expertise ranged from high-level, theoretical formal systems all the way down through languages and compilers to the gates and structures of silicon hardware. The applications of his work are far-reaching, from reducing the amount of energy and space required by data centers to streamlining the design of more efficient multicore computer chips.

    “Arvind was both a tremendous scholar in the fields of computer architecture and programming languages and a dedicated teacher, who brought systems-level thinking to our students. He was also an exceptional academic leader, often leading changes in curriculum and contributing to the Engineering Council in meaningful and impactful ways. I will greatly miss his sage advice and wisdom,” says Anantha Chandrakasan, chief innovation and strategy officer, dean of engineering, and the Vannevar Bush Professor of Electrical Engineering and Computer Science.

    “Arvind’s positive energy, together with his hearty laugh, brightened so many people’s lives. He was an enduring source of wise counsel for colleagues and for generations of students. With his deep commitment to academic excellence, he not only transformed research in computer architecture and parallel computing but also brought that commitment to his role as head of the computer science faculty in the EECS department. He left a lasting impact on all of us who had the privilege of working with him,” says Dan Huttenlocher, dean of the MIT Schwarzman College of Computing and the Henry Ellis Warren Professor of Electrical Engineering and Computer Science.

    Arvind developed an interest in parallel computing while he was a student at the Indian Institute of Technology in Kanpur, from which he received his bachelor’s degree in 1969. He earned a master’s degree and PhD in computer science in 1972 and 1973, respectively, from the University of Minnesota, where he studied operating systems and mathematical models of program behavior. He taught at the University of California at Irvine from 1974 to 1978 before joining the faculty at MIT.

    At MIT, Arvind’s group studied parallel computing and declarative programming languages, and he led the development of two parallel computing languages, Id and pH. He continued his work on these programming languages through the 1990s, publishing the book “Implicit Parallel Programming in pH” with co-author R.S. Nikhil in 2001, the culmination of more than 20 years of research.

    In addition to his research, Arvind was an important academic leader in EECS. He served as head of the computer science faculty in the department and played a critical role in the reorganization of EECS after the establishment of the MIT Schwarzman College of Computing.

    “Arvind was a force of nature, larger than life in every sense. His relentless positivity, unwavering optimism, boundless generosity, and exceptional strength as a researcher were truly inspiring and left a profound mark on all who had the privilege of knowing him. I feel enormous gratitude for the light he brought into our lives and his fundamental impact on our community,” says Daniela Rus, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science and the director of CSAIL.

    His work on dataflow and parallel computing led to the Monsoon project in the late 1980s and early 1990s. Arvind’s group, in collaboration with Motorola, built 16 dataflow computing machines and developed their associated software. One Monsoon dataflow machine is now in the Computer History Museum in Mountain View, California.

    Arvind’s focus shifted in the 1990s when, as he explained in a 2012 interview for the Institute of Electrical and Electronics Engineers (IEEE), funding for research into parallel computing began to dry up. “Microprocessors were getting so much faster that people thought they didn’t need it,” he recalled. Instead, he began applying techniques his team had learned and developed for parallel programming to the principled design of digital hardware.

    In addition to mentoring students and junior colleagues at MIT, Arvind advised universities and governments in many countries on research in parallel programming and semiconductor design. Based on his work on digital hardware design, he founded Sandburst, a fabless semiconductor company, in 2000. He served as the company’s president for two years before returning to the MIT faculty, while continuing as an advisor. Sandburst was later acquired by Broadcom.

    Arvind and his students also developed Bluespec, a programming language designed to automate the design of chips. Building off this work, he co-founded the startup Bluespec, Inc., in 2003, to develop practical tools that help engineers streamline device design. Over the past decade, he was dedicated to advancing undergraduate education at MIT by bringing modern design tools to courses 6.004 (Computation Structures) and 6.191 (Introduction to Deep Learning), and incorporating Minispec, a programming language that is closely related to Bluespec.

    Arvind was honored for these and other contributions to dataflow and multithread computing, and the development of tools for the high-level synthesis of hardware, with membership in the National Academy of Engineering in 2008 and the American Academy of Arts and Sciences in 2012. He was also named a distinguished alumnus of IIT Kanpur, his undergraduate alma mater.

    “Arvind was more than a pillar of the EECS community and a titan of computer science; he was a beloved colleague and a treasured friend. Those of us with the remarkable good fortune to work and collaborate with Arvind are devastated by his sudden loss. His kindness and joviality were unwavering; his mentorship was thoughtful and well-considered; his guidance was priceless. We will miss Arvind deeply,” says Asu Ozdaglar, deputy dean of the MIT Schwarzman College of Computing and head of EECS.

    Among numerous other awards, including membership in the Indian National Academy of Sciences and fellowship in the Association for Computing Machinery and IEEE, he received the Harry H. Goode Memorial Award from IEEE in 2012, which honors significant contributions to theory or practice in the information processing field.

    A humble scientist, Arvind was the first to point out that these achievements were only possible because of his outstanding and brilliant collaborators. Chief among those collaborators were the undergraduate and graduate students he felt fortunate to work with at MIT. He maintained excellent relationships with them both professionally and personally, and valued these relationships more than the work they did together, according to family members.

    In summing up the key to his scientific success, Arvind put it this way in the 2012 IEEE interview: “Really, one has to do what one believes in. I think the level at which most of us work, it is not sustainable if you don’t enjoy it on a day-to-day basis. You can’t work on it just because of the results. You have to work on it because you say, ‘I have to know the answer to this.’”

    He is survived by his wife, Gita Singh Mithal, their two sons Divakar ’01 and Prabhakar ’04, their wives Leena and Nisha, and two grandchildren, Maya and Vikram.

  • MIT-Takeda Program wraps up with 16 publications, a patent, and nearly two dozen projects completed

    When the Takeda Pharmaceutical Co. and the MIT School of Engineering launched their collaboration focused on artificial intelligence in health care and drug development in February 2020, society was on the cusp of a globe-altering pandemic and AI was far from the buzzword it is today.

    As the program concludes, the world looks very different. AI has become a transformative technology across industries, including health care and pharmaceuticals, while the pandemic has altered the way many businesses approach health care and changed how they develop and sell medicines.

    For both MIT and Takeda, the program has been a game-changer. When it launched, the collaborators hoped the program would help solve tangible, real-world problems. By its end, the program has yielded a catalog of new research papers, discoveries, and lessons learned, including a patent for a system that could improve the manufacturing of small-molecule medicines. Ultimately, the program allowed both entities to create a foundation for a world where AI and machine learning play a pivotal role in medicine, leveraging Takeda’s expertise in biopharmaceuticals and the MIT researchers’ deep understanding of AI and machine learning.

    “The MIT-Takeda Program has been tremendously impactful and is a shining example of what can be accomplished when experts in industry and academia work together to develop solutions,” says Anantha Chandrakasan, MIT’s chief innovation and strategy officer, dean of the School of Engineering, and the Vannevar Bush Professor of Electrical Engineering and Computer Science. “In addition to resulting in research that has advanced how we use AI and machine learning in health care, the program has opened up new opportunities for MIT faculty and students through fellowships, funding, and networking.”

    What made the program unique was its focus on several concrete challenges spanning drug development that Takeda needed help addressing. MIT faculty had the opportunity to select the projects based on their area of expertise and general interest, allowing them to explore new areas within health care and drug development.

    “It was focused on Takeda’s toughest business problems,” says Anne Heatherington, Takeda’s research and development chief data and technology officer and head of its Data Sciences Institute. “They were problems that colleagues were really struggling with on the ground,” adds Simon Davies, the executive director of the MIT-Takeda Program and Takeda’s global head of statistical and quantitative sciences. Takeda saw an opportunity to collaborate with MIT’s world-class researchers, who were working only a few blocks away: the global pharmaceutical company, headquartered in Japan, has its business units and R&D center just down the street from the Institute.

    As part of the program, MIT faculty were able to select the issues they were interested in working on from a group of potential Takeda projects. Then, collaborative teams of MIT researchers and Takeda employees approached research questions in two rounds. Over the course of the program, collaborators worked on 22 projects focused on topics including drug discovery and research, clinical drug development, and pharmaceutical manufacturing. Over 80 MIT students and faculty joined more than 125 Takeda researchers and staff on teams addressing these research questions.

    The projects centered around not only hard problems, but also the potential for solutions to scale within Takeda or within the biopharmaceutical industry more broadly.

    Some of the program’s findings have already resulted in wider studies. One group’s results, for instance, showed that using artificial intelligence to analyze speech may allow for earlier detection of frontotemporal dementia, while making that diagnosis more quickly and inexpensively. Similar algorithmic analyses of speech in patients diagnosed with ALS may also help clinicians understand the progression of that disease. Takeda is continuing to test both AI applications.

    Other discoveries and AI models that resulted from the program’s research have already had an impact. Using a physical model together with AI learning algorithms can help detect particle size, mix, and consistency for powdered, small-molecule medicines, for instance, speeding up production timelines. Based on their research under the program, collaborators have filed for a patent on that technology. For injectable medicines like vaccines, AI-enabled inspections can also reduce process time and false rejection rates. Replacing human visual inspections with AI processes has already shown measurable impact for the pharmaceutical company.

    Heatherington adds, “Our lessons learned are really setting the stage for what we’re doing next, really embedding AI and gen-AI [generative AI] into everything that we do moving forward.”

    Over the course of the program, more than 150 Takeda researchers and staff also participated in educational programming organized by the Abdul Latif Jameel Clinic for Machine Learning in Health. In addition to providing research opportunities, the program funded 10 students through SuperUROP, the Advanced Undergraduate Research Opportunities Program, as well as two cohorts from the DHIVE health-care innovation program, part of the MIT Sandbox Innovation Fund Program.

    Though the formal program has ended, certain aspects of the collaboration will continue, such as the MIT-Takeda Fellows program, which supports graduate students as they pursue groundbreaking research related to health and AI. During its run, the program supported 44 MIT-Takeda Fellows and will continue to support MIT students through an endowment fund. Organic collaboration between MIT and Takeda researchers will also carry forward, and the program’s collaborators are working to create a model for similar academic and industry partnerships to widen the impact of this first-of-its-kind collaboration.

  • Researchers use large language models to help robots navigate

    Someday, you may want your home robot to carry a load of dirty clothes downstairs and deposit them in the washing machine in the far-left corner of the basement. The robot will need to combine your instructions with its visual observations to determine the steps it should take to complete this task.

    For an AI agent, this is easier said than done. Current approaches often utilize multiple hand-crafted machine-learning models to tackle different parts of the task, which require a great deal of human effort and expertise to build. These methods, which use visual representations to directly make navigation decisions, demand massive amounts of visual data for training, and such data are often hard to come by.

    To overcome these challenges, researchers from MIT and the MIT-IBM Watson AI Lab devised a navigation method that converts visual representations into pieces of language, which are then fed into one large language model that achieves all parts of the multistep navigation task.

    Rather than encoding visual features from images of a robot’s surroundings as visual representations, which is computationally intensive, their method creates text captions that describe the robot’s point of view. A large language model uses the captions to predict the actions a robot should take to fulfill a user’s language-based instructions.

    Because their method uses purely language-based representations, the researchers can use a large language model to efficiently generate a huge amount of synthetic training data. While this approach does not outperform techniques that use visual features, it performs well in situations that lack enough visual data for training. The researchers found that combining their language-based inputs with visual signals leads to better navigation performance.

    “By purely using language as the perceptual representation, ours is a more straightforward approach. Since all the inputs can be encoded as language, we can generate a human-understandable trajectory,” says Bowen Pan, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on this approach.

    Pan’s co-authors include his advisor, Aude Oliva, director of strategic industry engagement at the MIT Schwarzman College of Computing, MIT director of the MIT-IBM Watson AI Lab, and a senior research scientist in the Computer Science and Artificial Intelligence Laboratory (CSAIL); Philip Isola, an associate professor of EECS and a member of CSAIL; senior author Yoon Kim, an assistant professor of EECS and a member of CSAIL; and others at the MIT-IBM Watson AI Lab and Dartmouth College. The research will be presented at the Conference of the North American Chapter of the Association for Computational Linguistics.

    Solving a vision problem with language

    Since large language models are the most powerful machine-learning models available, the researchers sought to incorporate them into the complex task known as vision-and-language navigation, Pan says. But such models take text-based inputs and can’t process visual data from a robot’s camera, so the team needed to find a way to use language instead.

    Their technique uses a simple captioning model to obtain text descriptions of a robot’s visual observations. These captions are combined with language-based instructions and fed into a large language model, which decides what navigation step the robot should take next. The large language model then outputs a caption of the scene the robot should see after completing that step, which is used to update the trajectory history so the robot can keep track of where it has been. The model repeats these processes to generate a trajectory that guides the robot to its goal, one step at a time (a minimal sketch of this loop appears at the end of this story).

    To streamline the process, the researchers designed templates so observation information is presented to the model in a standard form — as a series of choices the robot can make based on its surroundings. For instance, a caption might say “to your 30-degree left is a door with a potted plant beside it, to your back is a small office with a desk and a computer,” etc. The model chooses whether the robot should move toward the door or the office.

    “One of the biggest challenges was figuring out how to encode this kind of information into language in a proper way to make the agent understand what the task is and how they should respond,” Pan says.

    Advantages of language

    When they tested this approach, while it could not outperform vision-based techniques, they found that it offered several advantages. First, because text requires fewer computational resources to synthesize than complex image data, their method can be used to rapidly generate synthetic training data. In one test, they generated 10,000 synthetic trajectories based on 10 real-world, visual trajectories.

    The technique can also bridge the gap that can prevent an agent trained in a simulated environment from performing well in the real world. This gap often occurs because computer-generated images can appear quite different from real-world scenes due to elements like lighting or color. But language that describes a synthetic versus a real image would be much harder to tell apart, Pan says.

    Also, the representations their model uses are easier for a human to understand because they are written in natural language. “If the agent fails to reach its goal, we can more easily determine where it failed and why it failed. Maybe the history information is not clear enough or the observation ignores some important details,” Pan says.

    In addition, their method could be applied more easily to varied tasks and environments because it uses only one type of input. As long as data can be encoded as language, they can use the same model without making any modifications. One disadvantage is that their method naturally loses some information that would be captured by vision-based models, such as depth information.

    However, the researchers were surprised to see that combining language-based representations with vision-based methods improves an agent’s ability to navigate. “Maybe this means that language can capture some higher-level information that cannot be captured with pure vision features,” he says.

    This is one area the researchers want to continue exploring. They also want to develop a navigation-oriented captioner that could boost the method’s performance. In addition, they want to probe the ability of large language models to exhibit spatial awareness and see how this could aid language-based navigation.

    This research is funded, in part, by the MIT-IBM Watson AI Lab.
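    To make that loop concrete, here is a minimal Python sketch of a caption-then-prompt navigation agent. The captioner and llm callables, the prompt template, and the action options are all hypothetical stand-ins; the story does not specify the team's actual prompts or model interfaces.

        # Hedged sketch: caption the robot's view, prompt an LLM with the
        # instruction, trajectory history, and a set of choices, then record
        # the chosen action. Not the authors' code.

        def build_prompt(instruction, history, caption, choices):
            """Render the observation in a standard template, as a series of choices."""
            lines = [f"Instruction: {instruction}"]
            for i, (past_view, action) in enumerate(history):
                lines.append(f"Step {i}: saw '{past_view}', chose '{action}'")
            lines.append(f"Current view: {caption}")
            lines.append("Options: " + "; ".join(choices))
            lines.append("Answer with the one option that best advances the instruction.")
            return "\n".join(lines)

        def navigate(images, captioner, llm, instruction, max_steps=20):
            """One step at a time: caption the view, ask the LLM, update the history."""
            history = []
            for image in images:
                caption = captioner(image)  # e.g., "to your 30-degree left is a door ..."
                choices = ["move toward the door", "move toward the office", "stop"]
                action = llm(build_prompt(instruction, history, caption, choices))
                history.append((caption, action))  # the trajectory history
                if action == "stop" or len(history) >= max_steps:
                    break
            return history

    Because every input and output in this scaffold is plain text, the same machinery could in principle generate synthetic training trajectories with the language model itself, which is the efficiency the researchers highlight.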

  • A data-driven approach to making better choices

    Imagine a world in which some important decision — a judge’s sentencing recommendation, a child’s treatment protocol, which person or business should receive a loan — was made more reliable because a well-designed algorithm helped a key decision-maker arrive at a better choice. A new MIT economics course investigates these possibilities.

    Class 14.163 (Algorithms and Behavioral Science) is a new cross-disciplinary course focused on behavioral economics, which studies the cognitive capacities and limitations of human beings. The course was co-taught this past spring by assistant professor of economics Ashesh Rambachan and visiting lecturer Sendhil Mullainathan.

    Rambachan studies the economic applications of machine learning, focusing on algorithmic tools that drive decision-making in the criminal justice system and consumer lending markets. He also develops methods for determining causation using cross-sectional and dynamic data. Mullainathan will soon join the MIT departments of Electrical Engineering and Computer Science and Economics as a professor. His research uses machine learning to understand complex problems in human behavior, social policy, and medicine. Mullainathan co-founded the Abdul Latif Jameel Poverty Action Lab (J-PAL) in 2003.

    The new course’s goals are both scientific (to understand people) and policy-driven (to improve society by improving decisions). Rambachan believes that machine-learning algorithms provide new tools for both the scientific and applied goals of behavioral economics. “The course investigates the deployment of computer science, artificial intelligence (AI), economics, and machine learning in service of improved outcomes and reduced instances of bias in decision-making,” Rambachan says.

    There are opportunities, Rambachan believes, for constantly evolving digital tools like AI, machine learning, and large language models (LLMs) to help reshape everything from discriminatory practices in criminal sentencing to health-care outcomes among underserved populations.

    Students learn how to use machine-learning tools with three main objectives: to understand what they do and how they do it, to formalize behavioral economics insights so they compose well within machine-learning tools, and to understand areas and topics where the integration of behavioral economics and algorithmic tools might be most fruitful.

    Students also produce ideas, develop associated research, and see the bigger picture. They’re led to understand where an insight fits and see where the broader research agenda is leading. Participants learn to think critically about what supervised LLMs can (and cannot) do, to understand how to integrate those capacities with the models and insights of behavioral economics, and to recognize the most fruitful areas for the application of what investigations uncover.

    The dangers of subjectivity and bias

    According to Rambachan, behavioral economics acknowledges that biases and mistakes exist throughout our choices, even absent algorithms. “The data used by our algorithms exist outside computer science and machine learning, and instead are often produced by people,” he continues. “Understanding behavioral economics is therefore essential to understanding the effects of algorithms and how to better build them.”

    Rambachan sought to make the course accessible regardless of attendees’ academic backgrounds, and the class included advanced degree students from a variety of disciplines. By offering students a cross-disciplinary, data-driven approach to investigating and discovering ways in which algorithms might improve problem-solving and decision-making, Rambachan hopes to build a foundation on which to redesign existing systems of jurisprudence, health care, consumer lending, and industry, to name a few areas.

    “Understanding how data are generated can help us understand bias,” Rambachan says. “We can ask questions about producing a better outcome than what currently exists.”

    Useful tools for re-imagining social operations

    Economics doctoral student Jimmy Lin was skeptical about the claims Rambachan and Mullainathan made when the class began, but changed his mind as the course continued. “Ashesh and Sendhil started with two provocative claims: The future of behavioral science research will not exist without AI, and the future of AI research will not exist without behavioral science,” Lin says. “Over the course of the semester, they deepened my understanding of both fields and walked us through numerous examples of how economics informed AI research and vice versa.”

    Lin, who’d previously done research in computational biology, praised the instructors’ emphasis on the importance of a “producer mindset,” thinking about the next decade of research rather than the previous decade. “That’s especially important in an area as interdisciplinary and fast-moving as the intersection of AI and economics — there isn’t an old established literature, so you’re forced to ask new questions, invent new methods, and create new bridges,” he says.

    The speed of change to which Lin alludes is a draw for him, too. “We’re seeing black-box AI methods facilitate breakthroughs in math, biology, physics, and other scientific disciplines,” Lin says. “AI can change the way we approach intellectual discovery as researchers.”

    An interdisciplinary future for economics and social systems

    Studying traditional economic tools and enhancing their value with AI may yield game-changing shifts in how institutions and organizations teach and empower leaders to make choices. “We’re learning to track shifts, to adjust frameworks and better understand how to deploy tools in service of a common language,” Rambachan says. “We must continually interrogate the intersection of human judgment, algorithms, AI, machine learning, and LLMs.”

    Lin enthusiastically recommended the course regardless of students’ backgrounds. “Anyone broadly interested in algorithms in society, applications of AI across academic disciplines, or AI as a paradigm for scientific discovery should take this class,” he says. “Every lecture felt like a goldmine of perspectives on research, novel application areas, and inspiration on how to produce new, exciting ideas.”

    The course, Rambachan says, argues that better-built algorithms can improve decision-making across disciplines. “By building connections between economics, computer science, and machine learning, perhaps we can automate the best of human choices to improve outcomes while minimizing or eliminating the worst,” he says.

    Lin remains excited about the course’s as-yet unexplored possibilities. “It’s a class that makes you excited about the future of research and your own role in it,” he says.

  • A technique for more effective multipurpose robots

    Let’s say you want to train a robot so it understands how to use tools and can then quickly learn to make repairs around your house with a hammer, wrench, and screwdriver. To do that, you would need an enormous amount of data demonstrating tool use.

    Existing robotic datasets vary widely in modality — some include color images while others are composed of tactile imprints, for instance. Data could also be collected in different domains, like simulation or human demos. And each dataset may capture a unique task and environment.

    It is difficult to efficiently incorporate data from so many sources in one machine-learning model, so many methods use just one type of data to train a robot. But robots trained this way, with a relatively small amount of task-specific data, are often unable to perform new tasks in unfamiliar environments.

    In an effort to train better multipurpose robots, MIT researchers developed a technique to combine multiple sources of data across domains, modalities, and tasks using a type of generative AI known as diffusion models. They train a separate diffusion model to learn a strategy, or policy, for completing one task using one specific dataset. Then they combine the policies learned by the diffusion models into a general policy that enables a robot to perform multiple tasks in various settings.

    In simulations and real-world experiments, this training approach enabled a robot to perform multiple tool-use tasks and adapt to new tasks it did not see during training. The method, known as Policy Composition (PoCo), led to a 20 percent improvement in task performance when compared to baseline techniques.

    “Addressing heterogeneity in robotic datasets is like a chicken-egg problem. If we want to use a lot of data to train general robot policies, then we first need deployable robots to get all this data. I think that leveraging all the heterogeneous data available, similar to what researchers have done with ChatGPT, is an important step for the robotics field,” says Lirui Wang, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on PoCo.

    Wang’s co-authors include Jialiang Zhao, a mechanical engineering graduate student; Yilun Du, an EECS graduate student; Edward Adelson, the John and Dorothy Wilson Professor of Vision Science in the Department of Brain and Cognitive Sciences and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); and senior author Russ Tedrake, the Toyota Professor of EECS, Aeronautics and Astronautics, and Mechanical Engineering, and a member of CSAIL. The research will be presented at the Robotics: Science and Systems Conference.

    Combining disparate datasets

    A robotic policy is a machine-learning model that takes inputs and uses them to perform an action. One way to think about a policy is as a strategy. In the case of a robotic arm, that strategy might be a trajectory, or a series of poses that move the arm so it picks up a hammer and uses it to pound a nail.

    Datasets used to learn robotic policies are typically small and focused on one particular task and environment, like packing items into boxes in a warehouse. “Every single robotic warehouse is generating terabytes of data, but it only belongs to that specific robot installation working on those packages. It is not ideal if you want to use all of these data to train a general machine,” Wang says.

    The MIT researchers developed a technique that can take a series of smaller datasets, like those gathered from many robotic warehouses, learn separate policies from each one, and combine the policies in a way that enables a robot to generalize to many tasks.

    They represent each policy using a type of generative AI model known as a diffusion model. Diffusion models, often used for image generation, learn to create new data samples that resemble samples in a training dataset by iteratively refining their output. But rather than teaching a diffusion model to generate images, the researchers teach it to generate a trajectory for a robot. They do this by adding noise to the trajectories in a training dataset. The diffusion model gradually removes the noise and refines its output into a trajectory.

    This technique, known as Diffusion Policy, was previously introduced by researchers at MIT, Columbia University, and the Toyota Research Institute. PoCo builds off this Diffusion Policy work. The team trains each diffusion model with a different type of dataset, such as one with human video demonstrations and another gleaned from teleoperation of a robotic arm.

    Then the researchers perform a weighted combination of the individual policies learned by all the diffusion models, iteratively refining the output so the combined policy satisfies the objectives of each individual policy (illustrated in the sketch below).

    Greater than the sum of its parts

    “One of the benefits of this approach is that we can combine policies to get the best of both worlds. For instance, a policy trained on real-world data might be able to achieve more dexterity, while a policy trained on simulation might be able to achieve more generalization,” Wang says.
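    As a rough illustration of that weighted combination, the sketch below composes two toy "policies" by mixing their predicted denoising directions at every refinement step. The scheduler, score functions, and weights are invented for illustration; PoCo's actual formulation operates on learned diffusion models over robot trajectories.

        # Hedged numpy sketch of policy composition: start from noise and
        # repeatedly subtract a weighted mix of each policy's predicted
        # "noise" so the result balances both objectives. Illustrative only.

        import numpy as np

        def compose_policies(score_fns, weights, traj_shape, steps=50, seed=0):
            """Iteratively refine a trajectory using a weighted mix of policies."""
            rng = np.random.default_rng(seed)
            traj = rng.standard_normal(traj_shape)  # begin with pure noise
            for _ in range(steps):
                # Weighted combination of each policy's denoising direction.
                eps = sum(w * f(traj) for w, f in zip(weights, score_fns))
                traj = traj - (1.0 / steps) * eps   # crude refinement step
            return traj

        # Two toy policies that pull trajectories toward different targets:
        policy_a = lambda x: x - 0.0   # prefers trajectories near 0
        policy_b = lambda x: x - 1.0   # prefers trajectories near 1
        traj = compose_policies([policy_a, policy_b], [0.5, 0.5], (10, 2))
        # The composed trajectory settles toward 0.5, partially satisfying both.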

    With policy composition, researchers are able to combine datasets from multiple sources so they can teach a robot to effectively use a wide range of tools, like a hammer, screwdriver, or this spatula. (Image: Courtesy of the researchers)

    Because the policies are trained separately, one could mix and match diffusion policies to achieve better results for a certain task. A user could also add data in a new modality or domain by training an additional Diffusion Policy with that dataset, rather than starting the entire process from scratch.

    The policy composition technique the researchers developed can be used to effectively teach a robot to use tools even when objects are placed around it to try and distract it from its task, as seen here. (Image: Courtesy of the researchers)

    The researchers tested PoCo in simulation and on real robotic arms that performed a variety of tool-use tasks, such as using a hammer to pound a nail and flipping an object with a spatula. PoCo led to a 20 percent improvement in task performance compared to baseline methods.

    “The striking thing was that when we finished tuning and visualized it, we can clearly see that the composed trajectory looks much better than either one of them individually,” Wang says.

    In the future, the researchers want to apply this technique to long-horizon tasks where a robot would pick up one tool, use it, then switch to another tool. They also want to incorporate larger robotics datasets to improve performance.

    “We will need all three kinds of data to succeed for robotics: internet data, simulation data, and real robot data. How to combine them effectively will be the million-dollar question. PoCo is a solid step on the right track,” says Jim Fan, senior research scientist at NVIDIA and leader of the AI Agents Initiative, who was not involved with this work.

    This research is funded, in part, by Amazon, the Singapore Defense Science and Technology Agency, the U.S. National Science Foundation, and the Toyota Research Institute.

  • Looking for a specific action in a video? This AI-based method can find it for you

    The internet is awash in instructional videos that can teach curious viewers everything from cooking the perfect pancake to performing a life-saving Heimlich maneuver. But pinpointing when and where a particular action happens in a long video can be tedious. To streamline the process, scientists are trying to teach computers to perform this task. Ideally, a user could just describe the action they’re looking for, and an AI model would skip to its location in the video.

    However, teaching machine-learning models to do this usually requires a great deal of expensive video data that have been painstakingly hand-labeled. A new, more efficient approach from researchers at MIT and the MIT-IBM Watson AI Lab trains a model to perform this task, known as spatio-temporal grounding, using only videos and their automatically generated transcripts.

    The researchers teach a model to understand an unlabeled video in two distinct ways: by looking at small details to figure out where objects are located (spatial information) and by looking at the bigger picture to understand when the action occurs (temporal information).

    Compared to other AI approaches, their method more accurately identifies actions in longer videos with multiple activities. Interestingly, they found that simultaneously training on spatial and temporal information makes a model better at identifying each individually.

    In addition to streamlining online learning and virtual training processes, this technique could also be useful in health care settings by rapidly finding key moments in videos of diagnostic procedures, for example.

    “We disentangle the challenge of trying to encode spatial and temporal information all at once and instead think about it like two experts working on their own, which turns out to be a more explicit way to encode the information. Our model, which combines these two separate branches, leads to the best performance,” says Brian Chen, lead author of a paper on this technique.

    Chen, a 2023 graduate of Columbia University who conducted this research while a visiting student at the MIT-IBM Watson AI Lab, is joined on the paper by James Glass, senior research scientist, member of the MIT-IBM Watson AI Lab, and head of the Spoken Language Systems Group in the Computer Science and Artificial Intelligence Laboratory (CSAIL); Hilde Kuehne, a member of the MIT-IBM Watson AI Lab who is also affiliated with Goethe University Frankfurt; and others at MIT, Goethe University, the MIT-IBM Watson AI Lab, and Quality Match GmbH. The research will be presented at the Conference on Computer Vision and Pattern Recognition.

    Global and local learning

    Researchers usually teach models to perform spatio-temporal grounding using videos in which humans have annotated the start and end times of particular tasks. Not only is generating these data expensive, but it can be difficult for humans to figure out exactly what to label. If the action is “cooking a pancake,” does that action start when the chef begins mixing the batter or when she pours it into the pan?

    “This time, the task may be about cooking, but next time, it might be about fixing a car. There are so many different domains for people to annotate. But if we can learn everything without labels, it is a more general solution,” Chen says.

    For their approach, the researchers use unlabeled instructional videos and accompanying text transcripts from a website like YouTube as training data. These don’t need any special preparation.

    They split the training process into two pieces. For one, they teach a machine-learning model to look at the entire video to understand what actions happen at certain times. This high-level information is called a global representation. For the second, they teach the model to focus on a specific region in parts of the video where action is happening. In a large kitchen, for instance, the model might only need to focus on the wooden spoon a chef is using to mix pancake batter, rather than the entire counter. This fine-grained information is called a local representation. (A sketch of this two-branch idea appears at the end of this story.)

    The researchers incorporate an additional component into their framework to mitigate misalignments that occur between narration and video. Perhaps the chef talks about cooking the pancake first and performs the action later.

    To develop a more realistic solution, the researchers focused on uncut videos that are several minutes long. In contrast, most AI techniques train using few-second clips that someone trimmed to show only one action.

    A new benchmark

    But when they came to evaluate their approach, the researchers couldn’t find an effective benchmark for testing a model on these longer, uncut videos — so they created one.

    To build their benchmark dataset, the researchers devised a new annotation technique that works well for identifying multistep actions. They had users mark the intersection of objects, like the point where a knife edge cuts a tomato, rather than drawing a box around important objects. “This is more clearly defined and speeds up the annotation process, which reduces the human labor and cost,” Chen says.

    Plus, having multiple people do point annotation on the same video can better capture actions that occur over time, like the flow of milk being poured. All annotators won’t mark the exact same point in the flow of liquid.

    When they used this benchmark to test their approach, the researchers found that it was more accurate at pinpointing actions than other AI techniques. Their method was also better at focusing on human-object interactions. For instance, if the action is “serving a pancake,” many other approaches might focus only on key objects, like a stack of pancakes sitting on a counter. Instead, their method focuses on the actual moment when the chef flips a pancake onto a plate.

    Next, the researchers plan to enhance their approach so models can automatically detect when text and narration are not aligned, and switch focus from one modality to the other. They also want to extend their framework to audio data, since there are usually strong correlations between actions and the sounds objects make.

    “AI research has made incredible progress towards creating models like ChatGPT that understand images. But our progress on understanding video is far behind. This work represents a significant step forward in that direction,” says Kate Saenko, a professor in the Department of Computer Science at Boston University who was not involved with this work.

    This research is funded, in part, by the MIT-IBM Watson AI Lab.
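    The sketch referenced above shows one hedged way to realize the two-branch idea in PyTorch: a "global" head scores when an action occurs across time steps, a "local" head scores where it occurs across regions, and both are trained to agree with the transcript text via a contrastive loss. The encoders, feature dimensions, and loss are simplified placeholders, not the paper's actual architecture.

        # Hedged PyTorch sketch of global (temporal) and local (spatial)
        # branches aligned to text. Placeholder linear encoders stand in for
        # real video, region, and text models.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class TwoBranchGrounder(nn.Module):
            def __init__(self, dim=256):
                super().__init__()
                self.temporal = nn.Linear(dim, dim)  # video-level (global) head
                self.spatial = nn.Linear(dim, dim)   # region-level (local) head
                self.text = nn.Linear(dim, dim)      # transcript phrase encoder

            def forward(self, clip_feats, region_feats, text_feats):
                # clip_feats: (T, dim), region_feats: (T, R, dim), text_feats: (dim,)
                q = F.normalize(self.text(text_feats), dim=-1)
                glob = F.normalize(self.temporal(clip_feats), dim=-1)
                loc = F.normalize(self.spatial(region_feats), dim=-1)
                when = glob @ q   # (T,) similarity of each time step to the phrase
                where = loc @ q   # (T, R) similarity of each region to the phrase
                return when, where

        def contrastive_loss(scores, positive_idx):
            """InfoNCE-style loss: the narrated time step (or region) should win."""
            return F.cross_entropy(scores.unsqueeze(0), torch.tensor([positive_idx]))

        model = TwoBranchGrounder()
        when, where = model(torch.randn(8, 256), torch.randn(8, 4, 256), torch.randn(256))
        loss = contrastive_loss(when, 3) + contrastive_loss(where[3], 1)

    Training the two heads jointly against the same text is one plausible reading of the finding that spatial and temporal training improve each other.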

  • Turning up the heat on next-generation semiconductors

    The scorching surface of Venus, where temperatures can climb to 480 degrees Celsius (hot enough to melt lead), is an inhospitable place for humans and machines alike. One reason scientists have not yet been able to send a rover to the planet’s surface is that silicon-based electronics can’t operate in such extreme temperatures for an extended period of time.

    For high-temperature applications like Venus exploration, researchers have recently turned to gallium nitride, a unique material that can withstand temperatures of 500 degrees or more. The material is already used in some terrestrial electronics, like phone chargers and cell phone towers, but scientists don’t have a good grasp of how gallium nitride devices would behave at temperatures beyond 300 degrees, the operational limit of conventional silicon electronics.

    In a new paper published in Applied Physics Letters, part of a multiyear research effort, a team of scientists from MIT and elsewhere sought to answer key questions about the material’s properties and performance at extremely high temperatures. They studied the impact of temperature on the ohmic contacts in a gallium nitride device. Ohmic contacts are key components that connect a semiconductor device with the outside world.

    The researchers found that extreme temperatures didn’t cause significant degradation to the gallium nitride material or contacts. They were surprised to see that the contacts remained structurally intact even when held at 500 degrees Celsius for 48 hours.

    Understanding how contacts perform at extreme temperatures is an important step toward the group’s next goal of developing high-performance transistors that could operate on the surface of Venus. Such transistors could also be used on Earth in electronics for applications like extracting geothermal energy or monitoring the inside of jet engines.

    “Transistors are the heart of most modern electronics, but we didn’t want to jump straight to making a gallium nitride transistor because so much could go wrong. We first wanted to make sure the material and contacts could survive, and figure out how much they change as you increase the temperature. We’ll design our transistor from these basic material building blocks,” says John Niroula, an electrical engineering and computer science (EECS) graduate student and lead author of the paper.

    His co-authors include Qingyun Xie PhD ’24; Mengyang Yuan PhD ’22; EECS graduate students Patrick K. Darmawi-Iskandar and Pradyot Yadav; Gillian K. Micale, a graduate student in the Department of Materials Science and Engineering; senior author Tomás Palacios, the Clarence J. LeBel Professor of EECS, director of the Microsystems Technology Laboratories, and a member of the Research Laboratory of Electronics; as well as collaborators Nitul S. Rajput of the Technology Innovation Institute of the United Arab Emirates; Siddharth Rajan of Ohio State University; Yuji Zhao of Rice University; and Nadim Chowdhury of Bangladesh University of Engineering and Technology.

    Turning up the heat

    While gallium nitride has recently attracted much attention, the material is still decades behind silicon when it comes to scientists’ understanding of how its properties change under different conditions. One such property is resistance, which impedes the flow of electrical current through a material.

    A device’s overall resistance is inversely proportional to its size. But devices like semiconductors have contacts that connect them to other electronics. Contact resistance, which is caused by these electrical connections, remains fixed no matter the size of the device. Too much contact resistance can lead to higher power dissipation and slower operating frequencies for electronic circuits.

    “Especially when you go to smaller dimensions, a device’s performance often ends up being limited by contact resistance. People have a relatively good understanding of contact resistance at room temperature, but no one has really studied what happens when you go all the way up to 500 degrees,” Niroula says.

    For their study, the researchers used facilities at MIT.nano to build gallium nitride devices known as transfer length method structures, which are composed of a series of resistors. These devices enable them to measure the resistance of both the material and the contacts (a typical data reduction for such structures is sketched at the end of this story).

    They added ohmic contacts to these devices using the two most common methods. The first involves depositing metal onto gallium nitride and heating it to 825 degrees Celsius for about 30 seconds, a process called annealing. The second method involves removing chunks of gallium nitride and using a high-temperature technology to regrow highly doped gallium nitride in its place, a process led by Rajan and his team at Ohio State. The highly doped material contains extra electrons that can contribute to current conduction.

    “The regrowth method typically leads to lower contact resistance at room temperature, but we wanted to see if these methods still work well at high temperatures,” Niroula says.

    A comprehensive approach

    They tested devices in two ways. Their collaborators at Rice University, led by Zhao, conducted short-term tests by placing devices on a hot chuck that reached 500 degrees Celsius and taking immediate resistance measurements. At MIT, they conducted longer-term experiments by placing devices into a specialized furnace the group previously developed. They left devices inside for up to 72 hours to measure how resistance changes as a function of temperature and time.

    Microscopy experts at MIT.nano (Aubrey N. Penn) and the Technology Innovation Institute (Nitul S. Rajput) used state-of-the-art transmission electron microscopes to see how such high temperatures affect gallium nitride and the ohmic contacts at the atomic level.

    “We went in thinking the contacts or the gallium nitride material itself would degrade significantly, but we found the opposite. Contacts made with both methods seemed to be remarkably stable,” says Niroula.

    While it is difficult to measure resistance at such high temperatures, their results indicate that contact resistance seems to remain constant even at temperatures of 500 degrees, for around 48 hours. And just like at room temperature, the regrowth process led to better performance.

    The material did start to degrade after being in the furnace for 48 hours, but the researchers are already working to boost long-term performance. One strategy involves adding protective insulators to keep the material from being directly exposed to the high-temperature environment.

    Moving forward, the researchers plan to use what they learned in these experiments to develop high-temperature gallium nitride transistors.

    “In our group, we focus on innovative, device-level research to advance the frontiers of microelectronics, while adopting a systematic approach across the hierarchy, from the material level to the circuit level. Here, we have gone all the way down to the material level to understand things in depth. In other words, we have translated device-level advancements to circuit-level impact for high-temperature electronics, through design, modeling, and complex fabrication. We are also immensely fortunate to have forged close partnerships with our longtime collaborators in this journey,” Xie says.

    This work was funded, in part, by the U.S. Air Force Office of Scientific Research, Lockheed Martin Corporation, the Semiconductor Research Corporation through the U.S. Defense Advanced Research Projects Agency, the U.S. Department of Energy, Intel Corporation, and the Bangladesh University of Engineering and Technology.

    Fabrication and microscopy were conducted at MIT.nano, the Semiconductor Epitaxy and Analysis Laboratory at Ohio State University, the Center for Advanced Materials Characterization at the University of Oregon, and the Technology Innovation Institute of the United Arab Emirates.
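    For the technically curious, transfer-length-method data are conventionally reduced by fitting total resistance versus contact-pad spacing to a line: the intercept gives twice the contact resistance and the slope gives the sheet resistance of the material. The sketch below shows that standard reduction on invented numbers; it does not reproduce the paper's measured values.

        # Hedged sketch of the standard TLM fit: R_total = 2*R_c + R_sheet*d/W.
        # The measurements below are invented for illustration.

        import numpy as np

        def tlm_fit(spacings_um, resistances_ohm, width_um):
            """Return (contact resistance in ohms, sheet resistance in ohms/square)."""
            slope, intercept = np.polyfit(spacings_um, resistances_ohm, 1)
            return intercept / 2.0, slope * width_um

        d = np.array([5.0, 10.0, 20.0, 40.0])    # pad spacings, micrometers
        r = np.array([12.1, 17.0, 27.2, 47.1])   # measured total resistance, ohms
        r_contact, r_sheet = tlm_fit(d, r, width_um=100.0)
        print(f"contact resistance ~ {r_contact:.2f} ohm; sheet resistance ~ {r_sheet:.0f} ohm/sq")

    Tracking how the fitted intercept changes with furnace temperature and time is one simple way such structures reveal whether contacts degrade, consistent with the constant contact resistance the team reports at 500 degrees.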