More stories

  • MIT researchers introduce generative AI for databases

    A new tool makes it easier for database users to perform complicated statistical analyses of tabular data without the need to know what is going on behind the scenes.

    GenSQL, a generative AI system for databases, could help users make predictions, detect anomalies, guess missing values, fix errors, or generate synthetic data with just a few keystrokes.

    For instance, if the system were used to analyze medical data from a patient who has always had high blood pressure, it could catch a blood pressure reading that is low for that particular patient but would otherwise be in the normal range.

    GenSQL automatically integrates a tabular dataset and a generative probabilistic AI model, which can account for uncertainty and adjust its decision-making based on new data.

    Moreover, GenSQL can be used to produce and analyze synthetic data that mimic the real data in a database. This could be especially useful in situations where sensitive data cannot be shared, such as patient health records, or when real data are sparse.

    This new tool is built on top of SQL, a programming language for database creation and manipulation that was introduced in the late 1970s and is used by millions of developers worldwide.

    “Historically, SQL taught the business world what a computer could do. They didn’t have to write custom programs, they just had to ask questions of a database in high-level language. We think that, when we move from just querying data to asking questions of models and data, we are going to need an analogous language that teaches people the coherent questions you can ask a computer that has a probabilistic model of the data,” says Vikash Mansinghka ’05, MEng ’09, PhD ’09, senior author of a paper introducing GenSQL and a principal research scientist and leader of the Probabilistic Computing Project in the MIT Department of Brain and Cognitive Sciences.

    When the researchers compared GenSQL to popular, AI-based approaches for data analysis, they found that it was not only faster but also produced more accurate results. Importantly, the probabilistic models used by GenSQL are explainable, so users can read and edit them.

    “Looking at the data and trying to find some meaningful patterns by just using some simple statistical rules might miss important interactions. You really want to capture the correlations and the dependencies of the variables, which can be quite complicated, in a model. With GenSQL, we want to enable a large set of users to query their data and their model without having to know all the details,” adds lead author Mathieu Huot, a research scientist in the Department of Brain and Cognitive Sciences and member of the Probabilistic Computing Project.

    They are joined on the paper by Matin Ghavami and Alexander Lew, MIT graduate students; Cameron Freer, a research scientist; Ulrich Schaechtle and Zane Shelby of Digital Garage; Martin Rinard, an MIT professor in the Department of Electrical Engineering and Computer Science and member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); and Feras Saad ’15, MEng ’16, PhD ’22, an assistant professor at Carnegie Mellon University. The research was recently presented at the ACM Conference on Programming Language Design and Implementation.

    Combining models and databases

    SQL, which stands for structured query language, is a programming language for storing and manipulating information in a database. In SQL, people can ask questions about data using keywords, such as by summing, filtering, or grouping database records.

    However, querying a model can provide deeper insights, since models can capture what data imply for an individual. For instance, a female developer who wonders if she is underpaid is likely more interested in what salary data mean for her individually than in trends from database records.

    The researchers noticed that SQL didn’t provide an effective way to incorporate probabilistic AI models, but at the same time, approaches that use probabilistic models to make inferences didn’t support complex database queries.

    They built GenSQL to fill this gap, enabling someone to query both a dataset and a probabilistic model using a straightforward yet powerful formal programming language.

    A GenSQL user uploads their data and probabilistic model, which the system automatically integrates. Then, they can run queries on data that also get input from the probabilistic model running behind the scenes. This not only enables more complex queries but can also provide more accurate answers.

    For instance, a query in GenSQL might be something like, “How likely is it that a developer from Seattle knows the programming language Rust?” Just looking at a correlation between columns in a database might miss subtle dependencies. Incorporating a probabilistic model can capture more complex interactions.

    Plus, the probabilistic models GenSQL utilizes are auditable, so people can see which data the model uses for decision-making. In addition, these models provide measures of calibrated uncertainty along with each answer.

    For instance, with this calibrated uncertainty, if one queries the model for predicted outcomes of different cancer treatments for a patient from a minority group that is underrepresented in the dataset, GenSQL would tell the user that it is uncertain, and how uncertain it is, rather than overconfidently advocating for the wrong treatment.

    Faster and more accurate results

    To evaluate GenSQL, the researchers compared their system to popular baseline methods that use neural networks. GenSQL was between 1.7 and 6.8 times faster than these approaches, executing most queries in a few milliseconds while providing more accurate results.

    They also applied GenSQL in two case studies: one in which the system identified mislabeled clinical trial data and the other in which it generated accurate synthetic data that captured complex relationships in genomics.

    Next, the researchers want to apply GenSQL more broadly to conduct large-scale modeling of human populations. With GenSQL, they can generate synthetic data to draw inferences about things like health and salary while controlling what information is used in the analysis.

    They also want to make GenSQL easier to use and more powerful by adding new optimizations and automation to the system. In the long run, the researchers want to enable users to make natural language queries in GenSQL. Their goal is to eventually develop a ChatGPT-like AI expert one could talk to about any database, which grounds its answers using GenSQL queries.

    This research is funded, in part, by the Defense Advanced Research Projects Agency (DARPA), Google, and the Siegel Family Foundation.
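    To make the idea of querying a table and a probabilistic model together more concrete, here is a minimal Python sketch, not GenSQL itself or its syntax: it answers a Seattle/Rust-style question by combining a tiny table with a simple Beta-Bernoulli model, so the answer comes with calibrated uncertainty rather than a raw frequency. The table contents, column names, function name, and prior are all invented for illustration.

    ```python
    # Illustrative sketch only -- NOT GenSQL's syntax or implementation.
    # It mimics the idea from the article: answer "How likely is it that a
    # developer from Seattle knows Rust?" by combining table rows with a simple
    # probabilistic model (a Beta-Bernoulli posterior) and reporting uncertainty.
    # All data below are made up for the example.

    from math import sqrt

    developers = [
        {"city": "Seattle", "knows_rust": True},
        {"city": "Seattle", "knows_rust": False},
        {"city": "Seattle", "knows_rust": True},
        {"city": "Boston",  "knows_rust": False},
        {"city": "Boston",  "knows_rust": True},
    ]

    def prob_knows_rust(rows, city, prior_a=1.0, prior_b=1.0):
        """Posterior mean and standard deviation of P(knows_rust | city),
        using a Beta(prior_a, prior_b) prior over the unknown probability."""
        matches = [r["knows_rust"] for r in rows if r["city"] == city]
        a = prior_a + sum(matches)                  # successes + prior pseudo-counts
        b = prior_b + len(matches) - sum(matches)   # failures + prior pseudo-counts
        mean = a / (a + b)
        std = sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
        return mean, std

    mean, std = prob_knows_rust(developers, "Seattle")
    print(f"P(knows Rust | Seattle) ~= {mean:.2f} +/- {std:.2f}")
    # With only three Seattle rows, the reported uncertainty stays wide --
    # the kind of calibrated answer the article says GenSQL surfaces.
    ```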

  • Fotini Christia named director of the Institute for Data, Systems, and Society

    Fotini Christia, the Ford International Professor of Social Sciences in the Department of Political Science, has been named the new director of the Institute for Data, Systems, and Society (IDSS), effective July 1.

    “Fotini is well-positioned to guide IDSS into the next chapter. With her tenure as the director of the Sociotechnical Systems Research Center and as an associate director of IDSS since 2020, she has actively forged connections between the social sciences, data science, and computation,” says Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing and the Henry Ellis Warren Professor of Electrical Engineering and Computer Science. “I eagerly anticipate the ways in which she will advance and champion IDSS in alignment with the spirit and mission of the Schwarzman College of Computing.”

    “Fotini’s profound expertise as a social scientist and her adept use of data science, computational tools, and novel methodologies to grasp the dynamics of societal evolution across diverse fields, makes her a natural fit to lead IDSS,” says Asu Ozdaglar, deputy dean of the MIT Schwarzman College of Computing and head of the Department of Electrical Engineering and Computer Science.

    Christia’s research has focused on issues of conflict and cooperation in the Muslim world, for which she has conducted fieldwork in Afghanistan, Bosnia, Iraq, the Palestinian Territories, and Yemen, among others. More recently, her research has been directed at examining how to effectively integrate artificial intelligence tools in public policy.

    She was appointed the director of the Sociotechnical Systems Research Center (SSRC) and an associate director of IDSS in October 2020. SSRC, an interdisciplinary center housed within IDSS in the MIT Schwarzman College of Computing, focuses on the study of high-impact, complex societal challenges that shape our world.

    As part of IDSS, she is co-organizer of a cross-disciplinary research effort, the Initiative on Combatting Systemic Racism. Bringing together faculty and researchers from all of MIT’s five schools and the college, the initiative builds on extensive social science literature on systemic racism and uses big data to develop and harness computational tools that can help effect structural and normative change toward racial equity across housing, health care, policing, and social media. Christia is also chair of IDSS’s doctoral program in Social and Engineering Systems.

    Christia is the author of “Alliance Formation in Civil War” (Cambridge University Press, 2012), which was awarded the Luebbert Award for Best Book in Comparative Politics, the Lepgold Prize for Best Book in International Relations, and a Distinguished Book Award from the International Studies Association. She is co-editor with Graeme Blair (University of California, Los Angeles) and Jeremy Weinstein (incoming dean at Harvard Kennedy School) of “Crime, Insecurity, and Community Policing: Experiments on Building Trust,” forthcoming in August 2024 with Cambridge University Press.

    Her research has also appeared in Science, Nature Human Behavior, Review of Economic Studies, American Economic Journal: Applied Economics, NeurIPS, Communications Medicine, IEEE Transactions on Network Science and Engineering, American Political Science Review, and Annual Review of Political Science, among other journals. Her opinion pieces have been published in Foreign Affairs, The New York Times, The Washington Post, and The Boston Globe, among other outlets.

    A native of Greece, where she grew up in the port city of Salonika, Christia moved to the United States to attend college at Columbia University. She graduated magna cum laude in 2001 with a joint BA in economics–operations research and an MA in international affairs. She joined the MIT faculty in 2008 after receiving her PhD in public policy from Harvard University.

    Christia succeeds Noelle Selin, a professor in IDSS and the Department of Earth, Atmospheric, and Planetary Sciences. Selin has led IDSS as interim director for the 2023-24 academic year since July 2023, following Professor Martin Wainwright.

    “I am incredibly grateful to Noelle for serving as interim director this year. Her contributions in this role, as well as her time leading the Technology and Policy Program, have been invaluable. I’m delighted she will remain part of the IDSS community as a faculty member,” says Huttenlocher.

  • 3 Questions: Catherine D’Ignazio on data science and a quest for justice

    As long as we apply data science to society, we should remember that our data may have flaws, biases, and absences. That is one motif of MIT Associate Professor Catherine D’Ignazio’s new book, “Counting Feminicide,” published this spring by the MIT Press. In it, D’Ignazio explores the world of Latin American activists who began using media accounts and other sources to tabulate how many women had been killed in their countries as the result of gender-based violence — and found that their own numbers differed greatly from official statistics.

    Some of these activists have become prominent public figures, and others less so, but all of them have produced work providing lessons about collecting data, sharing it, and applying data to projects supporting human liberty and dignity. Now, their stories are reaching a new audience thanks to D’Ignazio, an associate professor of urban science and planning in MIT’s Department of Urban Studies and Planning, and director of MIT’s Data and Feminism Lab. She is also hosting an ongoing, transnational book club about the work. MIT News spoke with D’Ignazio about the new book and how activists are expanding the traditional practice of data science.

    Q: What is your book about?

    A: Three things. It’s a book that documents the rise of data activism as a really interesting form of citizen data science. Increasingly, because of the availability of data and tools, gathering and doing your own data analysis is a growing form of social activism. We characterize it in the book as a citizenship practice. People are using data to make knowledge claims and put political demands out there for their institutions to respond to.

    Another takeaway is that from observing data activists, there are ways they approach data science that are very different from how it’s usually taught. Among other things, when undertaking work about inequality and violence, there’s a connection with the rows of data. It’s about memorializing people who have been lost. Mainstream data scientists can learn a lot from this.

    The third thing is about feminicide itself and missing information. The main reason people start collecting data about feminicide is because their institutions aren’t doing it. This includes our institutions here in the United States. We’re talking about violence against women that the state is neglecting to count, classify, or take action on. So, activists step into these gaps and do this to the best of their ability, and they have been quite effective. The media will go to the activists, who end up becoming authorities on feminicide.

    Q: Can you elaborate on the differences between the practices of these data activists and more standard data science?

    A: One difference is what I’ll call the intimacy and proximity to the rows of the data set. In conventional data science, when you’re analyzing data, typically you’re not also the data collector. However, these activists and groups are involved across the entire pipeline. As a result, there’s a connection and humanization to each line of the data set. For example, there is a school nurse in Texas who runs the site Women Count USA, and she will spend many hours trying to find photographs of victims of feminicide, which represents unusual care paid to each row of a dataset.

    Another point is the sophistication that the data activists have around what their data represent and what the biases are in the data. In mainstream AI and data science, we’re still having conversations where people seem surprised that there is bias in datasets. But I was impressed with the critical sophistication with which the activists approached their data. They gather information from the media and are familiar with the biases media have, and are aware their data is not comprehensive but is still useful. We can hold those two things together. It’s often more comprehensive data than what the institutions themselves have or will release to the public.

    Q: You did not just chronicle the work of activists, but engaged with them as well, and report about that in the book. What did you work on with them?

    A: One big component in the book is the participatory technology development that we engaged in with the activists, and one chapter is a case study of our work with activists to co-design machine learning and AI technology that supports their work. Our team was brainstorming about a system for the activists that would automatically find cases, verify them, and put them right in the database. Interestingly, the activists pushed back on that. They did not want full automation. They felt being, in effect, witnesses is an important part of the work. The emotional burden is an important part of the work and very central to it, too. That’s not something I might always expect to hear from data scientists.

    Keeping the human in the loop also means the human makes the final decision over whether a specific item constitutes feminicide or not. Handling it like that aligns with the fact that there are multiple definitions of feminicide, which is a complicated thing from a computational perspective. The proliferation of definitions about what counts as feminicide is a reflection of the fact that this is an ongoing global, transnational conversation. Feminicide has been codified in many laws, especially in Latin American countries, but none of those single laws is definitive. And no single activist definition is definitive. People are creating this together, through dialogue and struggle, so any computational system has to be designed with that understanding of the democratic process in mind.

  • Arvind, longtime MIT professor and prolific computer scientist, dies at 77

    Arvind Mithal, the Charles W. and Jennifer C. Johnson Professor in Computer Science and Engineering at MIT, head of the faculty of computer science in the Department of Electrical Engineering and Computer Science (EECS), and a pillar of the MIT community, died on June 17. Arvind, who went by the mononym, was 77 years old.

    A prolific researcher who led the Computation Structures Group in the Computer Science and Artificial Intelligence Laboratory (CSAIL), Arvind served on the MIT faculty for nearly five decades.

    “He was beloved by countless people across the MIT community and around the world who were inspired by his intellectual brilliance and zest for life,” President Sally Kornbluth wrote in a letter to the MIT community today.

    As a scientist, Arvind was well known for important contributions to dataflow computing, which seeks to optimize the flow of data to take advantage of parallelism, achieving faster and more efficient computation.

    In the last 25 years, his research interests broadened to include developing techniques and tools for formal modeling, high-level synthesis, and formal verification of complex digital devices like microprocessors and hardware accelerators, as well as memory models and cache coherence protocols for parallel computing architectures and programming languages.

    Those who knew Arvind describe him as a rare individual whose interests and expertise ranged from high-level, theoretical formal systems all the way down through languages and compilers to the gates and structures of silicon hardware.

    The applications of Arvind’s work are far-reaching, from reducing the amount of energy and space required by data centers to streamlining the design of more efficient multicore computer chips.

    “Arvind was both a tremendous scholar in the fields of computer architecture and programming languages and a dedicated teacher, who brought systems-level thinking to our students. He was also an exceptional academic leader, often leading changes in curriculum and contributing to the Engineering Council in meaningful and impactful ways. I will greatly miss his sage advice and wisdom,” says Anantha Chandrakasan, chief innovation and strategy officer, dean of engineering, and the Vannevar Bush Professor of Electrical Engineering and Computer Science.

    “Arvind’s positive energy, together with his hearty laugh, brightened so many people’s lives. He was an enduring source of wise counsel for colleagues and for generations of students. With his deep commitment to academic excellence, he not only transformed research in computer architecture and parallel computing but also brought that commitment to his role as head of the computer science faculty in the EECS department. He left a lasting impact on all of us who had the privilege of working with him,” says Dan Huttenlocher, dean of the MIT Schwarzman College of Computing and the Henry Ellis Warren Professor of Electrical Engineering and Computer Science.

    Arvind developed an interest in parallel computing while he was a student at the Indian Institute of Technology in Kanpur, from which he received his bachelor’s degree in 1969. He earned a master’s degree and PhD in computer science in 1972 and 1973, respectively, from the University of Minnesota, where he studied operating systems and mathematical models of program behavior. He taught at the University of California at Irvine from 1974 to 1978 before joining the faculty at MIT.

    At MIT, Arvind’s group studied parallel computing and declarative programming languages, and he led the development of two parallel computing languages, Id and pH. He continued his work on these programming languages through the 1990s, publishing the book “Implicit Parallel Programming in pH” with co-author R.S. Nikhil in 2001, the culmination of more than 20 years of research.

    In addition to his research, Arvind was an important academic leader in EECS. He served as head of computer science faculty in the department and played a critical role in helping with the reorganization of EECS after the establishment of the MIT Schwarzman College of Computing.

    “Arvind was a force of nature, larger than life in every sense. His relentless positivity, unwavering optimism, boundless generosity, and exceptional strength as a researcher was truly inspiring and left a profound mark on all who had the privilege of knowing him. I feel enormous gratitude for the light he brought into our lives and his fundamental impact on our community,” says Daniela Rus, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science and the director of CSAIL.

    His work on dataflow and parallel computing led to the Monsoon project in the late 1980s and early 1990s. Arvind’s group, in collaboration with Motorola, built 16 dataflow computing machines and developed their associated software. One Monsoon dataflow machine is now in the Computer History Museum in Mountain View, California.

    Arvind’s focus shifted in the 1990s when, as he explained in a 2012 interview for the Institute of Electrical and Electronics Engineers (IEEE), funding for research into parallel computing began to dry up.

    “Microprocessors were getting so much faster that people thought they didn’t need it,” he recalled.

    Instead, he began applying techniques his team had learned and developed for parallel programming to the principled design of digital hardware.

    In addition to mentoring students and junior colleagues at MIT, Arvind also advised universities and governments in many countries on research in parallel programming and semiconductor design.

    Based on his work on digital hardware design, Arvind founded Sandburst in 2000, a fabless manufacturing company for semiconductor chips. He served as the company’s president for two years before returning to the MIT faculty, while continuing as an advisor. Sandburst was later acquired by Broadcom.

    Arvind and his students also developed Bluespec, a programming language designed to automate the design of chips. Building off this work, he co-founded the startup Bluespec, Inc., in 2003, to develop practical tools that help engineers streamline device design.

    Over the past decade, he was dedicated to advancing undergraduate education at MIT by bringing modern design tools to courses 6.004 and 6.191 (Computation Structures), and incorporating Minispec, a programming language that is closely related to Bluespec.

    Arvind was honored for these and other contributions to data flow and multithread computing, and the development of tools for the high-level synthesis of hardware, with membership in the National Academy of Engineering in 2008 and the American Academy of Arts and Sciences in 2012. He was also named a distinguished alumnus of IIT Kanpur, his undergraduate alma mater.

    “Arvind was more than a pillar of the EECS community and a titan of computer science; he was a beloved colleague and a treasured friend. Those of us with the remarkable good fortune to work and collaborate with Arvind are devastated by his sudden loss. His kindness and joviality were unwavering; his mentorship was thoughtful and well-considered; his guidance was priceless. We will miss Arvind deeply,” says Asu Ozdaglar, deputy dean of the MIT Schwarzman College of Computing and head of EECS.

    Among numerous other awards, including membership in the Indian National Academy of Sciences and fellowship in the Association for Computing Machinery and IEEE, he received the Harry H. Goode Memorial Award from IEEE in 2012, which honors significant contributions to theory or practice in the information processing field.

    A humble scientist, Arvind was the first to point out that these achievements were only possible because of his outstanding and brilliant collaborators. Chief among those collaborators were the undergraduate and graduate students he felt fortunate to work with at MIT. He maintained excellent relationships with them both professionally and personally, and valued these relationships more than the work they did together, according to family members.

    In summing up the key to his scientific success, Arvind put it this way in the 2012 IEEE interview: “Really, one has to do what one believes in. I think the level at which most of us work, it is not sustainable if you don’t enjoy it on a day-to-day basis. You can’t work on it just because of the results. You have to work on it because you say, ‘I have to know the answer to this,’” he said.

    He is survived by his wife, Gita Singh Mithal, their two sons Divakar ’01 and Prabhakar ’04, their wives Leena and Nisha, and two grandchildren, Maya and Vikram.

  • MIT-Takeda Program wraps up with 16 publications, a patent, and nearly two dozen projects completed

    When the Takeda Pharmaceutical Co. and the MIT School of Engineering launched their collaboration focused on artificial intelligence in health care and drug development in February 2020, society was on the cusp of a globe-altering pandemic and AI was far from the buzzword it is today.

    As the program concludes, the world looks very different. AI has become a transformative technology across industries including health care and pharmaceuticals, while the pandemic has altered the way many businesses approach health care and changed how they develop and sell medicines.

    For both MIT and Takeda, the program has been a game-changer.

    When it launched, the collaborators hoped the program would help solve tangible, real-world problems. By its end, the program has yielded a catalog of new research papers, discoveries, and lessons learned, including a patent for a system that could improve the manufacturing of small-molecule medicines.

    Ultimately, the program allowed both entities to create a foundation for a world where AI and machine learning play a pivotal role in medicine, leveraging Takeda’s expertise in biopharmaceuticals and the MIT researchers’ deep understanding of AI and machine learning.

    “The MIT-Takeda Program has been tremendously impactful and is a shining example of what can be accomplished when experts in industry and academia work together to develop solutions,” says Anantha Chandrakasan, MIT’s chief innovation and strategy officer, dean of the School of Engineering, and the Vannevar Bush Professor of Electrical Engineering and Computer Science. “In addition to resulting in research that has advanced how we use AI and machine learning in health care, the program has opened up new opportunities for MIT faculty and students through fellowships, funding, and networking.”

    What made the program unique was that it was centered around several concrete challenges spanning drug development that Takeda needed help addressing. MIT faculty had the opportunity to select the projects based on their area of expertise and general interest, allowing them to explore new areas within health care and drug development.

    “It was focused on Takeda’s toughest business problems,” says Anne Heatherington, Takeda’s research and development chief data and technology officer and head of its Data Sciences Institute.

    “They were problems that colleagues were really struggling with on the ground,” adds Simon Davies, the executive director of the MIT-Takeda Program and Takeda’s global head of statistical and quantitative sciences. Takeda saw an opportunity to collaborate with MIT’s world-class researchers, who were working only a few blocks away. Takeda, a global pharmaceutical company headquartered in Japan, has its global business units and R&D center just down the street from the Institute.

    As part of the program, MIT faculty were able to select what issues they were interested in working on from a group of potential Takeda projects. Then, collaborative teams including MIT researchers and Takeda employees approached research questions in two rounds. Over the course of the program, collaborators worked on 22 projects focused on topics including drug discovery and research, clinical drug development, and pharmaceutical manufacturing. Over 80 MIT students and faculty joined more than 125 Takeda researchers and staff on teams addressing these research questions.

    The projects centered around not only hard problems, but also the potential for solutions to scale within Takeda or within the biopharmaceutical industry more broadly.

    Some of the program’s findings have already resulted in wider studies. One group’s results, for instance, showed that using artificial intelligence to analyze speech may allow for earlier detection of frontotemporal dementia, while making that diagnosis more quickly and inexpensively. Similar algorithmic analyses of speech in patients diagnosed with ALS may also help clinicians understand the progression of that disease. Takeda is continuing to test both AI applications.

    Other discoveries and AI models that resulted from the program’s research have already had an impact. Using a physical model and AI learning algorithms can help detect particle size, mix, and consistency for powdered, small-molecule medicines, for instance, speeding up production timelines. Based on their research under the program, collaborators have filed for a patent for that technology.

    For injectable medicines like vaccines, AI-enabled inspections can also reduce process time and false rejection rates. Replacing human visual inspections with AI processes has already shown measurable impact for the pharmaceutical company.

    Heatherington adds, “Our lessons learned are really setting the stage for what we’re doing next, really embedding AI and gen-AI [generative AI] into everything that we do moving forward.”

    Over the course of the program, more than 150 Takeda researchers and staff also participated in educational programming organized by the Abdul Latif Jameel Clinic for Machine Learning in Health. In addition to providing research opportunities, the program funded 10 students through SuperUROP, the Advanced Undergraduate Research Opportunities Program, as well as two cohorts from the DHIVE health-care innovation program, part of the MIT Sandbox Innovation Fund Program.

    Though the formal program has ended, certain aspects of the collaboration will continue, such as the MIT-Takeda Fellows, which supports graduate students as they pursue groundbreaking research related to health and AI. During its run, the program supported 44 MIT-Takeda Fellows and will continue to support MIT students through an endowment fund. Organic collaboration between MIT and Takeda researchers will also carry forward. And the program’s collaborators are working to create a model for similar academic and industry partnerships to widen the impact of this first-of-its-kind collaboration.

  • Scientists preserve DNA in an amber-like polymer

    In the movie “Jurassic Park,” scientists extracted DNA that had been preserved in amber for millions of years, and used it to create a population of long-extinct dinosaurs.

    Inspired partly by that film, MIT researchers have developed a glassy, amber-like polymer that can be used for long-term storage of DNA, whether entire human genomes or digital files such as photos.

    Most current methods for storing DNA require freezing temperatures, so they consume a great deal of energy and are not feasible in many parts of the world. In contrast, the new amber-like polymer can store DNA at room temperature while protecting the molecules from damage caused by heat or water.

    The researchers showed that they could use this polymer to store DNA sequences encoding the theme music from Jurassic Park, as well as an entire human genome. They also demonstrated that the DNA can be easily removed from the polymer without damaging it.

    “Freezing DNA is the number one way to preserve it, but it’s very expensive, and it’s not scalable,” says James Banal, a former MIT postdoc. “I think our new preservation method is going to be a technology that may drive the future of storing digital information on DNA.”

    Banal and Jeremiah Johnson, the A. Thomas Geurtin Professor of Chemistry at MIT, are the senior authors of the study, published yesterday in the Journal of the American Chemical Society. Former MIT postdoc Elizabeth Prince and MIT postdoc Ho Fung Cheng are the lead authors of the paper.

    Capturing DNA

    DNA, a very stable molecule, is well-suited for storing massive amounts of information, including digital data. Digital storage systems encode text, photos, and other kinds of information as a series of 0s and 1s. This same information can be encoded in DNA using the four nucleotides that make up the genetic code: A, T, G, and C. For example, G and C could be used to represent 0 while A and T represent 1.

    DNA offers a way to store this digital information at very high density: In theory, a coffee mug full of DNA could store all of the world’s data. DNA is also very stable and relatively easy to synthesize and sequence.

    In 2021, Banal and his postdoc advisor, Mark Bathe, an MIT professor of biological engineering, developed a way to store DNA in particles of silica, which could be labeled with tags that revealed the particles’ contents. That work led to a spinout called Cache DNA.

    One downside to that storage system is that it takes several days to embed DNA into the silica particles. Furthermore, removing the DNA from the particles requires hydrofluoric acid, which can be hazardous to workers handling the DNA.

    To come up with alternative storage materials, Banal began working with Johnson and members of his lab. Their idea was to use a type of polymer known as a degradable thermoset, which consists of polymers that form a solid when heated. The material also includes cleavable links that can be easily broken, allowing the polymer to be degraded in a controlled way.

    “With these deconstructable thermosets, depending on what cleavable bonds we put into them, we can choose how we want to degrade them,” Johnson says.

    For this project, the researchers decided to make their thermoset polymer from styrene and a cross-linker, which together form an amber-like thermoset called cross-linked polystyrene. This thermoset is also very hydrophobic, so it can prevent moisture from getting in and damaging the DNA. To make the thermoset degradable, the styrene monomers and cross-linkers are copolymerized with monomers called thionolactones. These links can be broken by treating them with a molecule called cysteamine.

    Because styrene is so hydrophobic, the researchers had to come up with a way to entice DNA — a hydrophilic, negatively charged molecule — into the styrene.

    To do that, they identified a combination of three monomers that they could turn into polymers that dissolve DNA by helping it interact with styrene. Each of the monomers has different features that cooperate to get the DNA out of water and into the styrene. There, the DNA forms spherical complexes, with charged DNA in the center and hydrophobic groups forming an outer layer that interacts with styrene. When heated, this solution becomes a solid glass-like block, embedded with DNA complexes.

    The researchers dubbed their method T-REX (Thermoset-REinforced Xeropreservation). The process of embedding DNA into the polymer network takes a few hours, but that could become shorter with further optimization, the researchers say.

    To release the DNA, the researchers first add cysteamine, which cleaves the bonds holding the polystyrene thermoset together, breaking it into smaller pieces. Then, a detergent called SDS can be added to remove the DNA from polystyrene without damaging it.

    Storing information

    Using these polymers, the researchers showed that they could encapsulate DNA of varying length, from tens of nucleotides up to an entire human genome (more than 50,000 base pairs). They were able to store DNA encoding the Emancipation Proclamation and the MIT logo, in addition to the theme music from “Jurassic Park.”

    After storing the DNA and then removing it, the researchers sequenced it and found that no errors had been introduced, which is a critical feature of any digital data storage system.

    The researchers also showed that the thermoset polymer can protect DNA from temperatures up to 75 degrees Celsius (167 degrees Fahrenheit). They are now working on ways to streamline the process of making the polymers and forming them into capsules for long-term storage.

    Cache DNA, a company started by Banal and Bathe, with Johnson as a member of the scientific advisory board, is now working on further developing DNA storage technology. The earliest application they envision is storing genomes for personalized medicine, and they also anticipate that these stored genomes could undergo further analysis as better technology is developed in the future.

    “The idea is, why don’t we preserve the master record of life forever?” Banal says. “Ten years or 20 years from now, when technology has advanced way more than we could ever imagine today, we could learn more and more things. We’re still in the very infancy of understanding the genome and how it relates to disease.”

    The research was funded by the National Science Foundation.
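    As a simple illustration of the bit-to-nucleotide mapping the article describes (G and C standing for 0, A and T for 1), here is a minimal Python sketch. The particular choice of which base encodes each bit, and the decode table, are assumptions made for this example only; they are not the encoding scheme used in the study.

    ```python
    # Toy illustration of storing bits in DNA bases, following the article's example
    # mapping: G or C can stand for 0, A or T for 1. Alternating between the two
    # bases for each bit value is an arbitrary choice made here for illustration.

    def bits_to_dna(bits: str) -> str:
        """Encode a string of '0'/'1' characters as a DNA sequence."""
        zero_bases, one_bases = "GC", "AT"
        out = []
        for i, b in enumerate(bits):
            if b == "0":
                out.append(zero_bases[i % 2])   # alternate G/C for 0s
            elif b == "1":
                out.append(one_bases[i % 2])    # alternate A/T for 1s
            else:
                raise ValueError(f"not a bit: {b!r}")
        return "".join(out)

    def dna_to_bits(seq: str) -> str:
        """Decode a DNA sequence back to bits under the same mapping."""
        table = {"G": "0", "C": "0", "A": "1", "T": "1"}
        return "".join(table[base] for base in seq.upper())

    message = "MIT"
    bits = "".join(f"{byte:08b}" for byte in message.encode("ascii"))
    dna = bits_to_dna(bits)
    assert dna_to_bits(dna) == bits
    print(bits)  # 010011010100100101010100
    print(dna)   # one possible 24-base encoding of the three ASCII bytes
    ```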

  • Researchers use large language models to help robots navigate

    Someday, you may want your home robot to carry a load of dirty clothes downstairs and deposit them in the washing machine in the far-left corner of the basement. The robot will need to combine your instructions with its visual observations to determine the steps it should take to complete this task.

    For an AI agent, this is easier said than done. Current approaches often utilize multiple hand-crafted machine-learning models to tackle different parts of the task, which require a great deal of human effort and expertise to build. These methods, which use visual representations to directly make navigation decisions, demand massive amounts of visual data for training, which are often hard to come by.

    To overcome these challenges, researchers from MIT and the MIT-IBM Watson AI Lab devised a navigation method that converts visual representations into pieces of language, which are then fed into one large language model that achieves all parts of the multistep navigation task.

    Rather than encoding visual features from images of a robot’s surroundings as visual representations, which is computationally intensive, their method creates text captions that describe the robot’s point-of-view. A large language model uses the captions to predict the actions a robot should take to fulfill a user’s language-based instructions.

    Because their method utilizes purely language-based representations, they can use a large language model to efficiently generate a huge amount of synthetic training data.

    While this approach does not outperform techniques that use visual features, it performs well in situations that lack enough visual data for training. The researchers found that combining their language-based inputs with visual signals leads to better navigation performance.

    “By purely using language as the perceptual representation, ours is a more straightforward approach. Since all the inputs can be encoded as language, we can generate a human-understandable trajectory,” says Bowen Pan, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on this approach.

    Pan’s co-authors include his advisor, Aude Oliva, director of strategic industry engagement at the MIT Schwarzman College of Computing, MIT director of the MIT-IBM Watson AI Lab, and a senior research scientist in the Computer Science and Artificial Intelligence Laboratory (CSAIL); Philip Isola, an associate professor of EECS and a member of CSAIL; senior author Yoon Kim, an assistant professor of EECS and a member of CSAIL; and others at the MIT-IBM Watson AI Lab and Dartmouth College. The research will be presented at the Conference of the North American Chapter of the Association for Computational Linguistics.

    Solving a vision problem with language

    Since large language models are the most powerful machine-learning models available, the researchers sought to incorporate them into the complex task known as vision-and-language navigation, Pan says.

    But such models take text-based inputs and can’t process visual data from a robot’s camera. So, the team needed to find a way to use language instead.

    Their technique utilizes a simple captioning model to obtain text descriptions of a robot’s visual observations. These captions are combined with language-based instructions and fed into a large language model, which decides what navigation step the robot should take next.

    The large language model outputs a caption of the scene the robot should see after completing that step. This is used to update the trajectory history so the robot can keep track of where it has been.

    The model repeats these processes to generate a trajectory that guides the robot to its goal, one step at a time.

    To streamline the process, the researchers designed templates so observation information is presented to the model in a standard form — as a series of choices the robot can make based on its surroundings.

    For instance, a caption might say “to your 30-degree left is a door with a potted plant beside it, to your back is a small office with a desk and a computer,” etc. The model chooses whether the robot should move toward the door or the office.

    “One of the biggest challenges was figuring out how to encode this kind of information into language in a proper way to make the agent understand what the task is and how they should respond,” Pan says.

    Advantages of language

    When they tested this approach, while it could not outperform vision-based techniques, they found that it offered several advantages.

    First, because text requires fewer computational resources to synthesize than complex image data, their method can be used to rapidly generate synthetic training data. In one test, they generated 10,000 synthetic trajectories based on 10 real-world, visual trajectories.

    The technique can also bridge the gap that can prevent an agent trained with a simulated environment from performing well in the real world. This gap often occurs because computer-generated images can appear quite different from real-world scenes due to elements like lighting or color. But language that describes a synthetic versus a real image would be much harder to tell apart, Pan says.

    Also, the representations their model uses are easier for a human to understand because they are written in natural language.

    “If the agent fails to reach its goal, we can more easily determine where it failed and why it failed. Maybe the history information is not clear enough or the observation ignores some important details,” Pan says.

    In addition, their method could be applied more easily to varied tasks and environments because it uses only one type of input. As long as data can be encoded as language, they can use the same model without making any modifications.

    But one disadvantage is that their method naturally loses some information that would be captured by vision-based models, such as depth information.

    However, the researchers were surprised to see that combining language-based representations with vision-based methods improves an agent’s ability to navigate.

    “Maybe this means that language can capture some higher-level information that cannot be captured with pure vision features,” he says.

    This is one area the researchers want to continue exploring. They also want to develop a navigation-oriented captioner that could boost the method’s performance. In addition, they want to probe the ability of large language models to exhibit spatial awareness and see how this could aid language-based navigation.

    This research is funded, in part, by the MIT-IBM Watson AI Lab.
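    The sketch below is a minimal Python illustration of the kind of caption-to-LLM-to-action loop described above. It is not the researchers' code: the function names, prompt layout, observation format, and the rule-based stand-in for the language model are all invented for this example, and the real system would call an actual captioning model and LLM where the stand-ins appear.

    ```python
    # Minimal sketch of a caption -> LLM -> action navigation loop.
    # Everything here (function names, prompt layout, the rule-based stand-in for
    # the language model) is illustrative only; it is not the MIT/IBM system.

    from typing import Callable, List

    def caption_observation(observation: dict) -> str:
        """Stand-in for an image-captioning model: turn a (fake) observation
        into the kind of option-listing caption the article quotes."""
        return ", ".join(f"to your {direction} is {thing}"
                         for direction, thing in observation.items())

    def navigate(instruction: str,
                 observations: List[dict],
                 choose_action: Callable[[str], str]) -> List[str]:
        """Caption each observation, build a prompt from the instruction plus
        trajectory history, and let the 'LLM' pick the next step."""
        history: List[str] = []
        for obs in observations:
            caption = caption_observation(obs)
            prompt = (f"Instruction: {instruction}\n"
                      f"History: {' | '.join(history) or 'none'}\n"
                      f"Current view: {caption}\n"
                      "Next action?")
            action = choose_action(prompt)   # in a real system, an LLM call
            history.append(f"{caption} -> {action}")
        return history

    # A trivial rule-based stand-in for the language model, just so the sketch runs.
    def toy_llm(prompt: str) -> str:
        return "go toward the door" if "door" in prompt else "turn around"

    steps = navigate(
        "take the laundry to the washing machine in the basement",
        [{"30-degree left": "a door with a potted plant", "back": "a small office"},
         {"front": "a staircase going down"}],
        toy_llm,
    )
    print("\n".join(steps))
    ```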

  • Making climate models relevant for local decision-makers

    Climate models are a key technology in predicting the impacts of climate change. By running simulations of the Earth’s climate, scientists and policymakers can estimate conditions like sea level rise, flooding, and rising temperatures, and make decisions about how to appropriately respond. But current climate models struggle to provide this information quickly or affordably enough to be useful on smaller scales, such as the size of a city.

    Now, authors of a new open-access paper published in the Journal of Advances in Modeling Earth Systems have found a method to leverage machine learning to utilize the benefits of current climate models, while reducing the computational costs needed to run them.

    “It turns the traditional wisdom on its head,” says Sai Ravela, a principal research scientist in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS) who wrote the paper with EAPS postdoc Anamitra Saha.

    Traditional wisdom

    In climate modeling, downscaling is the process of using a global climate model with coarse resolution to generate finer details over smaller regions. Imagine a digital picture: A global model is a large picture of the world with a low number of pixels. To downscale, you zoom in on just the section of the photo you want to look at — for example, Boston. But because the original picture was low resolution, the new version is blurry; it doesn’t give enough detail to be particularly useful.

    “If you go from coarse resolution to fine resolution, you have to add information somehow,” explains Saha. Downscaling attempts to add that information back in by filling in the missing pixels. “That addition of information can happen two ways: Either it can come from theory, or it can come from data.”

    Conventional downscaling often involves using models built on physics (such as the process of air rising, cooling, and condensing, or the landscape of the area), and supplementing it with statistical data taken from historical observations. But this method is computationally taxing: It takes a lot of time and computing power to run, while also being expensive.

    A little bit of both

    In their new paper, Saha and Ravela have figured out a way to add the data another way. They’ve employed a technique in machine learning called adversarial learning. It uses two machines: One generates data to go into our photo. But the other machine judges the sample by comparing it to actual data. If it thinks the image is fake, then the first machine has to try again until it convinces the second machine. The end-goal of the process is to create super-resolution data.

    Using machine learning techniques like adversarial learning is not a new idea in climate modeling; where it currently struggles is its inability to handle large amounts of basic physics, like conservation laws. The researchers discovered that simplifying the physics going in and supplementing it with statistics from the historical data was enough to generate the results they needed.

    “If you augment machine learning with some information from the statistics and simplified physics both, then suddenly, it’s magical,” says Ravela. He and Saha started with estimating extreme rainfall amounts by removing more complex physics equations and focusing on water vapor and land topography. They then generated general rainfall patterns for mountainous Denver and flat Chicago alike, applying historical accounts to correct the output. “It’s giving us extremes, like the physics does, at a much lower cost. And it’s giving us similar speeds to statistics, but at much higher resolution.”

    Another unexpected benefit of the results was how little training data was needed. “The fact that only a little bit of physics and little bit of statistics was enough to improve the performance of the ML [machine learning] model … was actually not obvious from the beginning,” says Saha. It only takes a few hours to train, and can produce results in minutes, an improvement over the months other models take to run.

    Quantifying risk quickly

    Being able to run the models quickly and often is a key requirement for stakeholders such as insurance companies and local policymakers. Ravela gives the example of Bangladesh: By seeing how extreme weather events will impact the country, decisions about what crops should be grown or where populations should migrate to can be made considering a very broad range of conditions and uncertainties as soon as possible.

    “We can’t wait months or years to be able to quantify this risk,” he says. “You need to look out way into the future and at a large number of uncertainties to be able to say what might be a good decision.”

    While the current model only looks at extreme precipitation, training it to examine other critical events, such as tropical storms, winds, and temperature, is the next step of the project. With a more robust model, Ravela is hoping to apply it to other places like Boston and Puerto Rico as part of a Climate Grand Challenges project.

    “We’re very excited both by the methodology that we put together, as well as the potential applications that it could lead to,” he says.
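    For readers unfamiliar with adversarial learning, here is a generic, minimal PyTorch sketch of the two-machine setup described above: a generator proposes high-resolution fields from coarse inputs, and a discriminator judges them against "real" high-resolution data. The network shapes, the random stand-in data, and the training settings are invented for illustration, and the sketch deliberately omits the simplified physics and historical statistics that the paper says are essential to its results; it only shows the adversarial loop itself. It assumes the torch package is installed.

    ```python
    # Generic adversarial-learning (GAN) sketch for super-resolution of gridded
    # fields. Illustration only; not the model from the paper. The "data" here
    # are random tensors standing in for coarse model output and fine observations.

    import torch
    import torch.nn as nn

    COARSE, FINE = 8, 32          # coarse grid 8x8, fine grid 32x32 (made up)

    generator = nn.Sequential(    # upsample a coarse field to fine resolution
        nn.Upsample(size=(FINE, FINE), mode="bilinear", align_corners=False),
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, 3, padding=1),
    )
    discriminator = nn.Sequential(  # score how "real" a fine-resolution field looks
        nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Flatten(), nn.Linear(16 * (FINE // 2) ** 2, 1),
    )

    opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    for step in range(200):
        coarse = torch.randn(16, 1, COARSE, COARSE)   # stand-in coarse model output
        real_fine = torch.randn(16, 1, FINE, FINE)    # stand-in high-res observations

        # Discriminator: label real fields 1 and generated fields 0.
        fake_fine = generator(coarse).detach()
        d_loss = bce(discriminator(real_fine), torch.ones(16, 1)) + \
                 bce(discriminator(fake_fine), torch.zeros(16, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator: try to make the discriminator call its output real.
        g_loss = bce(discriminator(generator(coarse)), torch.ones(16, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    print("final losses:", d_loss.item(), g_loss.item())
    ```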