More stories

  • Blueprint Labs launches a charter school research collaborative

    Over the past 30 years, charter schools have emerged as a prominent yet debated public school option. According to the National Center for Education Statistics, 7 percent of U.S. public school students were enrolled in charter schools in 2021, up from 4 percent in 2010. Amid this expansion, families and policymakers want to know more about charter school performance and its systemic impacts. While researchers have evaluated charter schools’ short-term effects on student outcomes, significant knowledge gaps still exist. 

    MIT Blueprint Labs aims to fill those gaps through its Charter School Research Collaborative, an initiative that brings together practitioners, policymakers, researchers, and funders to make research on charter schools more actionable, rigorous, and efficient. The collaborative will create infrastructure to streamline and fund high-quality, policy-relevant charter research. 

    Joshua Angrist, MIT Ford Professor of Economics and a Blueprint Labs co-founder and director, says that Blueprint Labs hopes “to increase [its] impact by working with a larger group of academic and practitioner partners.” A nonpartisan research lab, Blueprint’s mission is to produce the most rigorous evidence possible to inform policy and practice. Angrist notes, “The debate over charter schools is not always fact-driven. Our goal at the lab is to bring convincing evidence into these discussions.”

    Collaborative kickoff

    The collaborative launched with a two-day kickoff in November. Blueprint Labs welcomed researchers, practitioners, funders, and policymakers to MIT to lay the groundwork for the collaborative. Over 80 participants joined the event, including leaders of charter school organizations, researchers at top universities and institutes, and policymakers and advocates from a variety of organizations and education agencies. 

    Through a series of panels, presentations, and conversations, participants discussed critical topics in the charter school space. They included Rhode Island Department of Education Commissioner Angélica Infante-Green, Noble Schools CEO Constance Jones, former Knowledge Is Power Program CEO Richard Barth, National Association of Charter School Authorizers President and CEO Karega Rausch, and many others. These conversations shaped the collaborative’s research agenda.

    Several sessions also highlighted how to ensure that the research process includes diverse voices to generate actionable evidence. Panelists noted that researchers should be aware of the demands placed on practitioners and should carefully consider community contexts. In addition, collaborators should treat each other as equal partners. 

    Parag Pathak, the Class of 1922 Professor of Economics at MIT and a Blueprint Labs co-founder and director, explained the kickoff’s aims. “One of our goals today is to begin to forge connections between [attendees]. We hope that [their] conversations are the launching point for future collaborations,” he stated. Pathak also shared the next steps for the collaborative: “Beginning next year, we’ll start investing in new research using the agenda [developed at this event] as our guide. We will also support new partnerships between researchers and practitioners.”

    Research agenda

    The discussions at the kickoff informed the collaborative’s research agenda. A recent paper summarizing existing lottery-based research on charter school effectiveness by Sarah Cohodes, an associate professor of public policy at the University of Michigan, and Susha Roy, an associate policy researcher at the RAND Corp., also guides the agenda. Their review finds that in randomized evaluations, many charter schools increase students’ academic achievement. However, researchers have not yet studied charter schools’ impacts on long-term, behavioral, or health outcomes in depth, and rigorous, lottery-based research is currently limited to a handful of urban centers. 

    The current research agenda focuses on seven topics:

    the long-term effects of charter schools;
    the effect of charters on non-test score outcomes;
    which charter school practices have the largest effect on performance;
    how charter performance varies across different contexts;
    how charter school effects vary with demographic characteristics and student background;
    how charter schools impact non-student outcomes, like teacher retention; and
    how system-level factors, such as authorizing practices, impact charter school performance.

    As diverse stakeholders’ priorities shift and the collaborative progresses, the research agenda will continue to evolve.

    Information for interested partners

    Opportunities exist for charter leaders, policymakers, researchers, and funders to engage with the collaborative. Stakeholders can apply for funding, help shape the research agenda, and develop new research partnerships. A competitive funding process will open this month.

    Those interested in receiving updates on the collaborative can fill out this form. Please direct questions to chartercollab@mitblueprintlabs.org.

  • New hope for early pancreatic cancer intervention via AI-based risk prediction

    The first documented case of pancreatic cancer dates back to the 18th century. Since then, researchers have undertaken a protracted and challenging odyssey to understand this elusive and deadly disease. To date, no cancer treatment is more effective than early intervention. Unfortunately, the pancreas, nestled deep within the abdomen, is particularly hard to examine for early detection.

    MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) scientists, alongside Limor Appelbaum, a staff scientist in the Department of Radiation Oncology at Beth Israel Deaconess Medical Center (BIDMC), were eager to better identify potential high-risk patients. They set out to develop two machine-learning models for early detection of pancreatic ductal adenocarcinoma (PDAC), the most common form of the cancer. To access a broad and diverse database, the team synced up with a federated network company, using electronic health record data from various institutions across the United States. This vast pool of data helped ensure the models’ reliability and generalizability, making them applicable across a wide range of populations, geographical locations, and demographic groups.

    The two models, the “PRISM” neural network and a logistic regression model (a statistical technique for estimating probabilities), outperformed current methods. The team’s comparison showed that while standard screening criteria identify about 10 percent of PDAC cases using a five-times-higher relative risk threshold, PRISM can detect 35 percent of PDAC cases at this same threshold.

    Using AI to detect cancer risk is not a new phenomenon — algorithms analyze mammograms and CT scans for lung cancer, and assist in the analysis of Pap smear and HPV tests, to name a few applications. “The PRISM models stand out for their development and validation on an extensive database of over 5 million patients, surpassing the scale of most prior research in the field,” says Kai Jia, an MIT PhD student in electrical engineering and computer science (EECS), MIT CSAIL affiliate, and first author on an open-access paper in eBioMedicine outlining the new work. “The model uses routine clinical and lab data to make its predictions, and the diversity of the U.S. population is a significant advancement over other PDAC models, which are usually confined to specific geographic regions, like a few health-care centers in the U.S. Additionally, using a unique regularization technique in the training process enhanced the models’ generalizability and interpretability.”

    “This report outlines a powerful approach to use big data and artificial intelligence algorithms to refine our approach to identifying risk profiles for cancer,” says David Avigan, a Harvard Medical School professor and the cancer center director and chief of hematology and hematologic malignancies at BIDMC, who was not involved in the study. “This approach may lead to novel strategies to identify patients with high risk for malignancy that may benefit from focused screening with the potential for early intervention.” 

    Prismatic perspectives

    The journey toward the development of PRISM began over six years ago, fueled by firsthand experiences with the limitations of current diagnostic practices. “Approximately 80-85 percent of pancreatic cancer patients are diagnosed at advanced stages, where cure is no longer an option,” says senior author Appelbaum, who is also a Harvard Medical School instructor and a radiation oncologist. “This clinical frustration sparked the idea to delve into the wealth of data available in electronic health records (EHRs).”

    The CSAIL group’s close collaboration with Appelbaum made it possible to understand the combined medical and machine-learning aspects of the problem better, eventually leading to a much more accurate and transparent model. “The hypothesis was that these records contained hidden clues — subtle signs and symptoms that could act as early warning signals of pancreatic cancer,” she adds. “This guided our use of federated EHR networks in developing these models, for a scalable approach for deploying risk prediction tools in health care.”

    Both PrismNN and PrismLR models analyze EHR data, including patient demographics, diagnoses, medications, and lab results, to assess PDAC risk. PrismNN uses artificial neural networks to detect intricate patterns in data features like age, medical history, and lab results, yielding a risk score for PDAC likelihood. PrismLR uses logistic regression for a simpler analysis, generating a probability score of PDAC based on these features. Together, the models offer a thorough evaluation of different approaches in predicting PDAC risk from the same EHR data.

    One paramount point for gaining the trust of physicians, the team notes, is better understanding how the models work, known in the field as interpretability. The scientists pointed out that while logistic regression models are inherently easier to interpret, recent advancements have made deep neural networks somewhat more transparent. This helped the team refine the thousands of potentially predictive features derived from a patient’s EHR down to approximately 85 critical indicators. These indicators, which include patient age, diabetes diagnosis, and an increased frequency of visits to physicians, are automatically discovered by the model but match physicians’ understanding of the risk factors associated with pancreatic cancer.
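
    To make the general pattern concrete, the sketch below shows a logistic-regression risk model over tabular EHR-style features: fit a model, score each patient with a probability-like risk, and rank standardized coefficients as a rough stand-in for the “critical indicators” described above. The feature names, labels, and data are illustrative assumptions only; this is not the published PrismLR pipeline, whose features, cohort, and regularization scheme are far more involved.

    ```python
    # Minimal sketch of a PrismLR-style workflow on synthetic, hypothetical
    # EHR-derived features. The real models use far richer features and
    # millions of patients, as described in the paper.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    feature_names = ["age", "diabetes_dx", "visits_last_year", "lab_value"]
    rng = np.random.default_rng(0)
    n = 1_000
    X = np.column_stack([
        rng.normal(60, 12, n),       # age in years
        rng.integers(0, 2, n),       # diabetes diagnosis flag (0/1)
        rng.poisson(4, n),           # clinic visits in the past year
        rng.normal(100, 15, n),      # an illustrative lab measurement
    ])
    y = rng.integers(0, 2, n)        # placeholder PDAC labels for the sketch

    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    model.fit(X, y)

    # Each patient gets a probability-like risk score; in practice the chosen
    # threshold trades sensitivity against how many patients are flagged.
    risk = model.predict_proba(X)[:, 1]
    print(f"mean risk score on the synthetic cohort: {risk.mean():.3f}")

    # Ranking standardized coefficients is one simple interpretability view:
    # larger |coefficient| means larger influence on the risk score.
    coefs = model.named_steps["logisticregression"].coef_[0]
    for name, c in sorted(zip(feature_names, coefs), key=lambda t: -abs(t[1])):
        print(f"{name:18s} {c:+.3f}")
    ```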

    The path forward

    Despite the promise of the PRISM models, as with all research, some parts are still a work in progress. The models are currently trained on U.S. data alone, necessitating testing and adaptation for global use. The path forward, the team notes, includes expanding the models’ applicability to international datasets and integrating additional biomarkers for more refined risk assessment.

    “A subsequent aim for us is to facilitate the models’ implementation in routine health care settings. The vision is to have these models function seamlessly in the background of health care systems, automatically analyzing patient data and alerting physicians to high-risk cases without adding to their workload,” says Jia. “A machine-learning model integrated with the EHR system could empower physicians with early alerts for high-risk patients, potentially enabling interventions well before symptoms manifest. We are eager to deploy our techniques in the real world to help all individuals enjoy longer, healthier lives.” 

    Jia wrote the paper alongside Appelbaum and MIT EECS Professor and CSAIL Principal Investigator Martin Rinard, who are both senior authors of the paper. Researchers on the paper were supported during their time at MIT CSAIL, in part, by the Defense Advanced Research Projects Agency, Boeing, the National Science Foundation, and Aarno Labs. TriNetX provided resources for the project, and the Prevent Cancer Foundation also supported the team.

  • Self-powered sensor automatically harvests magnetic energy

    MIT researchers have developed a battery-free, self-powered sensor that can harvest energy from its environment.

    Because it requires no battery that must be recharged or replaced, and because it requires no special wiring, such a sensor could be embedded in a hard-to-reach place, like inside the inner workings of a ship’s engine. There, it could automatically gather data on the machine’s power consumption and operations for long periods of time.

    The researchers built a temperature-sensing device that harvests energy from the magnetic field generated in the open air around a wire. One could simply clip the sensor around a wire that carries electricity — perhaps the wire that powers a motor — and it will automatically harvest and store energy, which it uses to monitor the motor’s temperature.

    “This is ambient power — energy that I don’t have to make a specific, soldered connection to get. And that makes this sensor very easy to install,” says Steve Leeb, the Emanuel E. Landsman Professor of Electrical Engineering and Computer Science (EECS) and professor of mechanical engineering, a member of the Research Laboratory of Electronics, and senior author of a paper on the energy-harvesting sensor.

    In the paper, which appeared as the featured article in the January issue of the IEEE Sensors Journal, the researchers offer a design guide for an energy-harvesting sensor that lets an engineer balance the available energy in the environment with their sensing needs.

    The paper lays out a roadmap for the key components of a device that can sense and control the flow of energy continually during operation.

    The versatile design framework is not limited to sensors that harvest magnetic field energy, and can be applied to those that use other power sources, like vibrations or sunlight. It could be used to build networks of sensors for factories, warehouses, and commercial spaces that cost less to install and maintain.

    “We have provided an example of a battery-less sensor that does something useful, and shown that it is a practically realizable solution. Now others will hopefully use our framework to get the ball rolling to design their own sensors,” says lead author Daniel Monagle, an EECS graduate student.

    Monagle and Leeb are joined on the paper by EECS graduate student Eric Ponce.

    John Donnal, an associate professor of weapons and controls engineering at the U.S. Naval Academy who was not involved with this work, studies techniques to monitor ship systems. Getting access to power on a ship can be difficult, he says, since there are very few outlets and strict restrictions as to what equipment can be plugged in.

    “Persistently measuring the vibration of a pump, for example, could give the crew real-time information on the health of the bearings and mounts, but powering a retrofit sensor often requires so much additional infrastructure that the investment is not worthwhile,” Donnal adds. “Energy-harvesting systems like this could make it possible to retrofit a wide variety of diagnostic sensors on ships and significantly reduce the overall cost of maintenance.”

    A how-to guide

    The researchers had to meet three key challenges to develop an effective, battery-free, energy-harvesting sensor.

    First, the system must be able to cold start, meaning it can fire up its electronics with no initial voltage. They accomplished this with a network of integrated circuits and transistors that allow the system to store energy until it reaches a certain threshold. The system will only turn on once it has stored enough power to fully operate.
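
    As a rough illustration of that cold-start behavior, the simulation below shows the pattern: the node stays dark until the storage voltage crosses a turn-on threshold, runs until the voltage sags below a cutoff, and then waits to recharge. The thresholds, capacitance, and currents are invented values, not the circuit parameters from the paper.

    ```python
    # Toy cold-start simulation with made-up numbers: charge until v_on,
    # operate until v_off, repeat. Illustrative only.
    def simulate_cold_start(v_on=3.0, v_off=2.2, capacitance=0.1,
                            harvest_current=0.002, load_current=0.010,
                            dt=0.1, t_end=600.0):
        v, t, on = 0.0, 0.0, False
        events = []
        while t < t_end:
            # Net current into the capacitor depends on whether the load runs.
            i_net = harvest_current - (load_current if on else 0.0)
            v = max(v + i_net * dt / capacitance, 0.0)
            if not on and v >= v_on:
                on = True
                events.append((round(t, 1), "turn on"))
            elif on and v <= v_off:
                on = False
                events.append((round(t, 1), "turn off"))
            t += dt
        return events

    print(simulate_cold_start())
    ```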

    Second, the system must store and convert the energy it harvests efficiently, and without a battery. While the researchers could have included a battery, that would add extra complexities to the system and could pose a fire risk.

    “You might not even have the luxury of sending out a technician to replace a battery. Instead, our system is maintenance-free. It harvests energy and operates itself,” Monagle adds.

    To avoid using a battery, they incorporate internal energy storage that can include a series of capacitors. Simpler than a battery, a capacitor stores energy in the electrical field between conductive plates. Capacitors can be made from a variety of materials, and their capabilities can be tuned to a range of operating conditions, safety requirements, and available space.

    The team carefully designed the capacitors so they are big enough to store the energy the device needs to turn on and start harvesting power, but small enough that the charge-up phase doesn’t take too long.

    In addition, since a sensor might go weeks or even months before turning on to take a measurement, they ensured the capacitors can hold enough energy even if some leaks out over time.
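
    That sizing trade-off comes down to simple energy arithmetic: the usable energy between a fully charged voltage V_max and the minimum operating voltage V_min is 0.5 * C * (V_max^2 - V_min^2). The sketch below runs that calculation with made-up numbers; the actual energy budget in the paper depends on the specific harvester and sensor load.

    ```python
    # Back-of-the-envelope capacitor sizing with illustrative numbers only.
    def usable_energy(capacitance_f, v_max, v_min):
        """Energy (joules) available between v_max and v_min."""
        return 0.5 * capacitance_f * (v_max**2 - v_min**2)

    def capacitance_needed(energy_j, v_max, v_min):
        """Capacitance (farads) required to hold energy_j between v_max and v_min."""
        return 2.0 * energy_j / (v_max**2 - v_min**2)

    # Suppose (hypothetically) startup plus one measure-and-transmit cycle
    # needs about 50 millijoules between 3.3 V and 2.0 V:
    c = capacitance_needed(0.050, v_max=3.3, v_min=2.0)
    print(f"required capacitance ≈ {c * 1000:.1f} mF")
    print(f"check: {usable_energy(c, 3.3, 2.0) * 1000:.1f} mJ usable")
    ```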

    Finally, they developed a series of control algorithms that dynamically measure and budget the energy collected, stored, and used by the device. A microcontroller, the “brain” of the energy management interface, constantly checks how much energy is stored and infers whether to turn the sensor on or off, take a measurement, or kick the harvester into a higher gear so it can gather more energy for more complex sensing needs.

    “Just like when you change gears on a bike, the energy management interface looks at how the harvester is doing, essentially seeing whether it is pedaling too hard or too soft, and then it varies the electronic load so it can maximize the amount of power it is harvesting and match the harvest to the needs of the sensor,” Monagle explains.
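
    A toy version of that decision loop might look like the following, with invented states, thresholds, and energy costs standing in for the paper’s control algorithms: the controller checks stored energy, picks the most useful action it can afford while keeping a safety reserve, and “shifts gears” on the harvester when the reserve runs low.

    ```python
    # Toy energy-management loop in the spirit described above; all numbers
    # and states are invented for illustration.
    from enum import Enum

    class Action(Enum):
        SLEEP = "sleep"
        MEASURE = "measure"
        MEASURE_AND_TRANSMIT = "measure_and_transmit"

    def choose_action(stored_mj, measure_cost_mj=2.0, transmit_cost_mj=20.0,
                      reserve_mj=10.0):
        """Pick the most useful action that still leaves a safety reserve."""
        if stored_mj >= reserve_mj + measure_cost_mj + transmit_cost_mj:
            return Action.MEASURE_AND_TRANSMIT
        if stored_mj >= reserve_mj + measure_cost_mj:
            return Action.MEASURE
        return Action.SLEEP

    def harvester_gear(stored_mj, target_mj=60.0):
        """'Shift gears': harvest harder when the reserve is low, back off when full."""
        return "high" if stored_mj < 0.5 * target_mj else "low"

    for stored in (5.0, 15.0, 45.0, 80.0):
        print(stored, choose_action(stored).value, harvester_gear(stored))
    ```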

    Self-powered sensor

    Using this design framework, they built an energy management circuit for an off-the-shelf temperature sensor. The device harvests magnetic field energy and uses it to continually sample temperature data, which it sends to a smartphone interface using Bluetooth.

    The researchers used super-low-power circuits to design the device, but quickly found that these circuits have tight restrictions on how much voltage they can withstand before breaking down. Harvesting too much power could cause the device to explode.

    To avoid that, their energy harvester operating system in the microcontroller automatically adjusts or reduces the harvest if the amount of stored energy becomes excessive.

    They also found that communication — transmitting data gathered by the temperature sensor — was by far the most power-hungry operation.

    “Ensuring the sensor has enough stored energy to transmit data is a constant challenge that involves careful design,” Monagle says.

    In the future, the researchers plan to explore less energy-intensive means of transmitting data, such as using optics or acoustics. They also want to more rigorously model and predict how much energy might be coming into a system, or how much energy a sensor might need to take measurements, so a device could effectively gather even more data.

    “If you only make the measurements you think you need, you may miss something really valuable. With more information, you might be able to learn something you didn’t expect about a device’s operations. Our framework lets you balance those considerations,” Leeb says.  

    “This paper is well-documented regarding what a practical self-powered sensor node should internally entail for realistic scenarios. The overall design guidelines, particularly on the cold-start issue, are very helpful,” says Jinyeong Moon, an assistant professor of electrical and computer engineering at Florida State University College of Engineering who was not involved with this work. “Engineers planning to design a self-powering module for a wireless sensor node will greatly benefit from these guidelines, easily ticking off traditionally cumbersome cold-start-related checklists.”

    The work is supported, in part, by the Office of Naval Research and The Grainger Foundation.

  • 3 Questions: Renaud Fournier on transforming MIT’s digital landscape

    Renaud Fournier SM ’95 joined the Institute in September 2023 in the newly established role of chief officer for business and digital transformation and is leading a team focused on simplifying business operations and systems for the MIT community. Fournier has extensive experience implementing systems and solving data challenges, both in higher education and the private sector — most recently, leading the digital transformation effort at New York University. Here, Fournier speaks about how he and his team will work closely with members of the MIT community to chart a course for MIT’s digital evolution.

    Q: What are MIT’s enterprise systems and how are they challenging for our community?

    A: The MIT community relies on our enterprise systems for a range of activities — everything from hiring and evaluating employees to managing research grants and facilities projects to maintaining student information. SAP is our current enterprise resource planning system for human resources, finance, and facilities management, and it’s integrated with other systems that provide additional business functionality. Some of these systems are purchased, like Coupa, while others are partially or fully homegrown, like Kuali Coeus and NIMBUS. Along with SAP, our other core systems — for example, Advance and MITSIS — feed data into a central data warehouse to support reporting.

    MIT’s enterprise systems and data landscape has evolved organically over the past 30 years. The Institute has become considerably more complex over that time, and these systems no longer reflect the best practices or technology available in the IT market.

    Q: What digital transformation projects are you most focused on?

    A: Our primary goal is to free up our community’s time so that they can achieve their greatest impact. The vision is to create easy-to-use and well-integrated systems, along with comprehensible and accessible data for reporting and analysis. To accomplish this, we will be taking a series of actions. These include modernizing our enterprise systems and data architecture to take advantage of better technology and functionality, within a cohesive and well-integrated landscape, and simplifying our business processes. To make our data accessible and actionable, we will implement more robust data governance, assigning clear ownership and accountability. And we will offer IT support that enables our community to accomplish its objectives. We need to address systems, processes, data, and support holistically, while engaging and assisting our community every step of the way.    

    Q: What are your next steps?

    A: Over the next few months, I will be building a team to guide the community on this journey, in partnership with IS&T [Information Systems and Technology], other central units, and our academic areas. Together, we will be developing a thoughtful and actionable multi-year roadmap of digital transformation projects, which will help us to produce a steady stream of improvements for our community. We have not selected any systems yet or determined the order in which they will be implemented. Engagement with stakeholders from central, academic, and research areas will inform how we prioritize projects over the next few years. Once we have created the roadmap to guide us, we look forward to the next phase — getting started on the work itself.

  • Bridging the gap between preschool policy, practice, and research

    Preschool in the United States has grown dramatically in the past several decades. From 1970 to 2018, preschool enrollment increased from 38 percent to 64 percent of eligible students. Fourteen states are currently discussing preschool expansion, with seven likely to pass some form of universal eligibility within the next calendar year. Amid this expansion, families, policymakers, and practitioners want to better understand preschools’ impacts and the factors driving preschool quality. 

    To address these and other questions, MIT Blueprint Labs recently held a Preschool Research Convening that brought researchers, funders, practitioners, and policymakers to Nashville, Tennessee, to discuss the future of preschool research. Parag Pathak, the Class of 1922 Professor of Economics at MIT and a Blueprint Labs co-founder and director, opened by sharing the goals of the convening: “Our goals for the next two days are to identify pressing, unanswered research questions and connect researchers, practitioners, policymakers, and funders. We also hope to craft a compelling research agenda.”

    Pathak added, “Given preschool expansion nationwide, we believe now is the moment to centralize our efforts and create knowledge to inform pressing decisions. We aim to generate rigorous preschool research that will lead to higher-quality and more equitable preschool.”

    Over 75 participants hailing from universities, early childhood education organizations, school districts, state education departments, and national policy organizations attended the convening, held Nov. 13-14. Through panels, presentations, and conversations, participants discussed essential subjects in the preschool space, built the foundations for valuable partnerships, and formed an actionable and inclusive research agenda.

    Research presented

    Among the research presented was a recent paper by Blueprint Labs affiliate Jesse Bruhn, an assistant professor of economics at Brown University, and co-author Emily Emick, also of Brown, reviewing the state of lottery-based preschool research. They found that randomized evaluations from the past 60 years demonstrate that preschool improves children’s short-run academic outcomes, but those effects fade over time. However, positive impacts re-emerge in the long term through improved outcomes like high school graduation and college enrollment. Little rigorous research has studied children’s behavioral outcomes or the factors that lead to high-quality preschool, though trends from preliminary research suggest that full-day programs, language immersion programs, and specific curricula may benefit children.

    An earlier Blueprint Labs study that was also presented at the convening is the only recent lottery-based study to provide insight on preschool’s long-term impacts. The work, conducted by Pathak and two others, reveals that enrolling in Boston Public Schools’ universal preschool program boosts children’s likelihood of graduating high school and enrolling in college. Yet, the preschool program had little detectable impact on elementary, middle, and high school state standardized test scores. Students who attended Boston preschool were less likely to be suspended or incarcerated in high school. However, research on preschool’s impacts on behavioral outcomes is limited; it remains an important area for further study. Future work could also fill in other gaps in research, such as access, alternative measures of student success, and variation across geographic contexts and student populations.

    More data sought

    State policy leaders also spoke at the event, including Lisa Roy, executive director of the Colorado Department of Early Childhood, and Sarah Neville-Morgan, deputy superintendent in the Opportunities for All Branch at the California Department of Education. Local practitioners, such as Elsa Holguín, president and CEO of the Denver Preschool Program, and Kristin Spanos, CEO of First 5 Alameda County, as well as national policy leaders including Lauren Hogan, managing director of policy and professional advancement at the National Association for the Education of Young Children, also shared their perspectives. 

    In panel discussions held throughout the kickoff, practitioners, policymakers, and researchers shared their perspectives on pressing questions for future research, including: What practices define high-quality preschool? How does preschool affect family systems and the workforce? How can we expand measures of effectiveness to move beyond traditional assessments? What can we learn from preschool’s differential impacts across time, settings, models, and geographies?

    Panelists also discussed the need for reliable data, sharing that “the absence of data allows the status quo to persist.” Several sessions focused on involving diverse stakeholders in the research process, highlighting the need for transparency, sensitivity to community contexts, and accessible communication about research findings.

    On the second day of the Preschool Research Convening, Pathak shared with attendees, “One of our goals… is to forge connections between all of you in this room and support new partnerships between researchers and practitioners. We hope your conversations are the launching pad for future collaborations.” Jason Sachs, the deputy director of early learning at the Bill and Melinda Gates Foundation and former director of early childhood at Boston Public Schools, provided closing remarks.

    The convening laid the groundwork for a research agenda and new research partnerships that can help answer questions about what works, in what context, for which kids, and under which conditions. Answers to these questions will be fundamental to ensure preschool expands in the most evidence-informed and equitable way possible.

    With this goal in mind, Blueprint Labs aims to create a new Preschool Research Collaborative to equip practitioners, policymakers, funders, and researchers with rigorous, actionable evidence on preschool performance. Pathak states, “We hope this collaborative will foster evidence-based decision-making that improves children’s short- and long-term outcomes.” The connections and research agenda formed at the Preschool Research Convening are the first steps toward achieving that goal.

  • Multiple AI models help robots execute complex plans more transparently

    Your daily to-do list is likely pretty straightforward: wash the dishes, buy groceries, and other minutiae. It’s unlikely you wrote out “pick up the first dirty dish,” or “wash that plate with a sponge,” because each of these miniature steps within the chore feels intuitive. While we can routinely complete each step without much thought, a robot requires a complex plan that involves more detailed outlines.

    MIT’s Improbable AI Lab, a group within the Computer Science and Artificial Intelligence Laboratory (CSAIL), has offered these machines a helping hand with a new multimodal framework: Compositional Foundation Models for Hierarchical Planning (HiP), which develops detailed, feasible plans with the expertise of three different foundation models. Like OpenAI’s GPT-4, the foundation model that ChatGPT and Bing Chat were built upon, these foundation models are trained on massive quantities of data for applications like generating images, translating text, and robotics.

    Unlike RT2 and other multimodal models that are trained on paired vision, language, and action data, HiP uses three different foundation models, each trained on a different data modality. Each foundation model captures a different part of the decision-making process and then works together with the others when it’s time to make decisions. HiP removes the need for access to paired vision, language, and action data, which is difficult to obtain. HiP also makes the reasoning process more transparent.

    What’s considered a daily chore for a human can be a robot’s “long-horizon goal” — an overarching objective that involves completing many smaller steps first — requiring sufficient data to plan, understand, and execute objectives. While computer vision researchers have attempted to build monolithic foundation models for this problem, pairing language, visual, and action data is expensive. Instead, HiP represents a different, multimodal recipe: a trio that cheaply incorporates linguistic, physical, and environmental intelligence into a robot.

    “Foundation models do not have to be monolithic,” says NVIDIA AI researcher Jim Fan, who was not involved in the paper. “This work decomposes the complex task of embodied agent planning into three constituent models: a language reasoner, a visual world model, and an action planner. It makes a difficult decision-making problem more tractable and transparent.”

    The team believes that their system could help these machines accomplish household chores, such as putting away a book or placing a bowl in the dishwasher. Additionally, HiP could assist with multistep construction and manufacturing tasks, like stacking and placing different materials in specific sequences.

    Evaluating HiP

    The CSAIL team tested HiP’s acuity on three manipulation tasks, outperforming comparable frameworks. The system reasoned by developing intelligent plans that adapt to new information.

    First, the researchers requested that it stack different-colored blocks on each other and then place others nearby. The catch: Some of the correct colors weren’t present, so the robot had to place white blocks in a color bowl to paint them. HiP often adapted to these changes accurately, adjusting its plans to stack and place each block as needed, and it outperformed state-of-the-art task planning systems like Transformer BC and Action Diffuser.

    Another test: arranging objects such as candy and a hammer in a brown box while ignoring other items. Some of the objects it needed to move were dirty, so HiP adjusted its plans to place them in a cleaning box, and then into the brown container. In a third demonstration, the bot was able to ignore unnecessary objects to complete kitchen sub-goals such as opening a microwave, clearing a kettle out of the way, and turning on a light. Some of the prompted steps had already been completed, so the robot adapted by skipping those directions.

    A three-pronged hierarchy

    HiP’s three-pronged planning process operates as a hierarchy, with the ability to pre-train each of its components on different sets of data, including information outside of robotics. At the bottom of that order is a large language model (LLM), which starts to ideate by capturing all the symbolic information needed and developing an abstract task plan. Applying the common sense knowledge it finds on the internet, the model breaks its objective into sub-goals. For example, “making a cup of tea” turns into “filling a pot with water,” “boiling the pot,” and the subsequent actions required.

    “All we want to do is take existing pre-trained models and have them successfully interface with each other,” says Anurag Ajay, a PhD student in the MIT Department of Electrical Engineering and Computer Science (EECS) and a CSAIL affiliate. “Instead of pushing for one model to do everything, we combine multiple ones that leverage different modalities of internet data. When used in tandem, they help with robotic decision-making and can potentially aid with tasks in homes, factories, and construction sites.”

    These models also need some form of “eyes” to understand the environment they’re operating in and correctly execute each sub-goal. The team used a large video diffusion model to augment the initial planning completed by the LLM, which collects geometric and physical information about the world from footage on the internet. In turn, the video model generates an observation trajectory plan, refining the LLM’s outline to incorporate new physical knowledge.

    This process, known as iterative refinement, allows HiP to reason about its ideas, taking in feedback at each stage to generate a more practical outline. The flow of feedback is similar to writing an article: an author may send a draft to an editor, and once those revisions are incorporated, the publisher reviews the piece for any last changes and finalizes it.
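
    A rough way to picture the hierarchy is as three pluggable stages with feedback between them. The sketch below is purely schematic: stub functions stand in for the language, video, and action models, and the interfaces and names are invented for illustration. It mirrors the data flow described in this article, not the actual HiP code.

    ```python
    # Schematic sketch of a HiP-style hierarchy with stand-in components.
    from typing import List

    def llm_subgoals(goal: str) -> List[str]:
        """Stand-in for the language model: splits a long-horizon goal into sub-goals."""
        return ["fill a pot with water", "boil the pot", "steep the tea"]

    def video_model_refine(plan: dict, observation: str) -> dict:
        """Stand-in for the video diffusion model: refines the plan with an
        observation trajectory consistent with the scene and the sub-goal."""
        prev = plan.get("trajectory") or "initial outline"
        return {**plan, "trajectory": f"frames refining [{prev}] for '{plan['subgoal']}'"}

    def action_model(plan: dict, observation: str) -> List[str]:
        """Stand-in for the egocentric action model: maps the observation plan
        onto the space visible to the robot and emits concrete actions."""
        return [f"execute actions for '{plan['subgoal']}' in {observation}"]

    def hip_plan(goal: str, observation: str, refinement_rounds: int = 2) -> List[str]:
        actions: List[str] = []
        for subgoal in llm_subgoals(goal):
            plan = {"subgoal": subgoal, "trajectory": None}
            # Iterative refinement: feedback at each stage improves the outline.
            for _ in range(refinement_rounds):
                plan = video_model_refine(plan, observation)
            actions += action_model(plan, observation)
        return actions

    for step in hip_plan("make a cup of tea", observation="kitchen scene"):
        print(step)
    ```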

    In this case, the top of the hierarchy is an egocentric action model, which uses a sequence of first-person images to infer which actions should take place based on the robot’s surroundings. During this stage, the observation plan from the video model is mapped over the space visible to the robot, helping the machine decide how to execute each task within the long-horizon goal. If a robot uses HiP to make tea, this means it will have mapped out exactly where the pot, sink, and other key visual elements are, and can begin completing each sub-goal.

    Still, the multimodal work is limited by the lack of high-quality video foundation models. Once available, they could interface with HiP’s small-scale video models to further enhance visual sequence prediction and robot action generation. A higher-quality version would also reduce the current data requirements of the video models.

    That being said, the CSAIL team’s approach only used a tiny bit of data overall. Moreover, HiP was cheap to train and demonstrated the potential of using readily available foundation models to complete long-horizon tasks. “What Anurag has demonstrated is proof-of-concept of how we can take models trained on separate tasks and data modalities and combine them into models for robotic planning. In the future, HiP could be augmented with pre-trained models that can process touch and sound to make better plans,” says senior author Pulkit Agrawal, MIT assistant professor in EECS and director of the Improbable AI Lab. The group is also considering applying HiP to solving real-world long-horizon tasks in robotics.

    Ajay and Agrawal are lead authors on a paper describing the work. They are joined by MIT professors and CSAIL principal investigators Tommi Jaakkola, Joshua Tenenbaum, and Leslie Pack Kaelbling; CSAIL research affiliate and MIT-IBM Watson AI Lab research manager Akash Srivastava; graduate students Seungwook Han and Yilun Du ’19; former postdoc Abhishek Gupta, who is now an assistant professor at the University of Washington; and former graduate student Shuang Li PhD ’23.

    The team’s work was supported, in part, by the National Science Foundation, the U.S. Defense Advanced Research Projects Agency, the U.S. Army Research Office, the U.S. Office of Naval Research Multidisciplinary University Research Initiatives, and the MIT-IBM Watson AI Lab. Their findings were presented at the 2023 Conference on Neural Information Processing Systems (NeurIPS).

  • Co-creating climate futures with real-time data and spatial storytelling

    Virtual story worlds and game engines aren’t just for video games anymore. They are now tools for scientists and storytellers to digitally twin existing physical spaces and then turn them into vessels to dream up speculative climate stories and build collective designs of the future. That’s the theory and practice behind the MIT WORLDING initiative.

    Twice this year, WORLDING matched world-class climate story teams working in XR (extended reality) with relevant labs and researchers across MIT. One global group returned for a virtual gathering online in partnership with Unity for Humanity, while another met for one weekend in person, hosted at the MIT Media Lab.

    “We are witnessing the birth of an emergent field that fuses climate science, urban planning, real-time 3D engines, nonfiction storytelling, and speculative fiction, and it is all fueled by the urgency of the climate crises,” says Katerina Cizek, lead designer of the WORLDING initiative at the Co-Creation Studio of MIT Open Documentary Lab. “Interdisciplinary teams are forming and blossoming around the planet to collectively imagine and tell stories of healthy, livable worlds in virtual 3D spaces and then finding direct ways to translate that back to earth, literally.”

    At this year’s virtual version of WORLDING, five multidisciplinary teams were selected from an open call. In a week-long series of research and development gatherings, the teams met with MIT scientists, staff, fellows, students, and graduates, as well as other leading figures in the field. Guests ranged from curators at film festivals such as Sundance and Venice, climate policy specialists, and award-winning media creators to software engineers and renowned Earth and atmosphere scientists. The teams heard from MIT scholars in diverse domains, including geomorphology, urban planning as acts of democracy, and climate researchers at MIT Media Lab.

    Mapping climate data

    “We are measuring the Earth’s environment in increasingly data-driven ways. Hundreds of terabytes of data are taken every day about our planet in order to study the Earth as a holistic system, so we can address key questions about global climate change,” explains Rachel Connolly, an MIT Media Lab research scientist focused in the “Future Worlds” research theme, in a talk to the group. “Why is this important for your work and storytelling in general? Having the capacity to understand and leverage this data is critical for those who wish to design for and successfully operate in the dynamic Earth environment.”

    Making sense of billions of data points was a key theme during this year’s sessions. In another talk, Taylor Perron, an MIT professor of Earth, atmospheric and planetary sciences, shared how his team uses computational modeling combined with many other scientific processes to better understand how geology, climate, and life intertwine to shape the surfaces of Earth and other planets. His work resonated with one WORLDING team in particular, one aiming to digitally reconstruct the pre-Hispanic Lake Texcoco — where current day Mexico City is now situated — as a way to contrast and examine the region’s current water crisis.

    Democratizing the future

    While WORLDING approaches rely on rigorous science and the interrogation of large datasets, they are also founded on democratizing community-led approaches.

    MIT Department of Urban Studies and Planning graduate Lafayette Cruise MCP ’19 met with the teams to discuss how he moved his own practice as a trained urban planner to include a futurist component involving participatory methods. “I felt we were asking the same limited questions in regards to the future we were wanting to produce. We’re very limited, very constrained, as to whose values and comforts are being centered. There are so many possibilities for how the future could be.”

    Scaling to reach billions

    This work scales from the very local to massive global populations. Climate policymakers are concerned with reaching billions of people in the line of fire. “We have a goal to reach 1 billion people with climate resilience solutions,” says Nidhi Upadhyaya, deputy director at Atlantic Council’s Adrienne Arsht-Rockefeller Foundation Resilience Center. To get that reach, Upadhyaya is turning to games. “There are 3.3 billion-plus people playing video games across the world. Half of these players are women. This industry is worth $300 billion. Africa is currently among the fastest-growing gaming markets in the world, and 55 percent of the global players are in the Asia Pacific region.” She reminded the group that this conversation is about policy and how formats of mass communication can be used for policymaking, bringing about change, changing behavior, and creating empathy within audiences.

    Socially engaged game development is also connected to education at Unity Technologies, a game engine company. “We brought together our education and social impact work because we really see it as a critical flywheel for our business,” said Jessica Lindl, vice president and global head of social impact/education at Unity Technologies, in the opening talk of WORLDING. “We upscale about 900,000 students, in university and high school programs around the world, and about 800,000 adults who are actively learning and reskilling and upskilling in Unity. Ultimately resulting in our mission of the ‘world is a better place with more creators in it,’ millions of creators who reach billions of consumers — telling the world stories, and fostering a more inclusive, sustainable, and equitable world.”

    Access to these technologies is key, especially the hardware. “Accessibility has been missing in XR,” explains Reginé Gilbert, who studies and teaches accessibility and disability in user experience design at New York University. “XR is being used in artificial intelligence, assistive technology, business, retail, communications, education, empathy, entertainment, recreation, events, gaming, health, rehabilitation meetings, navigation, therapy, training, video programming, virtual assistance wayfinding, and so many other uses. This is a fun fact for folks: 97.8 percent of the world hasn’t tried VR [virtual reality] yet, actually.”

    Meanwhile, new hardware is on its way. The WORLDING group got early insights into the highly anticipated Apple Vision Pro headset, which promises to integrate many forms of XR and personal computing in one device. “They’re really pushing this kind of pass-through or mixed reality,” said Dan Miller, a Unity engineer on the poly spatial team, collaborating with Apple, who described the experience of the device as “You are viewing the real world. You’re pulling up windows, you’re interacting with content. It’s a kind of spatial computing device where you have multiple apps open, whether it’s your email client next to your messaging client with a 3D game in the middle. You’re interacting with all these things in the same space and at different times.”

    “WORLDING combines our passion for social-impact storytelling and incredible innovative storytelling,” said Paisley Smith of the Unity for Humanity Program at Unity Technologies. She added, “This is an opportunity for creators to incubate their game-changing projects and connect with experts across climate, story, and technology.”

    Meeting at MIT

    In a new in-person iteration of WORLDING this year, organizers collaborated closely with Connolly at the MIT Media Lab to co-design an in-person weekend conference Oct. 25 – Nov. 7 with 45 scholars and professionals who visualize climate data at NASA, the National Oceanic and Atmospheric Administration, planetariums, and museums across the United States.

    A participant said of the event, “An incredible workshop that had had a profound effect on my understanding of climate data storytelling and how to combine different components together for a more [holistic] solution.”

    “With this gathering under our new Future Worlds banner,” says Dava Newman, director of the MIT Media Lab and the Apollo Program Professor of Astronautics, “the Media Lab seeks to affect human behavior and help societies everywhere to improve life here on Earth and in worlds beyond, so that all — the sentient, natural, and cosmic — worlds may flourish.”

    “WORLDING’s virtual-only component has been our biggest strength because it has enabled a true, international cohort to gather, build, and create together. But this year, an in-person version showed broader opportunities that spatial interactivity generates — informal Q&As, physical worksheets, and larger-scale ideation, all leading to deeper trust-building,” says WORLDING producer Srushti Kamat SM ’23.

    The future and potential of WORLDING lie in the ongoing dialogue between the virtual and physical, both in the work itself and in the format of the workshops.

  • Technique could efficiently solve partial differential equations for numerous applications

    In fields such as physics and engineering, partial differential equations (PDEs) are used to model complex physical processes to generate insight into how some of the most complicated physical and natural systems in the world function.

    To solve these difficult equations, researchers use high-fidelity numerical solvers, which can be very time-consuming and computationally expensive to run. The current simplified alternative, data-driven surrogate models, computes a goal property of the PDE solution rather than the whole solution. These surrogates are trained on a set of data generated by the high-fidelity solver and learn to predict the PDE output for new inputs. This approach is data-intensive and expensive, because complex physical systems require a large number of simulations to generate enough training data.

    In a new paper, “Physics-enhanced deep surrogates for partial differential equations,” published in December in Nature Machine Intelligence, a new method is proposed for developing data-driven surrogate models for complex physical systems in such fields as mechanics, optics, thermal transport, fluid dynamics, physical chemistry, and climate models.

    The paper was authored by MIT’s professor of applied mathematics Steven G. Johnson along with Payel Das and Youssef Mroueh of the MIT-IBM Watson AI Lab and IBM Research; Chris Rackauckas of Julia Lab; and Raphaël Pestourie, a former MIT postdoc who is now at Georgia Tech. The authors call their method “physics-enhanced deep surrogate” (PEDS), which combines a low-fidelity, explainable physics simulator with a neural network generator. The neural network generator is trained end-to-end to match the output of the high-fidelity numerical solver.
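
    To make that structure concrete, here is a toy sketch of the end-to-end training pattern: a small neural network generates the input to a cheap, differentiable low-fidelity model, and the pair is trained so the combined output matches high-fidelity data. The stand-in “solver,” data, and dimensions are invented for illustration, and the sketch uses PyTorch, whereas the actual PEDS implementation couples real physics solvers and is developed in Julia.

    ```python
    # Toy sketch of the PEDS training pattern: NN generator -> cheap
    # differentiable low-fidelity model -> output matched to high-fidelity data.
    import torch
    from torch import nn

    def low_fidelity_model(effective_param: torch.Tensor) -> torch.Tensor:
        # Stand-in for a coarse physics solver: a cheap closed-form map.
        return torch.tanh(effective_param) + 0.1 * effective_param**2

    class Generator(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, 1))

        def forward(self, design: torch.Tensor) -> torch.Tensor:
            # Maps a design (e.g., geometry parameters) to an effective input
            # for the low-fidelity solver.
            return self.net(design)

    # Pretend high-fidelity data: designs and the "expensive" solver's outputs.
    torch.manual_seed(0)
    designs = torch.rand(256, 3)
    high_fidelity = (designs.sum(dim=1, keepdim=True) - 1.0).sin()

    gen = Generator()
    opt = torch.optim.Adam(gen.parameters(), lr=1e-2)
    for step in range(500):
        pred = low_fidelity_model(gen(designs))   # end-to-end: NN -> physics -> output
        loss = nn.functional.mse_loss(pred, high_fidelity)
        opt.zero_grad()
        loss.backward()
        opt.step()
    print(f"final surrogate MSE on the toy data: {loss.item():.4f}")
    ```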

    “My aspiration is to replace the inefficient process of trial and error with systematic, computer-aided simulation and optimization,” says Pestourie. “Recent breakthroughs in AI like the large language model of ChatGPT rely on hundreds of billions of parameters and require vast amounts of resources to train and evaluate. In contrast, PEDS is affordable to all because it is incredibly efficient in computing resources and has a very low barrier in terms of infrastructure needed to use it.”

    In the article, they show that PEDS surrogates can be up to three times more accurate than an ensemble of feedforward neural networks with limited data (approximately 1,000 training points), and reduce the training data needed by at least a factor of 100 to achieve a target error of 5 percent. Developed using the MIT-designed Julia programming language, this scientific machine-learning method is thus efficient in both computing and data.

    The authors also report that PEDS provides a general, data-driven strategy to bridge the gap between a vast array of simplified physical models with corresponding brute-force numerical solvers modeling complex systems. This technique offers accuracy, speed, data efficiency, and physical insights into the process.

    Says Pestourie, “Since the 2000s, as computing capabilities improved, the trend of scientific models has been to increase the number of parameters to fit the data better, sometimes at the cost of a lower predictive accuracy. PEDS does the opposite by choosing its parameters smartly. It leverages the technology of automatic differentiation to train a neural network that makes a model with few parameters accurate.”

    “The main challenge that prevents surrogate models from being used more widely in engineering is the curse of dimensionality — the fact that the needed data to train a model increases exponentially with the number of model variables,” says Pestourie. “PEDS reduces this curse by incorporating information from the data and from the field knowledge in the form of a low-fidelity model solver.”

    The researchers say that PEDS has the potential to revive a whole body of the pre-2000 literature dedicated to minimal models — intuitive models that PEDS could make more accurate while also being predictive for surrogate model applications.

    “The application of the PEDS framework is beyond what we showed in this study,” says Das. “Complex physical systems governed by PDEs are ubiquitous, from climate modeling to seismic modeling and beyond. Our physics-inspired fast and explainable surrogate models will be of great use in those applications, and play a complementary role to other emerging techniques, like foundation models.”

    The research was supported by the MIT-IBM Watson AI Lab and the U.S. Army Research Office through the Institute for Soldier Nanotechnologies.