More stories

  • Improving US air quality, equitably

    Decarbonization of national economies will be key to achieving global net-zero emissions by 2050, a major stepping stone to the Paris Agreement’s long-term goal of keeping global warming well below 2 degrees Celsius (and ideally 1.5 C), and thereby averting the worst consequences of climate change. Toward that end, the United States has pledged to reduce its greenhouse gas emissions by 50-52 percent from 2005 levels by 2030, backed by its implementation of the 2022 Inflation Reduction Act. This strategy is consistent with a 50-percent reduction in carbon dioxide (CO2) by the end of the decade.

    If U.S. federal carbon policy is successful, the nation’s overall air quality will also improve. Cutting CO2 emissions reduces atmospheric concentrations of air pollutants that lead to the formation of fine particulate matter (PM2.5), which causes more than 200,000 premature deaths in the United States each year. But an average nationwide improvement in air quality will not be felt equally; air pollution exposure disproportionately harms people of color and lower-income populations.

    How effective are current federal decarbonization policies in reducing U.S. racial and economic disparities in PM2.5 exposure, and what changes will be needed to improve their performance? To answer that question, researchers at MIT and Stanford University recently evaluated a range of policies that, like current U.S. federal carbon policies, reduce economy-wide CO2 emissions by 40-60 percent from 2005 levels by 2030. Their findings appear in an open-access article in the journal Nature Communications.

    First, they show that a carbon-pricing policy, while effective in reducing PM2.5 exposure for all racial/ethnic groups, does not significantly mitigate relative disparities in exposure. On average, the white population experiences far less exposure than Black, Hispanic, and Asian populations. The policy does little to narrow this gap because the CO2 emissions reductions it achieves occur primarily in the coal-fired electricity sector, while other sectors, such as industry and heavy-duty diesel transportation, contribute far more PM2.5-related emissions.

    The researchers then examine thousands of different reduction options through an optimization approach to identify whether any possible combination of carbon dioxide reductions in the range of 40-60 percent can mitigate disparities. They find that no policy scenario aligned with current U.S. carbon dioxide emissions targets is likely to significantly reduce current PM2.5 exposure disparities.
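
    To make the flavor of that optimization concrete, here is a deliberately simplified sketch, posed as a linear program. The sector names, emissions figures, and exposure burdens below are invented for illustration; the study’s actual model is far richer.

        # Hypothetical illustration (not the study's model): choose per-sector CO2
        # cuts that achieve at least a 40 percent economy-wide reduction while
        # minimizing the PM2.5 exposure gap between two population groups.
        import numpy as np
        from scipy.optimize import linprog

        sectors = ["coal power", "industry", "light transport", "heavy-duty diesel"]
        co2  = np.array([1500.0, 1200.0, 1100.0, 400.0])  # Mt CO2 per sector (made up)
        pm_a = np.array([2.0, 3.5, 1.0, 2.5])             # PM2.5 burden on group A (made up)
        pm_b = np.array([1.5, 1.5, 1.0, 0.8])             # PM2.5 burden on group B (made up)

        # x[i] = fraction of sector i's emissions (CO2 and co-emitted PM2.5) eliminated.
        # Post-policy disparity = (pm_a - pm_b) @ (1 - x), so minimizing it is the same
        # as maximizing (pm_a - pm_b) @ x; linprog minimizes, hence c = -(pm_a - pm_b).
        res = linprog(
            c=-(pm_a - pm_b),
            A_ub=[-co2], b_ub=[-0.4 * co2.sum()],  # at least a 40 percent total cut
            bounds=[(0.0, 1.0)] * len(sectors),
        )

        for name, cut in zip(sectors, res.x):
            print(f"{name:18s} cut {cut:6.1%}")
        print(f"remaining disparity: {(pm_a - pm_b) @ (1 - res.x):.2f}")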

    “Policies that address only about 50 percent of CO2 emissions leave many polluting sources in place, and those that prioritize reductions for minorities tend to benefit the entire population,” says Noelle Selin, supervising author of the study and a professor at MIT’s Institute for Data, Systems, and Society and Department of Earth, Atmospheric and Planetary Sciences. “This means that a large range of policies that reduce CO2 can improve air quality overall, but can’t address long-standing inequities in air pollution exposure.”

    So if climate policy alone cannot adequately achieve equitable air quality results, what viable options remain? The researchers suggest that more ambitious carbon policies could narrow racial and economic PM2.5 exposure disparities in the long term, but not within the next decade. To make a near-term difference, they recommend interventions designed to reduce PM2.5 emissions resulting from non-CO2 sources, ideally at the economic sector or community level.

    “Achieving improved PM2.5 exposure for populations that are disproportionately exposed across the United States will require thinking that goes beyond current CO2 policy strategies, most likely involving large-scale structural changes,” says Selin. “This could involve changes in local and regional transportation and housing planning, together with accelerated efforts towards decarbonization.”

  • From physics to generative AI: An AI model for advanced pattern generation

    Generative AI, which is currently riding a crest of popular discourse, promises a world where the simple transforms into the complex — where a simple distribution evolves into intricate patterns of images, sounds, or text, rendering the artificial startlingly real. 

    The realms of imagination no longer remain as mere abstractions, as researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have brought an innovative AI model to life. Their new technology integrates two seemingly unrelated physical laws that underpin the best-performing generative models to date: diffusion, which typically illustrates the random motion of elements, like heat permeating a room or a gas expanding into space, and Poisson Flow, which draws on the principles governing the activity of electric charges.

    This harmonious blend has resulted in superior performance in generating new images, outpacing existing state-of-the-art models. Since its inception, the “Poisson Flow Generative Model ++” (PFGM++) has found potential applications in various fields, from antibody and RNA sequence generation to audio production and graph generation.

    The model can generate complex patterns, like creating realistic images or mimicking real-world processes. PFGM++ builds off of PFGM, the team’s work from the prior year. PFGM takes inspiration from the mathematics behind the Poisson equation and applies it to the data the model tries to learn from. To do this, the team used a clever trick: They added an extra dimension to their model’s “space,” kind of like going from a 2D sketch to a 3D model. This extra dimension gives more room for maneuvering, places the data in a larger context, and helps one approach the data from all directions when generating new samples.

    “PFGM++ is an example of the kinds of AI advances that can be driven through interdisciplinary collaborations between physicists and computer scientists,” says Jesse Thaler, theoretical particle physicist in MIT’s Laboratory for Nuclear Science’s Center for Theoretical Physics and director of the National Science Foundation’s AI Institute for Artificial Intelligence and Fundamental Interactions (NSF AI IAIFI), who was not involved in the work. “In recent years, AI-based generative models have yielded numerous eye-popping results, from photorealistic images to lucid streams of text. Remarkably, some of the most powerful generative models are grounded in time-tested concepts from physics, such as symmetries and thermodynamics. PFGM++ takes a century-old idea from fundamental physics — that there might be extra dimensions of space-time — and turns it into a powerful and robust tool to generate synthetic but realistic datasets. I’m thrilled to see the myriad of ways ‘physics intelligence’ is transforming the field of artificial intelligence.”

    The underlying mechanism of PFGM isn’t as complex as it might sound. The researchers compared the data points to tiny electric charges placed on a flat plane in a dimensionally expanded world. These charges produce an “electric field,” with the charges looking to move upwards along the field lines into an extra dimension and consequently forming a uniform distribution on a vast imaginary hemisphere. The generation process is like rewinding a videotape: starting with a uniformly distributed set of charges on the hemisphere and tracking their journey back to the flat plane along the electric lines, they align to match the original data distribution. This intriguing process allows the neural model to learn the electric field, and generate new data that mirrors the original. 
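
    To make the rewinding picture concrete, here is a toy NumPy sketch (ours, not the authors’ code) for a one-dimensional dataset whose points act as charges on the z = 0 line of an augmented two-dimensional space. A real PFGM learns the field with a neural network rather than summing over the training set.

        import numpy as np

        rng = np.random.default_rng(0)
        # "Charges": a bimodal 1-D dataset sitting on the z = 0 line.
        data = np.concatenate([rng.normal(-2, 0.3, 500), rng.normal(2, 0.3, 500)])

        def poisson_field(x, z, eps=1e-8):
            """Field at (x, z) from unit charges at (data, 0). In two dimensions
            the field of a point charge falls off as 1/r, i.e. E = r_hat / r."""
            dx, dz = x - data, z
            r2 = dx**2 + dz**2 + eps
            return (dx / r2).mean(), (dz / r2).mean()

        def sample(z_start=40.0, z_min=1e-3, steps=400):
            """Rewind the videotape: start on a large half-circle and follow the
            field lines back down to the data plane."""
            theta = rng.uniform(0.05, 0.95) * np.pi  # avoid starting exactly on the plane
            x, z = z_start * np.cos(theta), z_start * np.sin(theta)
            for z_next in np.geomspace(z, z_min, steps)[1:]:
                ex, ez = poisson_field(x, z)
                x += (z_next - z) * (ex / ez)        # field-line ODE: dx/dz = E_x / E_z
                z = z_next
            return x

        samples = np.array([sample() for _ in range(200)])
        print(samples.mean(), samples.std())  # most samples land near the modes at -2 and 2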

    The PFGM++ model extends the electric field in PFGM to an intricate, higher-dimensional framework. When you keep expanding these dimensions, something unexpected happens — the model starts resembling another important class of models, the diffusion models. This work is all about finding the right balance. The PFGM and diffusion models sit at opposite ends of a spectrum: one is robust but complex to handle, the other simpler but less sturdy. The PFGM++ model offers a sweet spot, striking a balance between robustness and ease of use. This innovation paves the way for more efficient image and pattern generation, marking a significant step forward in technology. Along with adjustable dimensions, the researchers proposed a new training method that enables more efficient learning of the electric field. 

    To bring this theory to life, the team solved a pair of differential equations detailing these charges’ motion within the electric field. They evaluated the performance using the Fréchet Inception Distance (FID) score, a widely accepted metric that assesses the quality of images generated by the model in comparison to real ones. PFGM++ also shows greater resistance to errors and robustness to the step size used in solving the differential equations.
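
    For reference, the FID between two image sets reduces to a closed-form distance between Gaussians fitted to their feature embeddings. A minimal sketch, using random stand-ins for features that in practice come from a pretrained Inception network:

        import numpy as np
        from scipy.linalg import sqrtm

        def fid(feats_real, feats_fake):
            """Frechet distance between Gaussians fitted to two feature sets."""
            mu1, mu2 = feats_real.mean(0), feats_fake.mean(0)
            c1 = np.cov(feats_real, rowvar=False)
            c2 = np.cov(feats_fake, rowvar=False)
            covmean = sqrtm(c1 @ c2)
            if np.iscomplexobj(covmean):  # numerical noise can leave tiny imaginary parts
                covmean = covmean.real
            return ((mu1 - mu2) ** 2).sum() + np.trace(c1 + c2 - 2 * covmean)

        rng = np.random.default_rng(0)
        real = rng.normal(0.0, 1.0, (1000, 64))
        fake = rng.normal(0.1, 1.1, (1000, 64))
        print(f"FID: {fid(real, fake):.3f}")  # 0 only when the two distributions match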

    Looking ahead, they aim to refine certain aspects of the model, particularly systematic ways to identify the “sweet spot” value of D tailored to specific data, architectures, and tasks by analyzing the behavior of neural networks’ estimation errors. They also plan to apply PFGM++ to modern large-scale text-to-image and text-to-video generation.

    “Diffusion models have become a critical driving force behind the revolution in generative AI,” says Yang Song, research scientist at OpenAI. “PFGM++ presents a powerful generalization of diffusion models, allowing users to generate higher-quality images by improving the robustness of image generation against perturbations and learning errors. Furthermore, PFGM++ uncovers a surprising connection between electrostatics and diffusion models, providing new theoretical insights into diffusion model research.”

    “Poisson Flow Generative Models do not only rely on an elegant physics-inspired formulation based on electrostatics, but they also offer state-of-the-art generative modeling performance in practice,” says NVIDIA Senior Research Scientist Karsten Kreis, who was not involved in the work. “They even outperform the popular diffusion models, which currently dominate the literature. This makes them a very powerful generative modeling tool, and I envision their application in diverse areas, ranging from digital content creation to generative drug discovery. More generally, I believe that the exploration of further physics-inspired generative modeling frameworks holds great promise for the future and that Poisson Flow Generative Models are only the beginning.”

    Authors on a paper about this work include three MIT graduate students: Yilun Xu of the Department of Electrical Engineering and Computer Science (EECS) and CSAIL, Ziming Liu of the Department of Physics and the NSF AI IAIFI, and Shangyuan Tong of EECS and CSAIL, as well as Google Senior Research Scientist Yonglong Tian PhD ’23. MIT professors Max Tegmark and Tommi Jaakkola advised the research.

    The team was supported by the MIT-DSTA Singapore collaboration, the MIT-IBM Grand Challenge project, National Science Foundation grants, The Casey and Family Foundation, the Foundational Questions Institute, the Rothberg Family Fund for Cognitive Science, and the ML for Pharmaceutical Discovery and Synthesis Consortium. Their work was presented at the International Conference on Machine Learning this summer.

  • 3 Questions: A new PhD program from the Center for Computational Science and Engineering

    This fall, the Center for Computational Science and Engineering (CCSE), an academic unit in the MIT Schwarzman College of Computing, is introducing a new standalone PhD degree program that will enable students to pursue research in cross-cutting methodological aspects of computational science and engineering. The launch follows approval of the center’s degree program proposal at the May 2023 Institute faculty meeting.

    Doctoral-level graduate study in computational science and engineering (CSE) at MIT has, for the past decade, been offered through an interdisciplinary program in which CSE students are admitted to one of eight participating academic departments in the School of Engineering or School of Science. While this model adds a strong disciplinary component to students’ education, the rapid growth of the CSE field and the establishment of the MIT Schwarzman College of Computing have prompted an exciting expansion of MIT’s graduate-level offerings in computation.

    The new degree, offered by the college, will run alongside MIT’s existing interdisciplinary offerings in CSE, complementing these doctoral training programs and preparing students to contribute to the leading edge of the field. Here, CCSE co-directors Youssef Marzouk and Nicolas Hadjiconstantinou discuss the standalone program and how they expect it to elevate the visibility and impact of CSE research and education at MIT.

    Q: What is computational science and engineering?

    Marzouk: Computational science and engineering focuses on the development and analysis of state-of-the-art methods for computation and their innovative application to problems of science and engineering interest. It has intellectual foundations in applied mathematics, statistics, and computer science, and touches the full range of science and engineering disciplines. Yet, it synthesizes these foundations into a discipline of its own — one that links the digital and physical worlds. It’s an exciting and evolving multidisciplinary field.

    Hadjiconstantinou: Examples of CSE research happening at MIT include modeling and simulation techniques, the underlying computational mathematics, and data-driven modeling of physical systems. Computational statistics and scientific machine learning have become prominent threads within CSE, joining high-performance computing, mathematically oriented programming languages, and their broader links to algorithms and software. Application domains include energy, environment and climate, materials, health, transportation, autonomy, and aerospace, among others. Some of our researchers focus on general and widely applicable methodology, while others choose to focus on methods and algorithms motivated by a specific domain of application.

    Q: What was the motivation behind creating a standalone PhD program?

    Marzouk: The new degree focuses on a particular class of students whose background and interests are primarily in CSE methodology, in a manner that cuts across the disciplinary research structure represented by our current “with-departments” degree program. There is strong research demand for such methodologically focused students among CCSE faculty and MIT faculty in general. Our objective is to create a targeted, coherent degree program in this field that, alongside our other thriving CSE offerings, will create the leading environment for top CSE students worldwide.

    Hadjiconstantinou: One of CCSE’s most important functions is to recruit exceptional students who are trained in and want to work in computational science and engineering. Experience with our CSE master’s program suggests that students with a strong background and interests in the discipline prefer to apply to a pure CSE program for their graduate studies. The standalone degree aims to bring these students to MIT and make them available to faculty across the Institute.

    Q: How will this impact computing education and research at MIT? 

    Hadjiconstantinou: We believe that offering a standalone PhD program in CSE alongside the existing “with-departments” programs will significantly strengthen MIT’s graduate programs in computing. In particular, it will strengthen the methodological core of CSE research and education at MIT, while continuing to support the disciplinary-flavored CSE work taking place in our participating departments, which include Aeronautics and Astronautics; Chemical Engineering; Civil and Environmental Engineering; Materials Science and Engineering; Mechanical Engineering; Nuclear Science and Engineering; Earth, Atmospheric and Planetary Sciences; and Mathematics. Together, these programs will create a stronger CSE student cohort and facilitate deeper exchanges between the college and other units at MIT.

    Marzouk: In a broader sense, the new program is designed to help realize one of the key opportunities presented by the college, which is to create a richer variety of graduate degrees in computation and to involve as many faculty and units in these educational endeavors as possible. The standalone CSE PhD will join other distinguished doctoral programs of the college — such as the Department of Electrical Engineering and Computer Science PhD; the Operations Research Center PhD; and the Interdisciplinary Doctoral Program in Statistics and the Social and Engineering Systems PhD within the Institute for Data, Systems, and Society — and grow in a way that is informed by them. The confluence of these academic programs, and natural synergies among them, will make MIT quite unique.

  • How an archeological approach can help leverage biased data in AI to improve medicine

    The classic computer science adage “garbage in, garbage out” lacks nuance when it comes to understanding biased medical data, argue computer science and bioethics professors from MIT, Johns Hopkins University, and the Alan Turing Institute in a new opinion piece published in a recent edition of the New England Journal of Medicine (NEJM). The rising popularity of artificial intelligence has brought increased scrutiny to the matter of biased AI models resulting in algorithmic discrimination, which the White House Office of Science and Technology Policy identified as a key issue in its recent Blueprint for an AI Bill of Rights.

    When encountering biased data, particularly for AI models used in medical settings, the typical response is to either collect more data from underrepresented groups or generate synthetic data making up for missing parts to ensure that the model performs equally well across an array of patient populations. But the authors argue that this technical approach should be augmented with a sociotechnical perspective that takes both historical and current social factors into account. By doing so, researchers can be more effective in addressing bias in public health. 

    “The three of us had been discussing the ways in which we often treat issues with data from a machine learning perspective as irritations that need to be managed with a technical solution,” recalls co-author Marzyeh Ghassemi, an assistant professor in electrical engineering and computer science and an affiliate of the Abdul Latif Jameel Clinic for Machine Learning in Health (Jameel Clinic), the Computer Science and Artificial Intelligence Laboratory (CSAIL), and Institute of Medical Engineering and Science (IMES). “We had used analogies of data as an artifact that gives a partial view of past practices, or a cracked mirror holding up a reflection. In both cases the information is perhaps not entirely accurate or favorable: Maybe we think that we behave in certain ways as a society — but when you actually look at the data, it tells a different story. We might not like what that story is, but once you unearth an understanding of the past you can move forward and take steps to address poor practices.” 

    Data as artifact 

    In the paper, titled “Considering Biased Data as Informative Artifacts in AI-Assisted Health Care,” Ghassemi, Kadija Ferryman, and Maxine Mackintosh make the case for viewing biased clinical data as “artifacts” in the same way anthropologists or archeologists would view physical objects: pieces of civilization-revealing practices, belief systems, and cultural values — in the case of the paper, specifically those that have led to existing inequities in the health care system. 

    For example, a 2019 study showed that an algorithm widely considered to be an industry standard used health-care expenditures as an indicator of need, leading to the erroneous conclusion that sicker Black patients require the same level of care as healthier white patients. What the researchers had actually found was algorithmic discrimination: the algorithm failed to account for unequal access to care.
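
    A small, entirely hypothetical simulation captures that mechanism: give two groups identical medical need but unequal access to care (and hence unequal spending), then allocate a program by predicted cost.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 10_000
        group = rng.integers(0, 2, n)             # 0 = full access, 1 = reduced access
        need = rng.gamma(2.0, 1.0, n)             # true medical need, identical across groups
        access = np.where(group == 0, 1.0, 0.6)   # reduced access -> less spending per unit need
        cost = need * access * rng.lognormal(0.0, 0.2, n)

        # Use spending as the proxy label, as the 2019 algorithm effectively did:
        # enroll the top 10 percent of predicted cost in a care-management program.
        flagged = cost >= np.quantile(cost, 0.9)
        for g in (0, 1):
            m = group == g
            print(f"group {g}: mean need {need[m].mean():.2f}, flagged {flagged[m].mean():.1%}")
        # Equal need, but the reduced-access group is flagged far less often.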

    In this instance, rather than viewing biased datasets or lack of data as problems that only require disposal or fixing, Ghassemi and her colleagues recommend the “artifacts” approach as a way to raise awareness around social and historical elements influencing how data are collected and alternative approaches to clinical AI development. 

    “If the goal of your model is deployment in a clinical setting, you should engage a bioethicist or a clinician with appropriate training reasonably early on in problem formulation,” says Ghassemi. “As computer scientists, we often don’t have a complete picture of the different social and historical factors that have gone into creating data that we’ll be using. We need expertise in discerning when models generalized from existing data may not work well for specific subgroups.” 

    When more data can actually harm performance 

    The authors acknowledge that one of the more challenging aspects of implementing an artifact-based approach is being able to assess whether data have been racially corrected: i.e., using white, male bodies as the conventional standard that other bodies are measured against. The opinion piece cites an example from the Chronic Kidney Disease Epidemiology Collaboration in 2021, which developed a new equation to measure kidney function because the old equation had previously been “corrected” under the blanket assumption that Black people have higher muscle mass. Ghassemi says that researchers should be prepared to investigate race-based correction as part of the research process.

    In another recent paper, accepted to this year’s International Conference on Machine Learning and co-authored by Ghassemi’s PhD student Vinith Suriyakumar and University of California at San Diego Assistant Professor Berk Ustun, the researchers found that including personalized attributes like self-reported race, on the assumption that they improve the performance of ML models, can actually lead to worse risk scores, models, and metrics for minority and minoritized populations.

    “There’s no single right solution for whether or not to include self-reported race in a clinical risk score. Self-reported race is a social construct that is both a proxy for other information, and deeply proxied itself in other medical data. The solution needs to fit the evidence,” explains Ghassemi. 

    How to move forward 

    This is not to say that biased datasets should be enshrined, or biased algorithms don’t require fixing — quality training data is still key to developing safe, high-performance clinical AI models, and the NEJM piece highlights the role of the National Institutes of Health (NIH) in driving ethical practices.  

    “Generating high-quality, ethically sourced datasets is crucial for enabling the use of next-generation AI technologies that transform how we do research,” NIH acting director Lawrence Tabak stated in a press release when the NIH announced its $130 million Bridge2AI Program last year. Ghassemi agrees, pointing out that the NIH has “prioritized data collection in ethical ways that cover information we have not previously emphasized the value of in human health — such as environmental factors and social determinants. I’m very excited about their prioritization of, and strong investments towards, achieving meaningful health outcomes.” 

    Elaine Nsoesie, an associate professor at the Boston University School of Public Health, believes there are many potential benefits to treating biased datasets as artifacts rather than garbage, starting with the focus on context. “Biases present in a dataset collected for lung cancer patients in a hospital in Uganda might be different from a dataset collected in the U.S. for the same patient population,” she explains. “In considering local context, we can train algorithms to better serve specific populations.” Nsoesie says that understanding the historical and contemporary factors shaping a dataset can make it easier to identify discriminatory practices that might be coded in algorithms or systems in ways that are not immediately obvious. She also notes that an artifact-based approach could lead to the development of new policies and structures ensuring that the root causes of bias in a particular dataset are eliminated.

    “People often tell me that they are very afraid of AI, especially in health. They’ll say, ‘I’m really scared of an AI misdiagnosing me,’ or ‘I’m concerned it will treat me poorly,’” Ghassemi says. “I tell them, you shouldn’t be scared of some hypothetical AI in health tomorrow, you should be scared of what health is right now. If we take a narrow technical view of the data we extract from systems, we could naively replicate poor practices. That’s not the only option — realizing there is a problem is our first step towards a larger opportunity.”

  • Helping computer vision and language models understand what they see

    Powerful machine-learning algorithms known as vision and language models, which learn to match text with images, have shown remarkable results when asked to generate captions or summarize videos.

    While these models excel at identifying objects, they often struggle to understand concepts, like object attributes or the arrangement of items in a scene. For instance, a vision and language model might recognize the cup and table in an image, but fail to grasp that the cup is sitting on the table.

    Researchers from MIT, the MIT-IBM Watson AI Lab, and elsewhere have demonstrated a new technique that utilizes computer-generated data to help vision and language models overcome this shortcoming.

    The researchers created a synthetic dataset of images that depict a wide range of scenarios, object arrangements, and human actions, coupled with detailed text descriptions. They used this annotated dataset to “fix” vision and language models so they can learn concepts more effectively. Their technique ensures these models can still make accurate predictions when they see real images.

    When they tested models on concept understanding, the researchers found that their technique boosted accuracy by up to 10 percent. This could improve systems that automatically caption videos or enhance models that provide natural language answers to questions about images, with applications in fields like e-commerce or health care.

    “With this work, we are going beyond nouns in the sense that we are going beyond just the names of objects to more of the semantic concept of an object and everything around it. Our idea was that, when a machine-learning model sees objects in many different arrangements, it will have a better idea of how arrangement matters in a scene,” says Khaled Shehada, a graduate student in the Department of Electrical Engineering and Computer Science and co-author of a paper on this technique.

    Shehada wrote the paper with lead author Paola Cascante-Bonilla, a computer science graduate student at Rice University; Aude Oliva, director of strategic industry engagement at the MIT Schwarzman College of Computing, MIT director of the MIT-IBM Watson AI Lab, and a senior research scientist in the Computer Science and Artificial Intelligence Laboratory (CSAIL); senior author Leonid Karlinsky, a research staff member in the MIT-IBM Watson AI Lab; and others at MIT, the MIT-IBM Watson AI Lab, Georgia Tech, Rice University, École des Ponts, Weizmann Institute of Science, and IBM Research. The paper will be presented at the International Conference on Computer Vision.

    Focusing on objects

    Vision and language models typically learn to identify objects in a scene, and can end up ignoring object attributes, such as color and size, or positional relationships, such as which object is on top of another object.

    This is a byproduct of how these models are often trained, using a method known as contrastive learning, which forces a model to predict the correspondence between images and text. When comparing natural images, the objects in each scene tend to produce the most striking differences. (Perhaps one image shows a horse in a field while the second shows a sailboat on the water.)

    “Every image could be uniquely defined by the objects in the image. So, when you do contrastive learning, just focusing on the nouns and objects would solve the problem. Why would the model do anything differently?” says Karlinsky.
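
    For readers unfamiliar with the objective, here is a minimal sketch of a CLIP-style symmetric contrastive (InfoNCE) loss over a batch of matched image-caption embeddings. The names are illustrative, not taken from the paper’s code.

        import torch
        import torch.nn.functional as F

        def contrastive_loss(img_emb, txt_emb, temperature=0.07):
            """Symmetric InfoNCE: the i-th image should match the i-th caption."""
            img = F.normalize(img_emb, dim=-1)
            txt = F.normalize(txt_emb, dim=-1)
            logits = img @ txt.t() / temperature   # pairwise cosine similarities
            targets = torch.arange(len(img))       # matching pairs lie on the diagonal
            return (F.cross_entropy(logits, targets) +
                    F.cross_entropy(logits.t(), targets)) / 2

        # Toy batch: 8 image-caption pairs with 512-dimensional embeddings.
        loss = contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
        print(loss.item())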

    The researchers sought to mitigate this problem by using synthetic data to fine-tune a vision and language model. The fine-tuning process involves tweaking a model that has already been trained to improve its performance on a specific task.

    They used a computer to automatically create synthetic videos with diverse 3D environments and objects, such as furniture and luggage, and added human avatars that interacted with the objects.

    Using individual frames of these videos, they generated nearly 800,000 photorealistic images, and then paired each with a detailed caption. The researchers developed a methodology for annotating every aspect of the image to capture object attributes, positional relationships, and human-object interactions clearly and consistently in dense captions.

    Because the researchers created the images, they could control the appearance and position of objects, as well as the gender, clothing, poses, and actions of the human avatars.

    “Synthetic data allows a lot of diversity. With real images, you might not have a lot of elephants in a room, but with synthetic data, you could actually have a pink elephant in a room with a human, if you want,” Cascante-Bonilla says.

    Synthetic data have other advantages, too. They are cheaper to generate than real data, yet the images are highly photorealistic. They also preserve privacy because no real humans are shown in the images. And, because data are produced automatically by a computer, they can be generated quickly in massive quantities.

    By using different camera viewpoints, or slightly changing the positions or attributes of objects, the researchers created a dataset with a far wider variety of scenarios than one would find in a natural dataset.

    Fine-tune, but don’t forget

    However, when one fine-tunes a model with synthetic data, there is a risk that the model might “forget” what it learned when it was originally trained with real data.

    The researchers employed a few techniques to prevent this problem, such as adjusting the synthetic data so colors, lighting, and shadows more closely match those found in natural images. They also made adjustments to the model’s inner-workings after fine-tuning to further reduce any forgetfulness.
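
    The article does not spell out those adjustments. One common recipe in the vision-and-language literature, shown here purely as an illustration, is to interpolate in weight space between the original and fine-tuned checkpoints, retaining most of the pretrained behavior.

        import torch

        def interpolate_weights(pretrained, finetuned, alpha=0.5):
            """Return a state_dict of alpha * finetuned + (1 - alpha) * pretrained."""
            return {k: (1 - alpha) * pretrained[k] + alpha * finetuned[k]
                    for k in pretrained}

        # Usage with any pair of torch models sharing an architecture:
        # model.load_state_dict(interpolate_weights(base.state_dict(),
        #                                           tuned.state_dict(), alpha=0.3))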

    Their synthetic dataset and fine-tuning strategy improved the ability of popular vision and language models to accurately recognize concepts by up to 10 percent. At the same time, the models did not forget what they had already learned.

    Now that they have shown how synthetic data can be used to solve this problem, the researchers want to identify ways to improve the visual quality and diversity of these data, as well as the underlying physics that makes synthetic scenes look realistic. In addition, they plan to test the limits of scalability, and investigate whether model improvement starts to plateau with larger and more diverse synthetic datasets.

    This research is funded, in part, by the U.S. Defense Advanced Research Projects Agency, the National Science Foundation, and the MIT-IBM Watson AI Lab.

  • Advancing social studies at MIT Sloan

    Around 2010, Facebook was a relatively small company with about 2,000 employees. So, when a PhD student named Dean Eckles showed up to serve an internship at the firm, he landed in a position with some real duties.

    Eckles essentially became the primary data scientist for the product manager who was overseeing the platform’s news feeds. That manager would pepper Eckles with questions. How exactly do people influence each other online? If Facebook tweaked its content-ranking algorithms, what would happen? What occurs when you show people more photos?

    As a doctoral candidate already studying social influence, Eckles was well-equipped to think about such questions, and being at Facebook gave him a lot of data to study them. 

    “If you show people more photos, they post more photos themselves,” Eckles says. “In turn, that affects the experience of all their friends. Plus they’re getting more likes and more comments. It affects everybody’s experience. But can you account for all of these compounding effects across the network?”

    Eckles, now an associate professor in the MIT Sloan School of Management and an affiliate faculty member of the Institute for Data, Systems, and Society, has made a career out of thinking carefully about that last question. Studying social networks allows Eckles to tackle significant questions involving, for example, the economic and political effects of social networks, the spread of misinformation, vaccine uptake during the Covid-19 crisis, and other aspects of the formation and shape of social networks. For instance, one study he co-authored this summer shows that people who either move between U.S. states, change high schools, or attend college out of state, wind up with more robust social networks, which are strongly associated with greater economic success.

    Eckles maintains another research channel focused on what scholars call “causal inference,” the methods and techniques that allow researchers to identify cause-and-effect connections in the world.

    “Learning about cause-and-effect relationships is core to so much science,” Eckles says. “In behavioral, social, economic, or biomedical science, it’s going to be hard. When you start thinking about humans, causality gets difficult. People do things strategically, and they’re electing into situations based on their own goals, so that complicates a lot of cause-and-effect relationships.”

    Eckles has now published dozens of papers in each of his different areas of work; for his research and teaching, Eckles received tenure from MIT last year.

    Five degrees and a job

    Eckles grew up in California, mostly near the Lake Tahoe area. He attended Stanford University as an undergraduate, arriving on campus in fall 2002 — and didn’t really leave for about a decade. Eckles has five degrees from Stanford. As an undergrad, he received a BA in philosophy and a BS in symbolic systems, an interdisciplinary major combining computer science, philosophy, psychology, and more. Eckles was set to attend Oxford University for graduate work in philosophy but changed his mind and stayed at Stanford for an MS in symbolic systems too. 

    “[Oxford] might have been a great experience, but I decided to focus more on the tech side of things,” he says.

    After receiving his first master’s degree, Eckles did take a year off from school and worked for Nokia, although the firm’s offices were adjacent to the Stanford campus and Eckles would sometimes stop and talk to faculty during the workday. Soon he was enrolled at Stanford again, this time earning his PhD in communication, in 2012, while receiving an MA in statistics the year before. His doctoral dissertation wound up being about peer influence in networks. PhD in hand, Eckles promptly headed back to Facebook, this time for three years as a full-time researcher.

    “They were really supportive of the work I was doing,” Eckles says.

    Still, Eckles remained interested in moving into academia, and joined the MIT faculty in 2017 with a position in MIT Sloan’s Marketing Group. The group consists of a set of scholars with far-ranging interests, from cognitive science to advertising to social network dynamics.

    “Our group reflects something deeper about the Sloan school and about MIT as well, an openness to doing things differently and not having to fit into narrowly defined tracks,” Eckles says.

    For that matter, MIT has many faculty in different domains who work on causal inference, and whose work Eckles quickly cites — including economists Victor Chernozhukov and Alberto Abadie, and Joshua Angrist, whose book “Mostly Harmless Econometrics” Eckles name-checks as an influence.

    “I’ve been fortunate in my career that causal inference turned out to be a hot area,” Eckles says. “But I think it’s hot for good reasons. People started to realize that, yes, causal inference is really important. There are economists, computer scientists, statisticians, and epidemiologists who are going to the same conferences and citing each other’s papers. There’s a lot happening.”

    How do networks form?

    These days, Eckles is interested in expanding the questions he works on. In the past, he has often studied existing social networks and looked at their effects. For instance: One study Eckles co-authored, examining the 2012 U.S. elections, found that get-out-the-vote messages work very well, especially when relayed via friends.

    That kind of study takes the existence of the network as a given, though. Another kind of research question is, as Eckles puts it, “How do social networks form and evolve? And what are the consequences of these network structures?” His recent study about social networks expanding as people move around and change schools is one example of research that digs into the core life experiences underlying social networks.

    “I’m excited about doing more on how these networks arise and what factors, including everything from personality to public transit, affect their formation,” Eckles says.

    Understanding more about how social networks form gets at key questions about social life and civic structure. Suppose research shows how some people develop and maintain beneficial connections in life; it’s possible that those insights could be applied to programs helping people in more disadvantaged situations realize some of the same opportunities.

    “We want to act on things,” Eckles says. “Sometimes people say, ‘We care about prediction.’ I would say, ‘We care about prediction under intervention.’ We want to predict what’s going to happen if we try different things.”
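
    A textbook toy example (ours, not Eckles’s) makes the distinction concrete: in observational data a variable can predict an outcome well, yet intervening on it changes nothing, because a confounder drives both.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 100_000
        temp = rng.normal(20, 5, n)               # confounder: temperature
        sales = 2.0 * temp + rng.normal(0, 2, n)  # ice-cream sales
        drown = 0.5 * temp + rng.normal(0, 1, n)  # drownings: no causal arrow from sales

        # Passive prediction: sales is an excellent predictor of drownings.
        print("correlation:", np.corrcoef(sales, drown)[0, 1])  # ~0.9

        # Under the intervention do(sales := sales + 10), drownings don't move,
        # but a naive regression predicts an increase.
        slope = np.cov(sales, drown)[0, 1] / np.var(sales)
        print("naive predicted change:", slope * 10)            # > 0
        print("true causal change: 0.0")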

    Ultimately, Eckles reflects, “Trying to reason about the origins and maintenance of social networks, and the effects of networks, is interesting substantively and methodologically. Networks are super-high-dimensional objects, even just a single person’s network and all its connections. You have to summarize it, so for instance we talk about weak ties or strong ties, but do we have the correct description? There are fascinating questions that require development, and I’m eager to keep working on them.”

  • M’Care and MIT students join forces to improve child health in Nigeria

    Through a collaboration between M’Care, a 2021 Health Security and Pandemics Solver team, and students from MIT, the landscape of child health care in Nigeria could undergo a transformative change, wherein the power of data is harnessed to improve child health outcomes in economically disadvantaged communities. 

    M’Care is a mobile application of Promane and Promade Limited, developed by Opeoluwa Ashimi, which gives community health workers in Nigeria real-time diagnostic and treatment support. The application also creates a dashboard that is available to government health officials to help identify disease trends and deploy timely interventions. As part of its work, M’Care is working to mitigate malnutrition by providing micronutrient powder, vitamin A, and zinc to children below the age of 5. To help deepen its impact, Ashimi decided to work with students in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) course 6.S897 (Machine Learning for Healthcare) — instructed by professors Peter Szolovits and Manolis Kellis — to leverage data in order to improve nutrient delivery to children across Nigeria. The collaboration also enabled students to see real-world applications for data analysis in the health care space.

    A meeting of minds: M’Care, MIT, and national health authorities

    “Our primary goal for collaborating with the ML for Health team was to spot the missing link in the continuum of care. With over 1 million cumulative consultations that qualify for a continuum of care evaluation, it was important to spot why patients could be lost to follow-up, prevent this, and ensure completion of care to successfully address the health needs of our patients,” says Ashimi, founder and CEO of M’Care.

    In May 2023, Ashimi attended a meeting that brought together key national stakeholders, including representatives of the National Ministry of Health in Nigeria. The gathering served as a platform to discuss the impact of the collaboration between M’Care and the ML for Health team: analysis of how dosage regimens and a child’s age affect the continuum of care, and with it children’s health, particularly brain development supported by essential micronutrients. The students’ ML-based analyses, shared during the meeting, provided strong supporting evidence for individualizing dosage regimens based on a child’s age in months for the ANRIN project — a national nutrition project supported by the World Bank — as well as for policy decisions to extend months of coverage for children, redefining health care practices in Nigeria.

    MIT students drive change by harnessing the power of data

    At the heart of this collaboration lies the contribution of MIT students. Armed with their dedication and skill in data analysis and machine learning, they played a pivotal role in helping M’Care analyze its data and prepare for its meeting with the Ministry of Health. Their most significant findings included ways to identify patients at risk of not completing their full course of micronutrient powder and/or vitamin A, and gaps in M’Care’s data, such as postdated delivery dates and community demographics. These findings are already helping M’Care better plan its resources and adjust the scope of its program to ensure more children complete the intervention.
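
    As a sketch of that first kind of finding, here is a hypothetical follow-up-risk model on invented features; the team’s actual analysis and variables may differ.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n = 5_000
        age_months = rng.integers(1, 60, n)
        visits = rng.poisson(3, n)
        km_to_clinic = rng.gamma(2.0, 3.0, n)
        # Synthetic outcome: completion is less likely with older enrollment age,
        # fewer prior visits, and longer travel distance.
        logit = 1.5 - 0.03 * age_months + 0.4 * visits - 0.1 * km_to_clinic
        completed = rng.random(n) < 1 / (1 + np.exp(-logit))

        X = np.column_stack([age_months, visits, km_to_clinic])
        X_tr, X_te, y_tr, y_te = train_test_split(X, completed, random_state=0)
        model = LogisticRegression().fit(X_tr, y_tr)

        # Flag the 20 percent of held-out children least likely to complete,
        # for extra outreach by community health workers.
        p = model.predict_proba(X_te)[:, 1]
        flagged = p <= np.quantile(p, 0.2)
        print(f"flagged {flagged.sum()} children; "
              f"completion among flagged: {y_te[flagged].mean():.1%}")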

    Darcy Kim, an undergraduate at Wellesley College studying math and computer science, who is cross-registered for the MIT machine learning course, expresses enthusiasm about the practical applications found within the project: “To me, data and math is storytelling, and the story is why I love studying it. … I learned that data exploration involves asking questions about how the data is collected, and that surprising patterns that arise often have a qualitative explanation. Impactful research requires radical collaboration with the people the research intends to help. Otherwise, these qualitative explanations get lost in the numbers.”

    Joyce Luo, a first-year operations research PhD student at the Operations Research Center at MIT, shares similar thoughts about the project: “I learned the importance of understanding the context behind data to figure out what kind of analysis might be most impactful. This involves being in frequent contact with the company or organization who provides the data to learn as much as you can about how the data was collected and the people the analysis could help. Stepping back and looking at the bigger picture, rather than just focusing on accuracy or metrics, is extremely important.”

    Insights to implementation: A new era for micronutrient dosing

    As a direct result of M’Care’s collaboration with MIT, policymakers revamped the dosing scheme for essential micronutrient administration for children in Nigeria to prevent malnutrition. M’Care and MIT’s data analysis unearthed critical insights into the limited frequency of medical visits caused by late-age enrollment. 

    “One big takeaway for me was that the data analysis portion of the project — doing a deep dive into the data; understanding, analyzing, visualizing, and summarizing the data — can be just as important as building the machine learning models. M’Care shared our data analysis with the National Ministry of Health, and the insights from it drove them to change their dosing scheme and schedule for delivering micronutrient powder to young children. This really showed us the value of understanding and knowing your data before modeling,” shares Angela Lin, a second-year PhD student at the Operations Research Center.

    Armed with this knowledge, policymakers are eager to develop an optimized dosing scheme that caters to the unique needs of children in disadvantaged communities, ensuring maximum impact on their brain development and overall well-being.

    Siddharth Srivastava, M’Care’s corporate technology liaison, shares his gratitude for the MIT students’ input. “Collaborating with enthusiastic and driven students was both empowering and inspiring. Each of them brought unique perspectives and technical skills to the table. Their passion for applying machine learning to health care was evident in their unwavering dedication and proactive approach to problem-solving.”

    Forging a path to impact

    The collaboration between M’Care and MIT exemplifies the remarkable achievements that arise when academia, innovative problem-solvers, and policy authorities unite. By merging academic rigor with real-world expertise, this partnership has the potential to revolutionize child health care not only in Nigeria but also in similar contexts worldwide.

    “I believe applying innovative methods of machine learning, data gathering, instrumentation, and planning to real problems in the developing world can be highly effective for those countries and highly motivating for our students. I was happy to have such a project in our class portfolio this year and look forward to future opportunities,” says Peter Szolovits, professor of computer science and engineering at MIT.

    By harnessing the power of data, innovation, and collective expertise, this collaboration between M’Care and MIT has the potential to improve equitable child health care in Nigeria. “It has been so fulfilling to see how our team’s work has been able to create even the smallest positive impact in such a short period of time, and it has been amazing to work with a company like Promane and Promade Limited that is so knowledgeable and caring for the communities that they serve,” shares Elizabeth Whittier, a second-year PhD electrical engineering student at MIT.

  • Artificial intelligence for augmentation and productivity

    The MIT Stephen A. Schwarzman College of Computing has awarded seed grants to seven projects that are exploring how artificial intelligence and human-computer interaction can be leveraged to enhance modern work spaces to achieve better management and higher productivity.

    Funded by Andrew W. Houston ’05 and Dropbox Inc., the projects are intended to be interdisciplinary and bring together researchers from computing, social sciences, and management.

    The seed grants can enable the project teams to conduct research that leads to bigger endeavors in this rapidly evolving area, as well as build community around questions related to AI-augmented management.

    The seven selected projects and research leads include:

    “LLMex: Implementing Vannevar Bush’s Vision of the Memex Using Large Language Models,” led by Pattie Maes of the Media Lab and David Karger of the Department of Electrical Engineering and Computer Science (EECS) and the Computer Science and Artificial Intelligence Laboratory (CSAIL). Inspired by Vannevar Bush’s Memex, this project proposes to design, implement, and test the concept of memory prosthetics using large language models (LLMs). The AI-based system will intelligently help an individual keep track of vast amounts of information, accelerate productivity, and reduce errors by automatically recording their work actions and meetings, supporting retrieval based on metadata and vague descriptions, and suggesting relevant, personalized information proactively based on the user’s current focus and context.

    “Using AI Agents to Simulate Social Scenarios,” led by John Horton of the MIT Sloan School of Management and Jacob Andreas of EECS and CSAIL. This project imagines the ability to easily simulate policies, organizational arrangements, and communication tools with AI agents before implementation. Tapping into the capabilities of modern LLMs to serve as a computational model of humans makes this vision of social simulation more realistic, and potentially more predictive.

    “Human Expertise in the Age of AI: Can We Have Our Cake and Eat it Too?” led by Manish Raghavan of MIT Sloan and EECS, and Devavrat Shah of EECS and the Laboratory for Information and Decision Systems. Progress in machine learning, AI, and in algorithmic decision aids has raised the prospect that algorithms may complement human decision-making in a wide variety of settings. Rather than replacing human professionals, this project sees a future where AI and algorithmic decision aids play a role that is complementary to human expertise.

    “Implementing Generative AI in U.S. Hospitals,” led by Julie Shah of the Department of Aeronautics and Astronautics and CSAIL, Retsef Levi of MIT Sloan and the Operations Research Center, Kate Kellogg of MIT Sloan, and Ben Armstrong of the Industrial Performance Center. In recent years, studies have linked a rise in burnout from doctors and nurses in the United States with increased administrative burdens associated with electronic health records and other technologies. This project aims to develop a holistic framework to study how generative AI technologies can both increase productivity for organizations and improve job quality for workers in health care settings.

    “Generative AI Augmented Software Tools to Democratize Programming,” led by Harold Abelson of EECS and CSAIL, Cynthia Breazeal of the Media Lab, and Eric Klopfer of Comparative Media Studies/Writing. Progress in generative AI over the past year is fomenting an upheaval in assumptions about future careers in software and deprecating the role of coding. This project will stimulate a similar transformation in computing education for those who have no prior technical training by creating a software tool that could eliminate much of the need for learners to deal with code when creating applications.

    “Acquiring Expertise and Societal Productivity in a World of Artificial Intelligence,” led by David Atkin and Martin Beraja of the Department of Economics, and Danielle Li of MIT Sloan. Generative AI is thought to augment the capabilities of workers performing cognitive tasks. This project seeks to better understand how the arrival of AI technologies may impact skill acquisition and productivity, and to explore complementary policy interventions that will allow society to maximize the gains from such technologies.

    “AI Augmented Onboarding and Support,” led by Tim Kraska of EECS and CSAIL, and Christoph Paus of the Department of Physics. While LLMs have made enormous leaps forward in recent years and are poised to fundamentally change the way students and professionals learn about new tools and systems, there is often a steep learning curve that people must climb in order to make full use of the resource. To help mitigate the issue, this project proposes the development of new LLM-powered onboarding and support systems that will positively impact the way support teams operate and improve the user experience.