More stories

  • How an archeological approach can help leverage biased data in AI to improve medicine

    The classic computer science adage “garbage in, garbage out” lacks nuance when it comes to understanding biased medical data, argue computer science and bioethics professors from MIT, Johns Hopkins University, and the Alan Turing Institute in a new opinion piece published in a recent edition of the New England Journal of Medicine (NEJM). The rising popularity of artificial intelligence has brought increased scrutiny to the matter of biased AI models resulting in algorithmic discrimination, which the White House Office of Science and Technology Policy identified as a key issue in its recent Blueprint for an AI Bill of Rights.

    When encountering biased data, particularly for AI models used in medical settings, the typical response is to either collect more data from underrepresented groups or generate synthetic data to make up for the missing parts, ensuring that the model performs equally well across an array of patient populations. But the authors argue that this technical approach should be augmented with a sociotechnical perspective that takes both historical and current social factors into account. By doing so, researchers can be more effective in addressing bias in public health.

    “The three of us had been discussing the ways in which we often treat issues with data from a machine learning perspective as irritations that need to be managed with a technical solution,” recalls co-author Marzyeh Ghassemi, an assistant professor in electrical engineering and computer science and an affiliate of the Abdul Latif Jameel Clinic for Machine Learning in Health (Jameel Clinic), the Computer Science and Artificial Intelligence Laboratory (CSAIL), and Institute of Medical Engineering and Science (IMES). “We had used analogies of data as an artifact that gives a partial view of past practices, or a cracked mirror holding up a reflection. In both cases the information is perhaps not entirely accurate or favorable: Maybe we think that we behave in certain ways as a society — but when you actually look at the data, it tells a different story. We might not like what that story is, but once you unearth an understanding of the past you can move forward and take steps to address poor practices.” 

    Data as artifact 

    In the paper, titled “Considering Biased Data as Informative Artifacts in AI-Assisted Health Care,” Ghassemi, Kadija Ferryman, and Maxine Mackintosh make the case for viewing biased clinical data as “artifacts” in the same way anthropologists or archeologists would view physical objects: pieces of civilization-revealing practices, belief systems, and cultural values — in the case of the paper, specifically those that have led to existing inequities in the health care system. 

    For example, a 2019 study showed that an algorithm widely considered an industry standard used health-care expenditures as an indicator of need, leading to the erroneous conclusion that sicker Black patients require the same level of care as healthier white patients. The algorithmic discrimination researchers uncovered stemmed from a failure to account for unequal access to care.

    In this instance, rather than viewing biased datasets or lack of data as problems that only require disposal or fixing, Ghassemi and her colleagues recommend the “artifacts” approach as a way to raise awareness around social and historical elements influencing how data are collected and alternative approaches to clinical AI development. 

    “If the goal of your model is deployment in a clinical setting, you should engage a bioethicist or a clinician with appropriate training reasonably early on in problem formulation,” says Ghassemi. “As computer scientists, we often don’t have a complete picture of the different social and historical factors that have gone into creating data that we’ll be using. We need expertise in discerning when models generalized from existing data may not work well for specific subgroups.” 

    When more data can actually harm performance 

    The authors acknowledge that one of the more challenging aspects of implementing an artifact-based approach is being able to assess whether data have been race-corrected: that is, adjusted under the assumption that white, male bodies are the conventional standard against which other bodies are measured. The opinion piece cites an example from the Chronic Kidney Disease Epidemiology Collaboration in 2021, which developed a new equation to measure kidney function because the old equation had previously been “corrected” under the blanket assumption that Black people have higher muscle mass. Ghassemi says that researchers should be prepared to investigate race-based correction as part of the research process.

    In another recent paper, accepted to this year’s International Conference on Machine Learning and co-authored by Ghassemi’s PhD student Vinith Suriyakumar and University of California at San Diego Assistant Professor Berk Ustun, the researchers found that assuming the inclusion of personalized attributes like self-reported race improves the performance of ML models can actually lead to worse risk scores, models, and metrics for minority and minoritized populations.

    “There’s no single right solution for whether or not to include self-reported race in a clinical risk score. Self-reported race is a social construct that is both a proxy for other information, and deeply proxied itself in other medical data. The solution needs to fit the evidence,” explains Ghassemi. 

    How to move forward 

    This is not to say that biased datasets should be enshrined, or that biased algorithms don’t require fixing — quality training data are still key to developing safe, high-performance clinical AI models, and the NEJM piece highlights the role of the National Institutes of Health (NIH) in driving ethical practices.

    “Generating high-quality, ethically sourced datasets is crucial for enabling the use of next-generation AI technologies that transform how we do research,” NIH acting director Lawrence Tabak stated in a press release when the NIH announced its $130 million Bridge2AI Program last year. Ghassemi agrees, pointing out that the NIH has “prioritized data collection in ethical ways that cover information we have not previously emphasized the value of in human health — such as environmental factors and social determinants. I’m very excited about their prioritization of, and strong investments towards, achieving meaningful health outcomes.” 

    Elaine Nsoesie, an associate professor at the Boston University School of Public Health, believes there are many potential benefits to treating biased datasets as artifacts rather than garbage, starting with the focus on context. “Biases present in a dataset collected for lung cancer patients in a hospital in Uganda might be different from a dataset collected in the U.S. for the same patient population,” she explains. “In considering local context, we can train algorithms to better serve specific populations.” Nsoesie says that understanding the historical and contemporary factors shaping a dataset can make it easier to identify discriminatory practices that might be coded in algorithms or systems in ways that are not immediately obvious. She also notes that an artifact-based approach could lead to the development of new policies and structures ensuring that the root causes of bias in a particular dataset are eliminated.

    “People often tell me that they are very afraid of AI, especially in health. They’ll say, ‘I’m really scared of an AI misdiagnosing me,’ or ‘I’m concerned it will treat me poorly,’” Ghassemi says. “I tell them, you shouldn’t be scared of some hypothetical AI in health tomorrow, you should be scared of what health is right now. If we take a narrow technical view of the data we extract from systems, we could naively replicate poor practices. That’s not the only option — realizing there is a problem is our first step towards a larger opportunity.”

  • Helping computer vision and language models understand what they see

    Powerful machine-learning algorithms known as vision and language models, which learn to match text with images, have shown remarkable results when asked to generate captions or summarize videos.

    While these models excel at identifying objects, they often struggle to understand concepts like object attributes or the arrangement of items in a scene. For instance, a vision and language model might recognize the cup and table in an image, but fail to grasp that the cup is sitting on the table.

    Researchers from MIT, the MIT-IBM Watson AI Lab, and elsewhere have demonstrated a new technique that utilizes computer-generated data to help vision and language models overcome this shortcoming.

    The researchers created a synthetic dataset of images that depict a wide range of scenarios, object arrangements, and human actions, coupled with detailed text descriptions. They used this annotated dataset to “fix” vision and language models so they can learn concepts more effectively. Their technique ensures these models can still make accurate predictions when they see real images.

    When they tested models on concept understanding, the researchers found that their technique boosted accuracy by up to 10 percent. This could improve systems that automatically caption videos or enhance models that provide natural language answers to questions about images, with applications in fields like e-commerce or health care.

    “With this work, we are going beyond nouns in the sense that we are going beyond just the names of objects to more of the semantic concept of an object and everything around it. Our idea was that, when a machine-learning model sees objects in many different arrangements, it will have a better idea of how arrangement matters in a scene,” says Khaled Shehada, a graduate student in the Department of Electrical Engineering and Computer Science and co-author of a paper on this technique.

    Shehada wrote the paper with lead author Paola Cascante-Bonilla, a computer science graduate student at Rice University; Aude Oliva, director of strategic industry engagement at the MIT Schwarzman College of Computing, MIT director of the MIT-IBM Watson AI Lab, and a senior research scientist in the Computer Science and Artificial Intelligence Laboratory (CSAIL); senior author Leonid Karlinsky, a research staff member in the MIT-IBM Watson AI Lab; and others at MIT, the MIT-IBM Watson AI Lab, Georgia Tech, Rice University, École des Ponts, Weizmann Institute of Science, and IBM Research. The paper will be presented at the International Conference on Computer Vision.

    Focusing on objects

    Vision and language models typically learn to identify objects in a scene, and can end up ignoring object attributes, such as color and size, or positional relationships, such as which object is on top of another object.

    This is due to the way these models are often trained, a method known as contrastive learning, which forces a model to predict the correspondence between images and text. When comparing natural images, the objects in each scene tend to cause the most striking differences. (Perhaps one image shows a horse in a field while the second shows a sailboat on the water.)

    “Every image could be uniquely defined by the objects in the image. So, when you do contrastive learning, just focusing on the nouns and objects would solve the problem. Why would the model do anything differently?” says Karlinsky.
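    The contrastive objective Karlinsky describes can be sketched in a few lines. This is a generic, CLIP-style formulation in plain NumPy, not the authors’ actual training code: each image embedding is pushed toward its paired caption embedding and away from every other caption in the batch.

    ```python
    import numpy as np

    def contrastive_loss(image_emb, text_emb, temperature=0.07):
        """Symmetric contrastive (InfoNCE-style) loss over paired embeddings.

        Row i of image_emb is trained to match row i of text_emb and to
        mismatch every other row in the batch.
        """
        # L2-normalize so dot products are cosine similarities
        img = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
        txt = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
        logits = img @ txt.T / temperature      # (batch, batch) similarity matrix
        labels = np.arange(len(logits))         # matching pairs lie on the diagonal

        def cross_entropy(l, y):
            l = l - l.max(axis=1, keepdims=True)   # numerical stability
            log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
            return -log_probs[np.arange(len(y)), y].mean()

        # average the image-to-text and text-to-image directions
        return (cross_entropy(logits, labels) + cross_entropy(logits.T, labels)) / 2
    ```

    Because the objects in each image are usually enough to tell batch items apart, nothing in this objective forces the model to encode attributes or spatial relationships — which is precisely the failure mode the researchers target with their synthetic captions.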

    The researchers sought to mitigate this problem by using synthetic data to fine-tune a vision and language model. The fine-tuning process involves tweaking a model that has already been trained to improve its performance on a specific task.

    They used a computer to automatically create synthetic videos with diverse 3D environments and objects, such as furniture and luggage, and added human avatars that interacted with the objects.

    Using individual frames of these videos, they generated nearly 800,000 photorealistic images, and then paired each with a detailed caption. The researchers developed a methodology for annotating every aspect of the image to capture object attributes, positional relationships, and human-object interactions clearly and consistently in dense captions.

    Because the researchers created the images, they could control the appearance and position of objects, as well as the gender, clothing, poses, and actions of the human avatars.

    “Synthetic data allows a lot of diversity. With real images, you might not have a lot of elephants in a room, but with synthetic data, you could actually have a pink elephant in a room with a human, if you want,” Cascante-Bonilla says.

    Synthetic data have other advantages, too. They are cheaper to generate than real data, yet the images are highly photorealistic. They also preserve privacy because no real humans are shown in the images. And, because data are produced automatically by a computer, they can be generated quickly in massive quantities.

    By using different camera viewpoints, or slightly changing the positions or attributes of objects, the researchers created a dataset with a far wider variety of scenarios than one would find in a natural dataset.

    Fine-tune, but don’t forget

    However, when one fine-tunes a model with synthetic data, there is a risk that the model might “forget” what it learned when it was originally trained with real data.

    The researchers employed a few techniques to prevent this problem, such as adjusting the synthetic data so colors, lighting, and shadows more closely match those found in natural images. They also made adjustments to the model’s inner workings after fine-tuning to further reduce forgetting.
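    The article doesn’t specify what those post-fine-tuning adjustments are, but one common, minimal way to pull a fine-tuned model back toward its original behavior is to interpolate its weights with the pretrained checkpoint. The sketch below is illustrative, not the paper’s method:

    ```python
    import numpy as np

    def interpolate_weights(pretrained, finetuned, alpha=0.5):
        """Blend pretrained and fine-tuned parameter dicts.

        alpha=0 keeps the pretrained weights (no fine-tuning retained);
        alpha=1 keeps the fine-tuned weights; values in between trade off
        new concept knowledge against forgetting of the original training.
        """
        return {name: (1 - alpha) * pretrained[name] + alpha * finetuned[name]
                for name in pretrained}
    ```

    In practice the blending coefficient would be tuned on held-out real images, so the model keeps its gains on concept understanding without losing accuracy on the data it was originally trained on.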

    Their synthetic dataset and fine-tuning strategy improved the ability of popular vision and language models to accurately recognize concepts by up to 10 percent. At the same time, the models did not forget what they had already learned.

    Now that they have shown how synthetic data can be used to solve this problem, the researchers want to identify ways to improve the visual quality and diversity of these data, as well as the underlying physics that makes synthetic scenes look realistic. In addition, they plan to test the limits of scalability, and investigate whether model improvement starts to plateau with larger and more diverse synthetic datasets.

    This research is funded, in part, by the U.S. Defense Advanced Research Projects Agency, the National Science Foundation, and the MIT-IBM Watson AI Lab.

  • Fast-tracking fusion energy’s arrival with AI and accessibility

    As the impacts of climate change continue to grow, so does interest in fusion’s potential as a clean energy source. While fusion reactions have been studied in laboratories since the 1930s, there are still many critical questions scientists must answer to make fusion power a reality, and time is of the essence. As part of its strategy to accelerate fusion energy’s arrival and reach carbon neutrality by 2050, the U.S. Department of Energy (DOE) has announced new funding for a project led by researchers at MIT’s Plasma Science and Fusion Center (PSFC) and four collaborating institutions.

    Cristina Rea, a research scientist and group leader at the PSFC, will serve as the principal investigator for the newly funded three-year collaboration to pilot the integration of fusion data into a system that can be read by AI-powered tools. The PSFC, together with scientists from William & Mary, the University of Wisconsin at Madison, Auburn University, and the nonprofit HDF Group, plan to create a holistic fusion data platform, the elements of which could offer unprecedented access for researchers, especially underrepresented students. The project aims to encourage diverse participation in fusion and data science, both in academia and the workforce, through outreach programs led by the group’s co-investigators, of whom four out of five are women.

    The DOE’s award, part of a $29 million funding package for seven projects across 19 institutions, will support the group’s efforts to distribute data produced by fusion devices like the PSFC’s Alcator C-Mod, a donut-shaped “tokamak” that utilized powerful magnets to control and confine fusion reactions. Alcator C-Mod operated from 1991 to 2016, and its data are still being studied, thanks in part to the PSFC’s commitment to the free exchange of knowledge.

    Currently, there are nearly 50 public experimental magnetic confinement-type fusion devices; however, both historical and current data from these devices can be difficult to access. Some fusion databases require signing user agreements, and not all data are catalogued and organized the same way. Moreover, it can be difficult to leverage machine learning, a class of AI tools, for data analysis and to enable scientific discovery without time-consuming data reorganization. The result is fewer scientists working on fusion, greater barriers to discovery, and a bottleneck in harnessing AI to accelerate progress.

    The project’s proposed data platform addresses technical barriers by being FAIR — Findable, Accessible, Interoperable, Reusable — and by adhering to UNESCO’s Open Science (OS) recommendations to improve the transparency and inclusivity of science; all of the researchers’ deliverables will adhere to FAIR and OS principles, as required by the DOE. The platform’s databases will be built using MDSplusML, an upgraded version of the MDSplus open-source software developed by PSFC researchers in the 1980s to catalogue the results of Alcator C-Mod’s experiments. Today, nearly 40 fusion research institutes use MDSplus to store and provide external access to their fusion data. The release of MDSplusML aims to continue that legacy of open collaboration.

    The researchers intend to address barriers to participation for women and disadvantaged groups not only by improving general access to fusion data, but also through a subsidized summer school that will focus on topics at the intersection of fusion and machine learning, which will be held at William & Mary for the next three years.

    Of the importance of their research, Rea says, “This project is about responding to the fusion community’s needs and setting ourselves up for success. Scientific advancements in fusion are enabled via multidisciplinary collaboration and cross-pollination, so accessibility is absolutely essential. I think we all understand now that diverse communities have more diverse ideas, and they allow faster problem-solving.”

    The collaboration’s work also aligns with vital areas of research identified in the International Atomic Energy Agency’s “AI for Fusion” Coordinated Research Project (CRP). Rea was selected as the technical coordinator for the IAEA’s CRP emphasizing community engagement and knowledge access to accelerate fusion research and development. In a letter of support written for the group’s proposed project, the IAEA stated that, “the work [the researchers] will carry out […] will be beneficial not only to our CRP but also to the international fusion community in large.”

    PSFC Director and Hitachi America Professor of Engineering Dennis Whyte adds, “I am thrilled to see PSFC and our collaborators be at the forefront of applying new AI tools while simultaneously encouraging and enabling extraction of critical data from our experiments.”

    “Having the opportunity to lead such an important project is extremely meaningful, and I feel a responsibility to show that women are leaders in STEM,” says Rea. “We have an incredible team, strongly motivated to improve our fusion ecosystem and to contribute to making fusion energy a reality.”

  • Advancing social studies at MIT Sloan

    Around 2010, Facebook was a relatively small company with about 2,000 employees. So, when a PhD student named Dean Eckles showed up to serve an internship at the firm, he landed in a position with some real duties.

    Eckles essentially became the primary data scientist for the product manager who was overseeing the platform’s news feeds. That manager would pepper Eckles with questions. How exactly do people influence each other online? If Facebook tweaked its content-ranking algorithms, what would happen? What occurs when you show people more photos?

    As a doctoral candidate already studying social influence, Eckles was well-equipped to think about such questions, and being at Facebook gave him a lot of data to study them. 

    “If you show people more photos, they post more photos themselves,” Eckles says. “In turn, that affects the experience of all their friends. Plus they’re getting more likes and more comments. It affects everybody’s experience. But can you account for all of these compounding effects across the network?”

    Eckles, now an associate professor in the MIT Sloan School of Management and an affiliate faculty member of the Institute for Data, Systems, and Society, has made a career out of thinking carefully about that last question. Studying social networks allows Eckles to tackle significant questions involving, for example, the economic and political effects of social networks, the spread of misinformation, vaccine uptake during the Covid-19 crisis, and other aspects of the formation and shape of social networks. For instance, one study he co-authored this summer shows that people who move between U.S. states, change high schools, or attend college out of state wind up with more robust social networks, which are strongly associated with greater economic success.

    Eckles maintains another research channel focused on what scholars call “causal inference,” the methods and techniques that allow researchers to identify cause-and-effect connections in the world.

    “Learning about cause-and-effect relationships is core to so much science,” Eckles says. “In behavioral, social, economic, or biomedical science, it’s going to be hard. When you start thinking about humans, causality gets difficult. People do things strategically, and they’re electing into situations based on their own goals, so that complicates a lot of cause-and-effect relationships.”

    Eckles has now published dozens of papers in each of his different areas of work; for his research and teaching, Eckles received tenure from MIT last year.

    Five degrees and a job

    Eckles grew up in California, mostly near the Lake Tahoe area. He attended Stanford University as an undergraduate, arriving on campus in fall 2002 — and didn’t really leave for about a decade. Eckles has five degrees from Stanford. As an undergrad, he received a BA in philosophy and a BS in symbolic systems, an interdisciplinary major combining computer science, philosophy, psychology, and more. Eckles was set to attend Oxford University for graduate work in philosophy but changed his mind and stayed at Stanford for an MS in symbolic systems too. 

    “[Oxford] might have been a great experience, but I decided to focus more on the tech side of things,” he says.

    After receiving his first master’s degree, Eckles did take a year off from school and worked for Nokia, although the firm’s offices were adjacent to the Stanford campus and Eckles would sometimes stop and talk to faculty during the workday. Soon he was enrolled at Stanford again, this time earning his PhD in communication, in 2012, while receiving an MA in statistics the year before. His doctoral dissertation wound up being about peer influence in networks. PhD in hand, Eckles promptly headed back to Facebook, this time for three years as a full-time researcher.

     “They were really supportive of the work I was doing,” Eckles says.

    Still, Eckles remained interested in moving into academia, and joined the MIT faculty in 2017 with a position in MIT Sloan’s Marketing Group. The group consists of a set of scholars with far-ranging interests, from cognitive science to advertising to social network dynamics.

    “Our group reflects something deeper about the Sloan school and about MIT as well, an openness to doing things differently and not having to fit into narrowly defined tracks,” Eckles says.

    For that matter, MIT has many faculty in different domains who work on causal inference, and whose work Eckles quickly cites — including economists Victor Chernozhukov and Alberto Abadie, and Joshua Angrist, whose book “Mostly Harmless Econometrics” Eckles name-checks as an influence.

    “I’ve been fortunate in my career that causal inference turned out to be a hot area,” Eckles says. “But I think it’s hot for good reasons. People started to realize that, yes, causal inference is really important. There are economists, computer scientists, statisticians, and epidemiologists who are going to the same conferences and citing each other’s papers. There’s a lot happening.”

    How do networks form?

    These days, Eckles is interested in expanding the questions he works on. In the past, he has often studied existing social networks and looked at their effects. For instance: One study Eckles co-authored, examining the 2012 U.S. elections, found that get-out-the-vote messages work very well, especially when relayed via friends.

    That kind of study takes the existence of the network as a given, though. Another kind of research question is, as Eckles puts it, “How do social networks form and evolve? And what are the consequences of these network structures?” His recent study about social networks expanding as people move around and change schools is one example of research that digs into the core life experiences underlying social networks.

    “I’m excited about doing more on how these networks arise and what factors, including everything from personality to public transit, affect their formation,” Eckles says.

    Understanding more about how social networks form gets at key questions about social life and civic structure. Suppose research shows how some people develop and maintain beneficial connections in life; it’s possible that those insights could be applied to programs helping people in more disadvantaged situations realize some of the same opportunities.

    “We want to act on things,” Eckles says. “Sometimes people say, ‘We care about prediction.’ I would say, ‘We care about prediction under intervention.’ We want to predict what’s going to happen if we try different things.”

    Ultimately, Eckles reflects, “Trying to reason about the origins and maintenance of social networks, and the effects of networks, is interesting substantively and methodologically. Networks are super-high-dimensional objects, even just a single person’s network and all its connections. You have to summarize it, so for instance we talk about weak ties or strong ties, but do we have the correct description? There are fascinating questions that require development, and I’m eager to keep working on them.”

  • New clean air and water labs to bring together researchers, policymakers to find climate solutions

    MIT’s Abdul Latif Jameel Poverty Action Lab (J-PAL) is launching the Clean Air and Water Labs, with support from Community Jameel, to generate evidence-based solutions aimed at increasing access to clean air and water.

    Led by J-PAL’s Africa, Middle East and North Africa (MENA), and South Asia regional offices, the labs will partner with government agencies to bring together researchers and policymakers in areas where impactful clean air and water solutions are most urgently needed.

    Together, the labs aim to improve clean air and water access by informing the scaling of evidence-based policies and decisions of city, state, and national governments that serve nearly 260 million people combined.

    The Clean Air and Water Labs expand the work of J-PAL’s King Climate Action Initiative, building on the foundational support of King Philanthropies, which significantly expanded J-PAL’s work at the nexus of climate change and poverty alleviation worldwide. 

    Air pollution, water scarcity and the need for evidence 

    Africa, MENA, and South Asia are on the front lines of global air and water crises. 

    “There is no time to waste investing in solutions that do not achieve their desired effects,” says Iqbal Dhaliwal, global executive director of J-PAL. “By co-generating rigorous real-world evidence with researchers, policymakers can have the information they need to dedicate resources to scaling up solutions that have been shown to be effective.”

    In India, about 75 percent of households did not have drinking water on premises in 2018. In MENA, nearly 90 percent of children live in areas facing high or extreme water stress. Across Africa, almost 400 million people lack access to safe drinking water. 

    Simultaneously, air pollution is one of the greatest threats to human health globally. In India, extraordinary levels of air pollution are shortening the average life expectancy by five years. In Africa, rising indoor and ambient air pollution contributed to 1.1 million premature deaths in 2019. 

    There is increasing urgency to find high-impact and cost-effective solutions to the worsening threats to human health and resources caused by climate change. However, data and evidence on potential solutions are limited.

    Fostering collaboration to generate policy-relevant evidence 

    The Clean Air and Water Labs will foster deep collaboration between government stakeholders, J-PAL regional offices, and researchers in the J-PAL network. 

    Through the labs, J-PAL will work with policymakers to:

    co-diagnose the most pressing air and water challenges and opportunities for policy innovation;
    expand policymakers’ access to and use of high-quality air and water data;
    co-design potential solutions informed by existing evidence;
    co-generate evidence on promising solutions through rigorous evaluation, leveraging existing and new data sources; and
    support scaling of air and water policies and programs that are found to be effective through evaluation.

    A research and scaling fund for each lab will prioritize resources for co-generated pilot studies, randomized evaluations, and scaling projects.

    The labs will also collaborate with C40 Cities, a global network of mayors of the world’s leading cities united in action to confront the climate crisis, to share policy-relevant evidence and identify potential new connections and research opportunities within India and across Africa.

    This model aims to strengthen the use of evidence in decision-making to ensure solutions are highly effective and to guide research to answer policymakers’ most urgent questions. J-PAL Africa, MENA, and South Asia’s strong on-the-ground presence will further bridge research and policy work by anchoring activities within local contexts. 

    “Communities across the world continue to face challenges in accessing clean air and water, a threat to human safety that has only been exacerbated by the climate crisis, along with rising temperatures and other hazards,” says George Richards, director of Community Jameel. “Through our collaboration with J-PAL and C40 in creating climate policy labs embedded in city, state, and national governments in Africa and South Asia, we are committed to innovative and science-based approaches that can help hundreds of millions of people enjoy healthier lives.”

    J-PAL Africa, MENA, and South Asia will formally launch Clean Air and Water Labs with government partners over the coming months. J-PAL is housed in the MIT Department of Economics, within the School of Humanities, Arts, and Social Sciences.

  • Supporting sustainability, digital health, and the future of work

    The MIT and Accenture Convergence Initiative for Industry and Technology has selected three new research projects to receive its support. The projects aim to accelerate progress on complex societal needs through new insights at the convergence of business, technology, and innovation.

    Established in MIT’s School of Engineering and now in its third year, the MIT and Accenture Convergence Initiative is furthering its mission to bring together technological experts from across business and academia to share insights and learn from one another. Recently, Thomas W. Malone, the Patrick J. McGovern (1959) Professor of Management, joined the initiative as its first-ever faculty lead. The research projects relate to three of the initiative’s key focus areas: sustainability, digital health, and the future of work.

    “The solutions these research teams are developing have the potential to have tremendous impact,” says Anantha Chandrakasan, dean of the School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science. “They embody the initiative’s focus on advancing data-driven research that addresses technology and industry convergence.”

    “The convergence of science and technology driven by advancements in generative AI, digital twins, quantum computing, and other technologies makes this an especially exciting time for Accenture and MIT to be undertaking this joint research,” says Kenneth Munie, senior managing director at Accenture Strategy, Life Sciences. “Our three new research projects focusing on sustainability, digital health, and the future of work have the potential to help guide and shape future innovations that will benefit the way we work and live.”

    The MIT and Accenture Convergence Initiative charter project researchers are described below.

    Accelerating the journey to net zero with industrial clusters

    Jessika Trancik is a professor at the Institute for Data, Systems, and Society (IDSS). Trancik’s research examines the dynamic costs, performance, and environmental impacts of energy systems to inform climate policy and accelerate beneficial and equitable technology innovation. Trancik’s project aims to identify how industrial clusters can enable companies to derive greater value from decarbonization, potentially making companies more willing to invest in the clean energy transition.

    To meet the ambitious climate goals that have been set by countries around the world, rising greenhouse gas emissions trends must be rapidly reversed. Industrial clusters — geographically co-located or otherwise-aligned groups of companies representing one or more industries — account for a significant portion of greenhouse gas emissions globally. With major energy consumers “clustered” in proximity, industrial clusters provide a potential platform to scale low-carbon solutions by enabling the aggregation of demand and the coordinated investment in physical energy supply infrastructure.

    In addition to Trancik, the research team working on this project will include Aliza Khurram, a postdoc in IDSS; Micah Ziegler, an IDSS research scientist; Melissa Stark, global energy transition services lead at Accenture; Laura Sanderfer, strategy consulting manager at Accenture; and Maria De Miguel, strategy senior analyst at Accenture.

    Eliminating childhood obesity

    Anette “Peko” Hosoi is the Neil and Jane Pappalardo Professor of Mechanical Engineering. A common theme in her work is the fundamental study of shape, kinematic, and rheological optimization of biological systems with applications to the emergent field of soft robotics. Her project will use both data from existing studies and synthetic data to create a return-on-investment (ROI) calculator for childhood obesity interventions so that companies can identify earlier returns on their investment beyond reduced health-care costs.

    Childhood obesity is too prevalent to be solved by a single company, industry, drug, application, or program. In addition to the physical and emotional impact on children, society bears a cost through excess health care spending, lost workforce productivity, poor school performance, and increased family trauma. Meaningful solutions require multiple organizations, representing different parts of society, working together with a common understanding of the problem, the economic benefits, and the return on investment. ROI is particularly difficult to defend for any single organization because investment and return can be separated by many years and involve asymmetric investments, returns, and allocation of risk. Hosoi’s project will consider the incentives for a particular entity to invest in programs in order to reduce childhood obesity.
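    The timing mismatch described above, where investment and return are separated by many years, can be made concrete with a standard net-present-value calculation. The sketch below is purely illustrative: the cash flows and discount rates are hypothetical, not figures from the project.

```python
def npv(cash_flows, discount_rate):
    """Net present value of yearly cash flows (year 0 first)."""
    return sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical intervention: an up-front investment whose health-care
# savings only begin to accrue several years later.
flows = [-100_000, 0, 0, 0, 25_000, 25_000, 25_000, 25_000, 25_000]

# The same future savings look very different under different discount
# rates -- one reason ROI is hard to defend for any single organization
# that bears the cost today but captures the benefit much later.
print(npv(flows, 0.03) > 0)   # attractive to a patient investor
print(npv(flows, 0.10) > 0)   # unattractive to an impatient one
```

    A shared ROI calculator of the kind the project envisions would let each participating organization plug in its own costs, benefit streams, and discount rate to see its individual stake in a joint intervention.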

    Hosoi will be joined by graduate students Pragya Neupane and Rachael Kha, both of IDSS, as well as a team from Accenture that includes Kenneth Munie, senior managing director at Accenture Strategy, Life Sciences; Kaveh Safavi, senior managing director in Accenture Health Industry; and Elizabeth Naik, global health and public service research lead.

    Generating innovative organizational configurations and algorithms for dealing with the problem of post-pandemic employment

    Thomas Malone is the Patrick J. McGovern (1959) Professor of Management at the MIT Sloan School of Management and the founding director of the MIT Center for Collective Intelligence. His research focuses on how new organizations can be designed to take advantage of the possibilities provided by information technology. Malone will be joined in this project by John Horton, the Richard S. Leghorn (1939) Career Development Professor at the MIT Sloan School of Management, whose research focuses on the intersection of labor economics, market design, and information systems. Malone and Horton’s project will look to reshape the future of work with the help of lessons learned in the wake of the pandemic.

    The Covid-19 pandemic has been a major disrupter of work and employment, and it is not at all obvious how governments, businesses, and other organizations should manage the transition to a desirable state of employment as the pandemic recedes. Using natural language processing algorithms such as GPT-4, this project will look to identify new ways that companies can use AI to better match applicants to necessary jobs, create new types of jobs, assess skill training needed, and identify interventions to help include women and other groups whose employment was disproportionately affected by the pandemic.
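    The applicant-to-job matching idea can be illustrated in miniature. The sketch below uses a bag-of-words cosine similarity as a simple stand-in for the LLM-based matching the project envisions; the posting and resumes are invented examples, not project data.

```python
from collections import Counter
import math

def vectorize(text: str) -> Counter:
    """Turn text into a word-count vector (a toy stand-in for LLM embeddings)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

job = "data analyst with python and sql experience"
applicants = {
    "A": "python developer experienced in sql and data pipelines",
    "B": "graphic designer skilled in branding and illustration",
}

# Rank applicants by textual similarity to the posting.
ranked = sorted(applicants,
                key=lambda k: cosine(vectorize(job), vectorize(applicants[k])),
                reverse=True)
print(ranked)  # applicant A shares far more vocabulary with the posting
```

    Modern LLM embeddings replace the word-count vectors with dense semantic ones, so that "SQL" and "database querying" count as a match even without shared words, but the ranking machinery is the same.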

    In addition to Malone and Horton, the research team will include Rob Laubacher, associate director and research scientist at the MIT Center for Collective Intelligence, and Kathleen Kennedy, executive director at the MIT Center for Collective Intelligence and senior director at MIT Horizon. The team will also include Nitu Nivedita, managing director of artificial intelligence at Accenture, and Thomas Hancock, data science senior manager at Accenture.

    M’Care and MIT students join forces to improve child health in Nigeria

    A collaboration between M’Care, a 2021 Health Security and Pandemics Solver team, and students from MIT could transform child health care in Nigeria by harnessing the power of data to improve child health outcomes in economically disadvantaged communities. 

    M’Care is a mobile application of Promane and Promade Limited, developed by Opeoluwa Ashimi, which gives community health workers in Nigeria real-time diagnostic and treatment support. The application also creates a dashboard that is available to government health officials to help identify disease trends and deploy timely interventions. As part of its work, M’Care is working to mitigate malnutrition by providing micronutrient powder, vitamin A, and zinc to children below the age of 5. To help deepen its impact, Ashimi decided to work with students in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) course 6.S897 (Machine Learning for Healthcare) — instructed by professors Peter Szolovits and Manolis Kellis — to leverage data in order to improve nutrient delivery to children across Nigeria. The collaboration also enabled students to see real-world applications for data analysis in the health care space.

    A meeting of minds: M’Care, MIT, and national health authorities

    “Our primary goal for collaborating with the ML for Health team was to spot the missing link in the continuum of care. With over 1 million cumulative consultations that qualify for a continuum of care evaluation, it was important to spot why patients could be lost to follow-up, prevent this, and ensure completion of care to successfully address the health needs of our patients,” says Ashimi, founder and CEO of M’Care.

    In May 2023, Ashimi attended a meeting that brought together key national stakeholders, including representatives of the National Ministry of Health in Nigeria. The gathering served as a platform to discuss the impact of the collaboration between M’Care and the ML for Health team, whose analysis of dosage regimens and children’s ages helped strengthen the continuum of care, with particular benefits for the brain development supported by essential micronutrients. The students’ machine learning analysis, shared during the meeting, provided strong evidence for individualizing dosage regimens by a child’s age in months under the ANRIN project, a national nutrition project supported by the World Bank, and for policy decisions to extend months of coverage for children, reshaping health care practices in Nigeria.

    MIT students drive change by harnessing the power of data

    At the heart of this collaboration lies the contribution of MIT students. Armed with their dedication and skill in data analysis and machine learning, they played a pivotal role in helping M’Care analyze their data and prepare for their meeting with the Ministry of Health. Their most significant findings included ways to identify patients at risk of not completing their full course of micronutrient powder and/or vitamin A, and identifying gaps in M’Care’s data, such as postdated delivery dates and community demographics. These findings are already helping M’Care better plan its resources and adjust the scope of its program to ensure more children complete the intervention.
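    Flagging patients at risk of not completing a course, in the spirit of the students’ analysis, can be framed as a simple classification problem over enrollment features. The sketch below hand-trains a small logistic regression on invented records; the feature names, data, and thresholds are hypothetical illustrations, not M’Care’s data or model.

```python
import math

# Hypothetical records: (enrollment age in months, visits attended), label
# 1 = did not complete the micronutrient course. Invented for illustration.
data = [
    ((6, 5), 0), ((8, 6), 0), ((10, 4), 0), ((7, 5), 0),
    ((20, 1), 1), ((22, 2), 1), ((18, 1), 1), ((24, 1), 1),
]

def sigmoid(z):
    """Numerically stable logistic function."""
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

# Plain stochastic-gradient-descent logistic regression, no libraries.
w = [0.0, 0.0]
b = 0.0
lr = 0.05
for _ in range(2000):
    for (x1, x2), y in data:
        p = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = p - y
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

def risk(age_months, visits):
    """Predicted probability of non-completion."""
    return sigmoid(w[0] * age_months + w[1] * visits + b)

# Late enrollment with few visits should score as higher risk.
print(risk(21, 1) > risk(7, 5))
```

    A deployed version of such a model would let community health workers prioritize follow-up for the families most likely to drop out of the course before it is complete.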

    Darcy Kim, an undergraduate at Wellesley College studying math and computer science, who is cross-registered for the MIT machine learning course, expresses enthusiasm about the practical applications found within the project: “To me, data and math is storytelling, and the story is why I love studying it. … I learned that data exploration involves asking questions about how the data is collected, and that surprising patterns that arise often have a qualitative explanation. Impactful research requires radical collaboration with the people the research intends to help. Otherwise, these qualitative explanations get lost in the numbers.”

    Joyce Luo, a first-year operations research PhD student at the Operations Research Center at MIT, shares similar thoughts about the project: “I learned the importance of understanding the context behind data to figure out what kind of analysis might be most impactful. This involves being in frequent contact with the company or organization who provides the data to learn as much as you can about how the data was collected and the people the analysis could help. Stepping back and looking at the bigger picture, rather than just focusing on accuracy or metrics, is extremely important.”

    Insights to implementation: A new era for micronutrient dosing

    As a direct result of M’Care’s collaboration with MIT, policymakers revamped the dosing scheme for essential micronutrient administration for children in Nigeria to prevent malnutrition. M’Care and MIT’s data analysis unearthed critical insights into the limited frequency of medical visits caused by late-age enrollment. 

    “One big takeaway for me was that the data analysis portion of the project — doing a deep dive into the data; understanding, analyzing, visualizing, and summarizing the data — can be just as important as building the machine learning models. M’Care shared our data analysis with the National Ministry of Health, and the insights from it drove them to change their dosing scheme and schedule for delivering micronutrient powder to young children. This really showed us the value of understanding and knowing your data before modeling,” shares Angela Lin, a second-year PhD student at the Operations Research Center.

    Armed with this knowledge, policymakers are eager to develop an optimized dosing scheme that caters to the unique needs of children in disadvantaged communities, ensuring maximum impact on their brain development and overall well-being.

    Siddharth Srivastava, M’Care’s corporate technology liaison, shares his gratitude for the MIT students’ input. “Collaborating with enthusiastic and driven students was both empowering and inspiring. Each of them brought unique perspectives and technical skills to the table. Their passion for applying machine learning to health care was evident in their unwavering dedication and proactive approach to problem-solving.”

    Forging a path to impact

    The collaboration between M’Care and MIT exemplifies the remarkable achievements that arise when academia, innovative problem-solvers, and policy authorities unite. By merging academic rigor with real-world expertise, this partnership has the potential to revolutionize child health care not only in Nigeria but also in similar contexts worldwide.

    “I believe applying innovative methods of machine learning, data gathering, instrumentation, and planning to real problems in the developing world can be highly effective for those countries and highly motivating for our students. I was happy to have such a project in our class portfolio this year and look forward to future opportunities,” says Peter Szolovits, professor of computer science and engineering at MIT.

    By harnessing the power of data, innovation, and collective expertise, this collaboration between M’Care and MIT has the potential to improve equitable child health care in Nigeria. “It has been so fulfilling to see how our team’s work has been able to create even the smallest positive impact in such a short period of time, and it has been amazing to work with a company like Promane and Promade Limited that is so knowledgeable and caring for the communities that they serve,” shares Elizabeth Whittier, a second-year PhD electrical engineering student at MIT.

    Artificial intelligence for augmentation and productivity

    The MIT Stephen A. Schwarzman College of Computing has awarded seed grants to seven projects that are exploring how artificial intelligence and human-computer interaction can be leveraged to enhance modern work spaces to achieve better management and higher productivity.

    Funded by Andrew W. Houston ’05 and Dropbox Inc., the projects are intended to be interdisciplinary and bring together researchers from computing, social sciences, and management.

    The seed grants can enable the project teams to conduct research that leads to bigger endeavors in this rapidly evolving area, as well as build community around questions related to AI-augmented management.

    The seven selected projects and research leads include:

    “LLMex: Implementing Vannevar Bush’s Vision of the Memex Using Large Language Models,” led by Pattie Maes of the Media Lab and David Karger of the Department of Electrical Engineering and Computer Science (EECS) and the Computer Science and Artificial Intelligence Laboratory (CSAIL). Inspired by Vannevar Bush’s Memex, this project proposes to design, implement, and test the concept of memory prosthetics using large language models (LLMs). The AI-based system will intelligently help an individual keep track of vast amounts of information, accelerate productivity, and reduce errors by automatically recording their work actions and meetings, supporting retrieval based on metadata and vague descriptions, and suggesting relevant, personalized information proactively based on the user’s current focus and context.

    “Using AI Agents to Simulate Social Scenarios,” led by John Horton of the MIT Sloan School of Management and Jacob Andreas of EECS and CSAIL. This project imagines the ability to easily simulate policies, organizational arrangements, and communication tools with AI agents before implementation. Tapping into the capabilities of modern LLMs to serve as a computational model of humans makes this vision of social simulation more realistic, and potentially more predictive.

    “Human Expertise in the Age of AI: Can We Have Our Cake and Eat it Too?” led by Manish Raghavan of MIT Sloan and EECS, and Devavrat Shah of EECS and the Laboratory for Information and Decision Systems. Progress in machine learning, AI, and algorithmic decision aids has raised the prospect that algorithms may complement human decision-making in a wide variety of settings. Rather than replacing human professionals, this project sees a future where AI and algorithmic decision aids play a role that is complementary to human expertise.

    “Implementing Generative AI in U.S. Hospitals,” led by Julie Shah of the Department of Aeronautics and Astronautics and CSAIL, Retsef Levi of MIT Sloan and the Operations Research Center, Kate Kellogg of MIT Sloan, and Ben Armstrong of the Industrial Performance Center. In recent years, studies have linked a rise in burnout among doctors and nurses in the United States with the increased administrative burdens associated with electronic health records and other technologies. This project aims to develop a holistic framework to study how generative AI technologies can both increase productivity for organizations and improve job quality for workers in health care settings.

    “Generative AI Augmented Software Tools to Democratize Programming,” led by Harold Abelson of EECS and CSAIL, Cynthia Breazeal of the Media Lab, and Eric Klopfer of Comparative Media Studies/Writing. Progress in generative AI over the past year is fomenting an upheaval in assumptions about future careers in software and deprecating the role of coding. This project will stimulate a similar transformation in computing education for those who have no prior technical training by creating a software tool that could eliminate much of the need for learners to deal with code when creating applications.

    “Acquiring Expertise and Societal Productivity in a World of Artificial Intelligence,” led by David Atkin and Martin Beraja of the Department of Economics, and Danielle Li of MIT Sloan. Generative AI is thought to augment the capabilities of workers performing cognitive tasks. This project seeks to better understand how the arrival of AI technologies may impact skill acquisition and productivity, and to explore complementary policy interventions that will allow society to maximize the gains from such technologies.

    “AI Augmented Onboarding and Support,” led by Tim Kraska of EECS and CSAIL, and Christoph Paus of the Department of Physics. While LLMs have made enormous leaps forward in recent years and are poised to fundamentally change the way students and professionals learn about new tools and systems, there is often a steep learning curve that people must climb to make full use of them. To help mitigate the issue, this project proposes the development of new LLM-powered onboarding and support systems that will positively impact the way support teams operate and improve the user experience.