More stories

  • Supporting sustainability, digital health, and the future of work

    The MIT and Accenture Convergence Initiative for Industry and Technology has selected three new research projects to receive its support. The projects aim to accelerate progress in meeting complex societal needs through new insights into the convergence of business and technology.

    Established in MIT’s School of Engineering and now in its third year, the MIT and Accenture Convergence Initiative is furthering its mission to bring together technological experts from across business and academia to share insights and learn from one another. Recently, Thomas W. Malone, the Patrick J. McGovern (1959) Professor of Management, joined the initiative as its first-ever faculty lead. The research projects relate to three of the initiative’s key focus areas: sustainability, digital health, and the future of work.

    “The solutions these research teams are developing have the potential to have tremendous impact,” says Anantha Chandrakasan, dean of the School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science. “They embody the initiative’s focus on advancing data-driven research that addresses technology and industry convergence.”

    “The convergence of science and technology driven by advancements in generative AI, digital twins, quantum computing, and other technologies makes this an especially exciting time for Accenture and MIT to be undertaking this joint research,” says Kenneth Munie, senior managing director at Accenture Strategy, Life Sciences. “Our three new research projects focusing on sustainability, digital health, and the future of work have the potential to help guide and shape future innovations that will benefit the way we work and live.”

    The researchers leading these new charter projects are described below.

    Accelerating the journey to net zero with industrial clusters

    Jessika Trancik is a professor at the Institute for Data, Systems, and Society (IDSS). Trancik’s research examines the dynamic costs, performance, and environmental impacts of energy systems to inform climate policy and accelerate beneficial and equitable technology innovation. Trancik’s project aims to identify how industrial clusters can enable companies to derive greater value from decarbonization, potentially making companies more willing to invest in the clean energy transition.

    To meet the ambitious climate goals that have been set by countries around the world, rising greenhouse gas emissions trends must be rapidly reversed. Industrial clusters — geographically co-located or otherwise-aligned groups of companies representing one or more industries — account for a significant portion of greenhouse gas emissions globally. With major energy consumers “clustered” in proximity, industrial clusters provide a potential platform to scale low-carbon solutions by enabling the aggregation of demand and the coordinated investment in physical energy supply infrastructure.

    In addition to Trancik, the research team working on this project will include Aliza Khurram, a postdoc in IDSS; Micah Ziegler, an IDSS research scientist; Melissa Stark, global energy transition services lead at Accenture; Laura Sanderfer, strategy consulting manager at Accenture; and Maria De Miguel, strategy senior analyst at Accenture.

    Eliminating childhood obesity

    Anette “Peko” Hosoi is the Neil and Jane Pappalardo Professor of Mechanical Engineering. A common theme in her work is the fundamental study of shape, kinematic, and rheological optimization of biological systems with applications to the emergent field of soft robotics. Her project will use both data from existing studies and synthetic data to create a return-on-investment (ROI) calculator for childhood obesity interventions so that companies can identify earlier returns on their investment beyond reduced health-care costs.

    Childhood obesity is too prevalent to be solved by a single company, industry, drug, application, or program. In addition to the physical and emotional impact on children, society bears a cost through excess health care spending, lost workforce productivity, poor school performance, and increased family trauma. Meaningful solutions require multiple organizations, representing different parts of society, working together with a common understanding of the problem, the economic benefits, and the return on investment. ROI is particularly difficult to defend for any single organization because investment and return can be separated by many years and involve asymmetric investments, returns, and allocation of risk. Hosoi’s project will consider the incentives for a particular entity to invest in programs in order to reduce childhood obesity.
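
    To make that asymmetry concrete, here is a minimal discounted cash-flow sketch of a multi-year ROI calculation. All figures (program cost, yearly returns, discount rate) are hypothetical placeholders, not numbers from Hosoi’s calculator: a single payer funds the program up front, while the returns accrue slowly over the following decade.

    ```python
    # Minimal sketch of a multi-year ROI calculation for an obesity
    # intervention. All figures are hypothetical, not study data.

    def npv(cashflows, rate):
        """Net present value of yearly cashflows discounted at `rate`."""
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

    investment = [-1_000_000] + [0] * 10   # year 0: one payer funds the program
    returns    = [0] + [150_000] * 10      # years 1-10: diffuse societal benefits

    rate = 0.05  # assumed discount rate
    roi = (npv(returns, rate) + npv(investment, rate)) / -npv(investment, rate)
    print(f"Discounted ROI over 10 years: {roi:.1%}")
    ```

    Even when total returns exceed the investment, discounting and the long payback window shrink the ROI that any single investor can claim, which is the gap the project aims to quantify.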

    Hosoi will be joined by graduate students Pragya Neupane and Rachael Kha, both of IDSS, as well as a team from Accenture that includes Kenneth Munie, senior managing director at Accenture Strategy, Life Sciences; Kaveh Safavi, senior managing director in Accenture Health Industry; and Elizabeth Naik, global health and public service research lead.

    Generating innovative organizational configurations and algorithms for dealing with the problem of post-pandemic employment

    Thomas Malone is the Patrick J. McGovern (1959) Professor of Management at the MIT Sloan School of Management and the founding director of the MIT Center for Collective Intelligence. His research focuses on how new organizations can be designed to take advantage of the possibilities provided by information technology. Malone will be joined in this project by John Horton, the Richard S. Leghorn (1939) Career Development Professor at the MIT Sloan School of Management, whose research focuses on the intersection of labor economics, market design, and information systems. Malone and Horton’s project will look to reshape the future of work with the help of lessons learned in the wake of the pandemic.

    The Covid-19 pandemic has been a major disrupter of work and employment, and it is not at all obvious how governments, businesses, and other organizations should manage the transition to a desirable state of employment as the pandemic recedes. Using natural language processing algorithms such as GPT-4, this project will look to identify new ways that companies can use AI to better match applicants to jobs, create new types of jobs, assess needed skills training, and identify interventions to help include women and other groups whose employment was disproportionately affected by the pandemic.
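
    While the project itself centers on large language models such as GPT-4, the underlying matching idea can be illustrated with a far simpler stand-in: scoring text similarity between applicant profiles and job postings. The sketch below uses TF-IDF cosine similarity, and all job and applicant text is invented for illustration.

    ```python
    # Toy applicant-job matching via TF-IDF cosine similarity, a lightweight
    # stand-in for the LLM-based matching the project envisions.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    jobs = [
        "remote customer support agent, empathy, flexible scheduling",
        "data entry clerk, attention to detail, spreadsheets",
    ]
    applicants = [
        "former retail worker, strong people skills, needs flexible hours",
        "bookkeeper returning to workforce, experienced with spreadsheets",
    ]

    vec = TfidfVectorizer().fit(jobs + applicants)
    scores = cosine_similarity(vec.transform(applicants), vec.transform(jobs))

    for i, row in enumerate(scores):
        print(f"Applicant {i} -> Job {row.argmax()} (similarity {row.max():.2f})")
    ```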

    In addition to Malone and Horton, the research team will include Rob Laubacher, associate director and research scientist at the MIT Center for Collective Intelligence, and Kathleen Kennedy, executive director at the MIT Center for Collective Intelligence and senior director at MIT Horizon. The team will also include Nitu Nivedita, managing director of artificial intelligence at Accenture, and Thomas Hancock, data science senior manager at Accenture.

  • Q&A: Are far-reaching fires the new normal?

    Where there’s smoke, there is fire. But with climate change, larger and longer-burning wildfires are sending smoke farther from their source, often to places that are unaccustomed to the exposure. That’s been the case this week, as smoke continues to drift south from massive wildfires in Canada, prompting warnings of hazardous air quality and poor visibility in states across New England, the mid-Atlantic, and the Midwest.

    With wildfire season just getting started, many may be wondering: Are the air-polluting effects of wildfires the new normal?

    MIT News spoke with Professor Colette Heald of the Department of Civil and Environmental Engineering and the Department of Earth, Atmospheric and Planetary Sciences, and Professor Noelle Selin of the Institute for Data, Systems and Society and the Department of Earth, Atmospheric and Planetary Sciences. Heald specializes in atmospheric chemistry and has studied the climate and health effects associated with recent wildfires, while Selin works with atmospheric models to track air pollutants around the world, which she uses to inform policy decisions on mitigating pollution and climate change. The researchers shared some of their insights on the immediate impacts of Canada’s current wildfires and what downwind regions may expect in the coming months, as the wildfire season stretches into summer.

    Q: What roles have climate change and human activity played in the wildfires we’ve seen so far this year?

    Heald: Unusually warm and dry conditions have dramatically increased fire susceptibility in Canada this year. Human-induced climate change makes such dry and warm conditions more likely. Smoke from fires in Alberta and Nova Scotia in May, and Quebec in early June, has led to some of the worst air quality conditions measured locally in Canada. This same smoke has been transported into the United States and degraded air quality here as well. Local officials have determined that ignitions have been associated with lightning strikes, but human activity has also played a role in igniting some of the fires in Alberta.

    Q: What can we expect for the coming months in terms of the pattern of wildfires and their associated air pollution across the United States?

    Heald: The Government of Canada is projecting higher-than-normal fire activity throughout the 2023 fire season. Fire susceptibility will continue to respond to changing weather conditions, and whether the U.S. is impacted will depend on the winds and how air is transported across those regions. So far, the fire season in the United States has been below average, but fire risk is expected to increase modestly through the summer, so we may see local smoke influences as well.

    Q: How has air pollution from wildfires affected human health in the U.S. this year so far?

    Selin: The pollutant of most concern in wildfire smoke is fine particulate matter (PM2.5): fine particles in the atmosphere that can be inhaled deep into the lungs, where they damage health. Exposure to PM2.5 causes respiratory and cardiovascular damage, including heart attacks and premature deaths. It can also cause symptoms like coughing and difficulty breathing. In New England this week, people have been breathing much higher concentrations of PM2.5 than usual. People who are particularly vulnerable to the effects, such as older people and people with underlying conditions, are likely experiencing more severe impacts. But PM2.5 affects everyone. While the number and impact of wildfires vary from year to year, the associated air pollution generally leads to tens of thousands of premature deaths in the U.S. annually. There is also some evidence that PM2.5 from fires could be particularly damaging to health.

    While we in New England usually have relatively lower levels of pollution, it’s important also to note that some cities around the globe experience very high PM2.5 on a regular basis, not only from wildfires, but other sources such as power plants and industry. So, while we’re feeling the effects over the past few days, we should remember the broader importance of reducing PM2.5 levels overall for human health everywhere.

    Q: While firefighters battle fires directly this wildfire season, what can we do to reduce the effects of associated air pollution? And what can we do in the long-term, to prevent or reduce wildfire impacts?

    Selin: In the short term, protecting yourself from the impacts of PM2.5 is important. Limiting time outdoors, avoiding outdoor exercise, and wearing a high-quality mask are some strategies that can minimize exposure. Air filters can help reduce the concentrations of particles in indoor air. Taking measures to avoid exposure is particularly important for vulnerable groups. It’s also important to note that these strategies aren’t equally possible for everyone (for example, people who work outside) — stressing the importance of developing new strategies to address the underlying causes of increasing wildfires.

    Over the long term, mitigating climate change is important: because warm and dry conditions lead to wildfires, continued warming increases fire risk. Preventing fires ignited by people or human activities can also help. Damages can further be mitigated over the longer term by exploring land-management strategies that could help manage fire intensity.

  • J-WAFS announces 2023 seed grant recipients

    Today, the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) announced its ninth round of seed grants to support innovative research projects at MIT. The grants are designed to fund research efforts that tackle challenges related to water and food for human use, with the ultimate goal of creating meaningful impact as the world population continues to grow and the planet undergoes significant climate and environmental changes.

    Ten new projects led by 15 researchers from seven different departments will be supported this year. The projects address a range of challenges by employing advanced materials, technology innovations, and new approaches to resource management. The new projects aim to remove harmful chemicals from water sources, develop monitoring and other systems to help manage various aquaculture industries, optimize water purification materials, and more.

    “The seed grant program is J-WAFS’ flagship grant initiative,” says J-WAFS executive director Renee J. Robins. “The funding is intended to spur groundbreaking MIT research addressing complex issues that are challenging our water and food systems. The 10 projects selected this year show great promise, and we look forward to the progress and accomplishments these talented researchers will make,” she adds.

    The 2023 J-WAFS seed grant researchers and their projects are:

    Sara Beery, an assistant professor in the Department of Electrical Engineering and Computer Science (EECS), is building the first completely automated system to estimate the size of salmon populations in the Pacific Northwest (PNW).

    Salmon are a keystone species in the PNW, having fed human populations for at least the last 7,500 years. However, overfishing, habitat loss, and climate change threaten salmon populations across the region with extinction. Accurate salmon counts during the seasonal migration to their natal rivers to spawn are essential for fisheries regulation and management, but are limited by human capacity. Fish population monitoring is a widespread challenge in the United States and worldwide. Beery and her team are working to build a system that will provide a detailed picture of the state of salmon populations at unprecedented spatial and temporal resolution by combining sonar sensors with computer vision and machine learning (CVML) techniques. The sonar will capture individual fish as they swim upstream, and CVML will be used to train accurate algorithms that interpret the sonar video to detect, track, and count fish automatically while adapting to changing river conditions and fish densities.

    Another aquaculture project is being led by Michael Triantafyllou, the Henry L. and Grace Doherty Professor in Ocean Science and Engineering in the Department of Mechanical Engineering, and Robert Vincent, the assistant director at MIT’s Sea Grant Program. They are working with Otto Cordero, an associate professor in the Department of Civil and Environmental Engineering, to control harmful bacterial blooms in aquaculture algae feed production.

    Aquaculture in the United States represents a $1.5 billion industry annually and helps support 1.7 million jobs, yet many American hatcheries are not able to keep up with demand. One barrier to aquaculture production is the high degree of variability in survival rates, most likely caused by a poorly controlled microbiome that leads to bacterial infections and sub-optimal feed efficiency. Triantafyllou, Vincent, and Cordero plan to monitor the microbiome composition of a shellfish hatchery in order to identify possible causative agents of mortality, as well as beneficial microbes. They hope to pair the microbial data with detailed phenotypic information about the animal population to generate rapid diagnostic tests and explore the potential for microbiome therapies to protect larvae and prevent future outbreaks. The researchers plan to transfer their findings and technology to the local and regional aquaculture community to ensure healthy aquaculture production that will support the expansion of the U.S. aquaculture industry.

    David Des Marais is the Cecil and Ida Green Career Development Professor in the Department of Civil and Environmental Engineering. His 2023 J-WAFS project seeks to understand plant growth responses to elevated carbon dioxide (CO2) in the atmosphere, in the hopes of identifying breeding strategies that maximize crop yield under future CO2 scenarios.

    Today’s crop plants experience higher atmospheric CO2 than 20 or 30 years ago. Crops such as wheat, oat, barley, and rice typically increase their growth rate and biomass when grown at experimentally elevated atmospheric CO2, the so-called “CO2 fertilization effect.” However, not all plant species respond to rising atmospheric CO2 with increased growth, and for the ones that do, increased growth doesn’t necessarily correspond to increased crop yield. Using specially built plant growth chambers that can control the concentration of CO2, Des Marais will explore how CO2 availability impacts the development of tillers (branches) in the grass species Brachypodium. He will study how gene expression controls tiller development, and whether this is affected by the growing environment. The tillering response refers to how many branches a plant produces, which sets a limit on how much grain it can yield; optimizing this response to elevated CO2 could therefore greatly increase yield. Des Marais will also look at the complete genome sequences of Brachypodium, wheat, oat, and barley to help identify genes relevant for branch growth.

    Darcy McRose, an assistant professor in the Department of Civil and Environmental Engineering, is researching whether a combination of plant metabolites and soil bacteria can be used to make mineral-associated phosphorus more bioavailable.

    The nutrient phosphorus is essential for agricultural plant growth, but when added as a fertilizer, phosphorus sticks to the surface of soil minerals, decreasing its bioavailability, limiting plant growth, and accumulating as residual phosphorus. Heavily fertilized agricultural soils often harbor large reservoirs of this type of mineral-associated “legacy” phosphorus. Redox transformations are one chemical process that can liberate mineral-associated phosphorus. However, this needs to be carefully controlled, as overly mobile phosphorus can lead to runoff and pollution of natural waters. Ideally, phosphorus would be made bioavailable when plants need it and immobile when they don’t. Many plants make small metabolites called coumarins that might be able to solubilize mineral-adsorbed phosphorus and be activated and inactivated under different conditions. McRose will use laboratory experiments to determine whether a combination of plant metabolites and soil bacteria can be used as a highly efficient and tunable system for phosphorus solubilization. She also aims to develop an imaging platform to investigate exchanges of phosphorus between plants and soil microbes.

    Many of the 2023 seed grants will support innovative technologies to monitor, quantify, and remediate various kinds of pollutants found in water. Two of the new projects address the problem of per- and polyfluoroalkyl substances (PFAS), human-made chemicals that have recently emerged as a global health threat. Known as “forever chemicals,” PFAS are used in many manufacturing processes. These chemicals are known to cause significant health issues including cancer, and they have become pervasive in soil, dust, air, groundwater, and drinking water. Unfortunately, the physical and chemical properties of PFAS render them difficult to detect and remove.

    Aristide Gumyusenge, the Merton C. Flemings Career Development Assistant Professor of Materials Science and Engineering, is using metal-organic frameworks for low-cost sensing and capture of PFAS. Most metal-organic frameworks (MOFs) are synthesized as particles, whose defects, such as intergranular boundaries, complicate high-accuracy sensing. Thin-film-based electronic devices could enable the use of MOFs for many applications, especially chemical sensing. Gumyusenge’s project aims to design test kits based on two-dimensional conductive MOF films for detecting PFAS in drinking water. In early demonstrations, Gumyusenge and his team showed that these MOF films can sense PFAS at low concentrations. They will continue to iterate using a computation-guided approach to tune the sensitivity and selectivity of the kits, with the goal of deploying them in real-world scenarios.

    Carlos Portela, the Brit (1961) and Alex (1949) d’Arbeloff Career Development Professor in the Department of Mechanical Engineering, and Ariel Furst, the Cook Career Development Professor in the Department of Chemical Engineering, are building novel architected materials to act as filters for the removal of PFAS from water. Portela and Furst will design and fabricate nanoscale materials that use activated carbon and porous polymers to create a physical adsorption system. They will engineer the materials to have tunable porosities and morphologies that maximize interactions between contaminated water and functionalized surfaces, while providing a mechanically robust system.

    Rohit Karnik is a Tata Professor and interim co-department head of the Department of Mechanical Engineering. He is working on another monitoring technology, this one based on microbead sensors, to rapidly measure and monitor trace contaminants in water.

    Water pollution from both biological and chemical contaminants contributes to an estimated 1.36 million deaths annually. Chemical contaminants include pesticides and herbicides, heavy metals like lead, and compounds used in manufacturing. These emerging contaminants can be found throughout the environment, including in water supplies. In the United States, the Environmental Protection Agency (EPA) sets recommended water quality standards, but states are responsible for developing their own monitoring criteria and systems, which must be approved by the EPA every three years. However, the availability of data on regulated chemicals and on candidate pollutants is limited by current testing methods, which are either insensitive or expensive and laboratory-based, requiring trained scientists and technicians. Karnik’s project proposes a simple, self-contained, portable system for monitoring trace and emerging pollutants in water, making it suitable for field studies. The concept is based on multiplexed microbead-based sensors that use thermal or gravitational actuation to generate a signal. His proposed sandwich assay, a testing format that is appealing for environmental sensing, will enable both single-use and continuous monitoring. The hope is that the bead-based assays will increase the ease and reach of detecting and quantifying trace contaminants in water for both personal and industrial-scale applications.

    Alexander Radosevich, a professor in the Department of Chemistry, and Timothy Swager, the John D. MacArthur Professor of Chemistry, are teaming up to create rapid, cost-effective, and reliable techniques for on-site arsenic detection in water.

    Arsenic contamination of groundwater is a problem that affects as many as 500 million people worldwide. Arsenic poisoning can lead to a range of severe health problems, from cancer to cardiovascular and neurological impacts. Both the EPA and the World Health Organization have established 10 parts per billion as a practical threshold for arsenic in drinking water, but measuring arsenic in water at such low levels is challenging, especially in resource-limited environments where sensitive laboratory equipment may not be readily available. Radosevich and Swager plan to develop reaction-based chemical sensors that bind and extract electrons from aqueous arsenic. In this way, they will exploit the inherent reactivity of aqueous arsenic to selectively detect and quantify it. This work will establish the chemical basis for a new method of detecting trace arsenic in drinking water.

    Rajeev Ram is a professor in the Department of Electrical Engineering and Computer Science. His J-WAFS research will advance a robust technology for monitoring nitrogen-containing pollutants, which threaten over 15,000 bodies of water in the United States alone.

    Nitrogen in the form of nitrate, nitrite, ammonia, and urea can run off from agricultural fertilizer and lead to harmful algal blooms that jeopardize human health. Unfortunately, monitoring these contaminants in the environment is challenging, as sensors are difficult to maintain and expensive to deploy. Ram and his students will work to establish limits of detection for nitrate, nitrite, ammonia, and urea in environmental, industrial, and agricultural samples using swept-source Raman spectroscopy, a method that detects the presence of a chemical by illuminating a sample with a tunable, single-mode laser. This method does not require costly, high-power lasers or a spectrometer. Ram will then develop and demonstrate a portable system that is capable of achieving chemical specificity in complex, natural environments. Data generated by such a system should help regulate polluters and guide remediation.

    Kripa Varanasi, a professor in the Department of Mechanical Engineering, and Angela Belcher, the James Mason Crafts Professor and head of the Department of Biological Engineering, will join forces to develop an affordable water disinfection technology that selectively identifies, adsorbs, and kills “superbugs” in domestic and industrial wastewater.

    Recent research predicts that antibiotic-resistant bacteria (superbugs) will result in $100 trillion in health care expenses and 10 million deaths annually by 2050. The prevalence of superbugs in our water systems has increased due to corroded pipes, contamination, and climate change. Current drinking water disinfection technologies are designed to kill all types of bacteria before human consumption. However, certain domestic and industrial applications need to protect the good bacteria required for ecological processes that contribute to soil and plant health. Varanasi and Belcher will combine material, biological, process, and system engineering principles to design a sponge-based water disinfection technology that can identify and destroy harmful bacteria while leaving the good bacteria unharmed. By modifying the sponge surface with specialized nanomaterials, their approach will be able to kill superbugs faster and more efficiently. The sponge filters can be deployed under very low pressure, making them an affordable technology, especially in resource-constrained communities.

    In addition to the 10 seed grant projects, J-WAFS will also fund a research initiative led by Greg Sixt. Sixt is the research manager for climate and food systems at J-WAFS and the director of the J-WAFS-led Food and Climate Systems Transformation (FACT) Alliance. His project focuses on the Lake Victoria Basin (LVB) of East Africa. The second-largest freshwater lake in the world, Lake Victoria straddles three countries (Uganda, Tanzania, and Kenya) and has a catchment area that encompasses two more (Rwanda and Burundi). Sixt will collaborate with Michael Hauser of the University of Natural Resources and Life Sciences, Vienna, and Paul Kariuki of the Lake Victoria Basin Commission.

    The group will study how to adapt food systems to climate change in the Lake Victoria Basin. The basin faces a range of climate threats that could significantly impact livelihoods and food systems in the expansive region. For example, extreme weather events like droughts and floods are negatively affecting agricultural production and freshwater resources. Across the LVB, current approaches to land and water management are unsustainable and threaten future food and water security. The Lake Victoria Basin Commission (LVBC), a specialized institution of the East African Community, wants to play a more vital role in coordinating transboundary land and water management to support transitions toward more resilient, sustainable, and equitable food systems. The primary goal of this research will be to support the LVBC’s transboundary land and water management efforts, specifically as they relate to sustainability and climate change adaptation in food systems. The research team will work with key stakeholders in Kenya, Uganda, and Tanzania to identify specific capacity needs to facilitate land and water management transitions. The two-year project will produce actionable recommendations to the LVBC.

  • Improving health outcomes by targeting climate and air pollution simultaneously

    Climate policies are typically designed to reduce greenhouse gas emissions that result from human activities and drive climate change. The largest source of these emissions is the combustion of fossil fuels, which increases atmospheric concentrations of ozone, fine particulate matter (PM2.5) and other air pollutants that pose public health risks. While climate policies may result in lower concentrations of health-damaging air pollutants as a “co-benefit” of reducing greenhouse gas emissions-intensive activities, they are most effective at improving health outcomes when deployed in tandem with geographically targeted air-quality regulations.

    Yet the computer models typically used to assess the likely air quality/health impacts of proposed climate/air-quality policy combinations come with drawbacks for decision-makers. Atmospheric chemistry/climate models can produce high-resolution results, but they are expensive and time-consuming to run. Integrated assessment models take far less time and money, but produce results only at global and regional scales, rendering them insufficiently precise for accurate assessments of air quality/health impacts at the subnational level.

    To overcome these drawbacks, a team of researchers at MIT and the University of California at Davis has developed a climate/air-quality policy assessment tool that is both computationally efficient and location-specific. Described in a new study in the journal ACS Environmental Au, the tool could enable users to obtain rapid estimates of combined policy impacts on air quality/health at more than 1,500 locations around the globe — estimates precise enough to reveal the equity implications of proposed policy combinations within a particular region.
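
    The paper details how the tool works internally; as a hedged illustration of how such speedups are commonly achieved, the sketch below precomputes a source-receptor sensitivity matrix with an expensive model once, after which any new emissions scenario is evaluated with a single matrix multiply. All numbers are invented, and the published tool’s actual method may differ.

    ```python
    # Reduced-form sketch: map emissions changes to PM2.5 and health impacts
    # via a precomputed sensitivity matrix. Numbers are invented.
    import numpy as np

    n_sources, n_receptors = 3, 5   # emitting sectors x receptor locations
    rng = np.random.default_rng(0)

    # d(PM2.5 at receptor) / d(emissions at source), precomputed offline
    sensitivity = rng.uniform(0.1, 1.0, size=(n_receptors, n_sources))

    # A policy scenario expressed as emissions changes per source (kt/yr)
    delta_emissions = np.array([-10.0, -2.5, 0.0])

    delta_pm25 = sensitivity @ delta_emissions     # ug/m3 change per receptor
    population = np.array([1e6, 5e5, 2e6, 8e5, 3e5])
    beta = 0.006        # assumed mortality response per ug/m3 of PM2.5
    base_rate = 0.008   # assumed baseline mortality rate

    avoided_deaths = -(population * base_rate * beta * delta_pm25)
    print(f"Avoided premature deaths: {avoided_deaths.sum():.1f}")
    ```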

    “The modeling approach described in this study may ultimately allow decision-makers to assess the efficacy of multiple combinations of climate and air-quality policies in reducing the health impacts of air pollution, and to design more effective policies,” says Sebastian Eastham, the study’s lead author and a principal research scientist at the MIT Joint Program on the Science and Policy of Global Change. “It may also be used to determine if a given policy combination would result in equitable health outcomes across a geographical area of interest.”

    To demonstrate the efficiency and accuracy of their policy assessment tool, the researchers showed that outcomes projected by the tool within seconds were consistent with region-specific results from detailed chemistry/climate models that took days or even months to run. While continuing to refine and develop their approaches, they are now working to embed the new tool into integrated assessment models for direct use by policymakers.

    “As decision-makers implement climate policies in the context of other sustainability challenges like air pollution, efficient modeling tools are important for assessment — and new computational techniques allow us to build faster and more accurate tools to provide credible, relevant information to a broader range of users,” says Noelle Selin, a professor at MIT’s Institute for Data, Systems and Society and Department of Earth, Atmospheric and Planetary Sciences, and supervising author of the study. “We are looking forward to further developing such approaches, and to working with stakeholders to ensure that they provide timely, targeted and useful assessments.”

    The study was funded, in part, by the U.S. Environmental Protection Agency and the Biogen Foundation.

  • A healthy wind

    Nearly 10 percent of today’s electricity in the United States comes from wind power. The renewable energy source benefits climate, air quality, and public health by displacing emissions of greenhouse gases and air pollutants that would otherwise be produced by fossil-fuel-based power plants.

    A new MIT study finds that the health benefits associated with wind power could more than quadruple if operators prioritized turning down output from the most polluting fossil-fuel-based power plants when energy from wind is available.

    In the study, published today in Science Advances, researchers analyzed the hourly activity of wind turbines, as well as the reported emissions from every fossil-fuel-based power plant in the country, between the years 2011 and 2017. They traced emissions across the country and mapped the pollutants to affected demographic populations. They then calculated the regional air quality and associated health costs to each community.

    The researchers found that in 2014, wind power that was associated with state-level policies improved air quality overall, resulting in $2 billion in health benefits across the country. However, only roughly 30 percent of these health benefits reached disadvantaged communities.

    The team further found that if the electricity industry were to reduce the output of the most polluting fossil-fuel-based power plants, rather than the most cost-saving plants, in times of wind-generated power, the overall health benefits could quadruple to $8.4 billion nationwide. However, the results would have a similar demographic breakdown.

    “We found that prioritizing health is a great way to maximize benefits in a widespread way across the U.S., which is a very positive thing. But it suggests it’s not going to address disparities,” says study co-author Noelle Selin, a professor in the Institute for Data, Systems, and Society and the Department of Earth, Atmospheric and Planetary Sciences at MIT. “In order to address air pollution disparities, you can’t just focus on the electricity sector or renewables and count on the overall air pollution benefits addressing these real and persistent racial and ethnic disparities. You’ll need to look at other air pollution sources, as well as the underlying systemic factors that determine where plants are sited and where people live.”

    Selin’s co-authors are lead author and former MIT graduate student Minghao Qiu PhD ’21, now at Stanford University, and Corwin Zigler at the University of Texas at Austin.

    Turn-down service

    In their new study, the team looked for patterns between periods of wind power generation and the activity of fossil-fuel-based power plants, to see how regional electricity markets adjusted the output of power plants in response to influxes of renewable energy.

    “One of the technical challenges, and the contribution of this work, is trying to identify which are the power plants that respond to this increasing wind power,” Qiu notes.

    To do so, the researchers compared two historical datasets from the period between 2011 and 2017: an hour-by-hour record of energy output of wind turbines across the country, and a detailed record of emissions measurements from every fossil-fuel-based power plant in the U.S. The datasets covered each of seven major regional electricity markets, each market providing energy to one or multiple states.

    “California and New York are each their own market, whereas the New England market covers around seven states, and the Midwest covers more,” Qiu explains. “We also cover about 95 percent of all the wind power in the U.S.”

    In general, they observed that, in times when wind power was available, markets adjusted by essentially scaling back the power output of natural gas and sub-bituminous coal-fired power plants. They noted that the plants that were turned down were likely chosen for cost-saving reasons, as certain plants were less costly to turn down than others.

    The team then used a sophisticated atmospheric chemistry model to simulate the wind patterns and chemical transport of emissions across the country, and determined where and at what concentrations the emissions generated fine particulates and ozone — two pollutants that are known to damage air quality and human health. Finally, the researchers mapped the general demographic populations across the country, based on U.S. census data, and applied a standard epidemiological approach to calculate a population’s health cost as a result of their pollution exposure.
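
    As a hedged sketch of that final step, the snippet below converts modeled pollution changes into monetized health benefits per demographic group, using a linear concentration-response relationship and a value of statistical life. Every number is invented; the study’s epidemiological model is far more detailed.

    ```python
    # Toy allocation of air-quality health benefits across groups.
    # All values are invented, not the study's data.
    import numpy as np

    delta_pm25 = np.array([-0.4, -0.1, -0.7])  # ug/m3 change, three counties
    pop_total  = np.array([9e5, 4e5, 1.2e6])
    frac_disadvantaged = np.array([0.50, 0.20, 0.25])

    beta = 0.006        # assumed mortality response per ug/m3
    base_rate = 0.008   # assumed baseline mortality rate
    vsl = 9e6           # assumed value of statistical life, USD

    avoided = -(pop_total * base_rate * beta * delta_pm25)  # deaths avoided
    benefits = avoided * vsl

    share = (benefits * frac_disadvantaged).sum() / benefits.sum()
    print(f"Total benefits: ${benefits.sum()/1e9:.2f}B; "
          f"share to disadvantaged communities: {share:.0%}")
    ```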

    This analysis revealed that, in the year 2014, a general cost-saving approach to displacing fossil-fuel-based energy in times of wind energy resulted in $2 billion in health benefits, or savings, across the country. A smaller share of these benefits went to disadvantaged populations, such as communities of color and low-income communities, though this disparity varied by state.

    “It’s a more complex story than we initially thought,” Qiu says. “Certain population groups are exposed to a higher level of air pollution, and those would be low-income people and racial minority groups. What we see is, developing wind power could reduce this gap in certain states but further increase it in other states, depending on which fossil-fuel plants are displaced.”

    Tweaking power

    The researchers then examined how the pattern of emissions and the associated health benefits would change if they prioritized turning down different fossil-fuel-based plants in times of wind-generated power. They tweaked the emissions data to reflect several alternative scenarios: one in which the most health-damaging, polluting power plants are turned down first, and two others in which the plants producing the most sulfur dioxide and carbon dioxide, respectively, are first to reduce their output.
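
    A minimal sketch of that re-dispatching logic appears below: given an amount of wind generation to absorb, fossil plants are turned down in descending order of a chosen metric (health damage, SO2, or CO2 per megawatt-hour). The plant data are invented, not taken from the study.

    ```python
    # Sketch of priority-ordered curtailment. Plant data are invented.
    plants = [
        # (name, output_MWh, damage_$_per_MWh, so2_kg_per_MWh, co2_t_per_MWh)
        ("coal_A", 500.0, 120.0, 2.0, 1.0),
        ("coal_B", 400.0,  60.0, 1.1, 1.2),
        ("gas_C",  800.0,  15.0, 0.0, 0.4),
    ]

    def displace(plants, wind_mwh, key):
        """Turn plants down in descending `key` order until the wind is
        absorbed; return the health damages avoided (USD)."""
        saved = 0.0
        for name, output, damage, so2, co2 in sorted(plants, key=key, reverse=True):
            cut = min(output, wind_mwh)
            saved += cut * damage
            wind_mwh -= cut
            if wind_mwh <= 0:
                break
        return saved

    wind = 600.0  # MWh of wind generation to absorb
    by_health = displace(plants, wind, key=lambda p: p[2])  # damage-first
    by_co2    = displace(plants, wind, key=lambda p: p[4])  # CO2-first
    print(f"Avoided damages: health-first ${by_health:,.0f}, "
          f"CO2-first ${by_co2:,.0f}")
    ```

    Changing the sort key changes which plants are curtailed first, which is precisely the lever the alternative scenarios vary.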

    They found that while each scenario increased health benefits overall, and the first scenario in particular could quadruple health benefits, the original disparity persisted: Communities of color and low-income communities still experienced smaller health benefits than more well-off communities.

    “We got to the end of the road and said, there’s no way we can address this disparity by being smarter in deciding which plants to displace,” Selin says.

    Nevertheless, the study can help identify ways to improve the health of the general population, says Julian Marshall, a professor of environmental engineering at the University of Washington.

    “The detailed information provided by the scenarios in this paper can offer a roadmap to electricity-grid operators and to state air-quality regulators regarding which power plants are highly damaging to human health and also are likely to noticeably reduce emissions if wind-generated electricity increases,” says Marshall, who was not involved in the study.

    “One of the things that makes me optimistic about this area is, there’s a lot more attention to environmental justice and equity issues,” Selin concludes. “Our role is to figure out the strategies that are most impactful in addressing those challenges.”

    This work was supported, in part, by the U.S. Environmental Protection Agency and by the National Institutes of Health.

  • Coordinating climate and air-quality policies to improve public health

    As America’s largest investment to fight climate change, the Inflation Reduction Act positions the country to reduce its greenhouse gas emissions by an estimated 40 percent below 2005 levels by 2030. But as it edges the United States closer to achieving its international climate commitment, the legislation is also expected to yield significant — and more immediate — improvements in the nation’s health. If successful in accelerating the transition from fossil fuels to clean energy alternatives, the IRA will sharply reduce atmospheric concentrations of fine particulates known to exacerbate respiratory and cardiovascular disease and cause premature deaths, along with other air pollutants that degrade human health. One recent study shows that eliminating air pollution from fossil fuels in the contiguous United States would prevent more than 50,000 premature deaths and avoid more than $600 billion in health costs each year.

    While national climate policies such as those advanced by the IRA can simultaneously help mitigate climate change and improve air quality, their results may vary widely when it comes to improving public health. That’s because the potential health benefits associated with air quality improvements are much greater in some regions and economic sectors than in others. Those benefits can be maximized, however, through a prudent combination of climate and air-quality policies.

    Several past studies have evaluated the likely health impacts of various policy combinations, but their usefulness has been limited due to a reliance on a small set of standard policy scenarios. More versatile tools are needed to model a wide range of climate and air-quality policy combinations and assess their collective effects on air quality and human health. Now researchers at the MIT Joint Program on the Science and Policy of Global Change and MIT Institute for Data, Systems and Society (IDSS) have developed a publicly available, flexible scenario tool that does just that.

    In a study published in the journal Geoscientific Model Development, the MIT team introduces its Tool for Air Pollution Scenarios (TAPS), which can be used to estimate the likely air-quality and health outcomes of a wide range of climate and air-quality policies at the regional, sectoral, and fuel-based level. 

    “This tool can help integrate the siloed sustainability issues of air pollution and climate action,” says the study’s lead author William Atkinson, who recently served as a Biogen Graduate Fellow and research assistant at the IDSS Technology and Policy Program’s (TPP) Research to Policy Engagement Initiative. “Climate action does not guarantee a clean air future, and vice versa — but the issues have similar sources that imply shared solutions if done right.”

    The study’s initial application of TAPS shows that with current air-quality policies and near-term Paris Agreement climate pledges alone, short-term pollution reductions give way to long-term increases — given the expected growth of emissions-intensive industrial and agricultural processes in developing regions. More ambitious climate and air-quality policies could be complementary, each reducing different pollutants substantially to give tremendous near- and long-term health benefits worldwide.

    “The significance of this work is that we can more confidently identify the long-term emission reduction strategies that also support air quality improvements,” says MIT Joint Program Deputy Director C. Adam Schlosser, a co-author of the study. “This is a win-win for setting climate targets that are also healthy targets.”

    TAPS projects air quality and health outcomes based on three integrated components: a recent global inventory of detailed emissions resulting from human activities (e.g., fossil fuel combustion, land-use change, industrial processes); multiple scenarios of emissions-generating human activities between now and the year 2100, produced by the MIT Economic Projection and Policy Analysis model; and emissions intensity (emissions per unit of activity) scenarios based on recent data from the Greenhouse Gas and Air Pollution Interactions and Synergies model.
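
    Roughly speaking, the three components compose multiplicatively: projected emissions are base-year inventory values scaled by activity growth and by policy-dependent emissions intensity, per region, sector, and fuel. The sketch below illustrates that bookkeeping with invented numbers; it is not TAPS code.

    ```python
    # Toy emissions projection: base inventory x activity growth x intensity.
    # All numbers are invented placeholders.
    base_emissions = {            # base-year inventory, kt of pollutant
        ("industry", "coal"): 100.0,
        ("transport", "oil"): 50.0,
    }
    activity_growth = {           # activity relative to base year (assumed)
        ("industry", "coal"): 1.8,
        ("transport", "oil"): 1.3,
    }
    intensity_policy = {          # emissions per unit activity vs. base (assumed)
        ("industry", "coal"): 0.6,   # stricter controls
        ("transport", "oil"): 0.9,
    }

    future = {
        k: base_emissions[k] * activity_growth[k] * intensity_policy[k]
        for k in base_emissions
    }
    for k, v in future.items():
        print(k, f"{v:.1f} kt")
    print("Total:", f"{sum(future.values()):.1f} kt")
    ```

    In this toy example the total still rises despite lower intensities, echoing the study’s finding that activity growth can outpace near-term pollution controls.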

    “We see the climate crisis as a health crisis, and believe that evidence-based approaches are key to making the most of this historic investment in the future, particularly for vulnerable communities,” says Johanna Jobin, global head of corporate reputation and responsibility at Biogen. “The scientific community has spoken with unanimity and alarm that not all climate-related actions deliver equal health benefits. We’re proud of our collaboration with the MIT Joint Program to develop this tool that can be used to bridge research-to-policy gaps, support policy decisions to promote health among vulnerable communities, and train the next generation of scientists and leaders for far-reaching impact.”

    The tool can inform decision-makers about a wide range of climate and air-quality policies. Policy scenarios can be applied to specific regions, sectors, or fuels to investigate policy combinations at a more granular level, or to target short-term actions with high-impact benefits.

    TAPS could be further developed to account for additional emissions sources and trends.

    “Our new tool could be used to examine a large range of both climate and air quality scenarios. As the framework is expanded, we can add detail for specific regions, as well as additional pollutants such as air toxics,” says study supervising co-author Noelle Selin, professor at IDSS and the MIT Department of Earth, Atmospheric and Planetary Sciences, and director of TPP.    

    This research was supported by the U.S. Environmental Protection Agency and its Science to Achieve Results (STAR) program; Biogen; TPP’s Leading Technology and Policy Initiative; and TPP’s Research to Policy Engagement Initiative.

  • In-home wireless device tracks disease progression in Parkinson’s patients

    Parkinson’s disease is the fastest-growing neurological disease, now affecting more than 10 million people worldwide, yet clinicians still face huge challenges in tracking its severity and progression.

    Clinicians typically evaluate patients by testing their motor skills and cognitive functions during clinic visits. These semisubjective measurements are often skewed by outside factors — perhaps a patient is tired after a long drive to the hospital. More than 40 percent of individuals with Parkinson’s are never treated by a neurologist or Parkinson’s specialist, often because they live too far from an urban center or have difficulty traveling.

    In an effort to address these problems, researchers from MIT and elsewhere demonstrated an in-home device that can monitor a patient’s movement and gait speed, which can be used to evaluate Parkinson’s severity, the progression of the disease, and the patient’s response to medication.

    The device, which is about the size of a Wi-Fi router, gathers data passively using radio signals that reflect off the patient’s body as they move around their home. The patient does not need to wear a gadget or change their behavior. (A recent study, for example, showed that this type of device could be used to detect Parkinson’s from a person’s breathing patterns while sleeping.)

    The researchers used these devices to conduct a one-year at-home study with 50 participants. They showed that, by using machine-learning algorithms to analyze the troves of data they passively gathered (more than 200,000 gait speed measurements), a clinician could track Parkinson’s progression and medication response more effectively than they would with periodic, in-clinic evaluations.

    “By being able to have a device in the home that can monitor a patient and tell the doctor remotely about the progression of the disease, and the patient’s medication response so they can attend to the patient even if the patient can’t come to the clinic — now they have real, reliable information — that actually goes a long way toward improving equity and access,” says senior author Dina Katabi, the Thuan and Nicole Pham Professor in the Department of Electrical Engineering and Computer Science (EECS), and a principal investigator in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the MIT Jameel Clinic.

    The co-lead authors are EECS graduate students Yingcheng Liu and Guo Zhang. The research is published today in Science Translational Medicine.

    A human radar

    This work utilizes a wireless device previously developed in the Katabi lab that analyzes radio signals that bounce off people’s bodies. It transmits signals that use a tiny fraction of the power of a Wi-Fi router — these super-low-power signals don’t interfere with other wireless devices in the home. While radio signals pass through walls and other solid objects, they are reflected off humans due to the water in our bodies.  

    This creates a “human radar” that can track the movement of a person in a room. Radio waves always travel at the same speed, so the length of time it takes the signals to reflect back to the device indicates how the person is moving.
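
    A quick worked example of that time-of-flight arithmetic, with invented timings: the range to the person is the round-trip travel time multiplied by the speed of light and halved, and gait speed follows from how the range changes between measurements.

    ```python
    # Time-of-flight range and gait speed, with invented round-trip times.
    C = 3.0e8  # speed of light, m/s

    def range_m(round_trip_s):
        """Distance to the reflecting person, in meters."""
        return C * round_trip_s / 2.0

    # Two reflections 0.5 s apart as a person walks away from the device
    t0, t1 = 26.7e-9, 30.0e-9          # round-trip times, seconds (assumed)
    r0, r1 = range_m(t0), range_m(t1)
    speed = (r1 - r0) / 0.5
    print(f"range: {r0:.2f} m -> {r1:.2f} m, gait speed ~ {speed:.2f} m/s")
    ```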

    The device incorporates a machine-learning classifier that can pick out the precise radio signals reflected off the patient even when there are other people moving around the room. Advanced algorithms use these movement data to compute gait speed — how fast the person is walking.

    Because the device operates in the background and runs all day, every day, it can collect a massive amount of data. The researchers wanted to see if they could apply machine learning to these datasets to gain insights about the disease over time.

    They gathered 50 participants, 34 of whom had Parkinson’s, and conducted a one-year study of in-home gait measurements. Through the study, the researchers collected more than 200,000 individual measurements, which they averaged to smooth out variability due to conditions irrelevant to the disease. (For example, a patient may hurry up to answer an alarm or walk slower when talking on the phone.)

    They used statistical methods to analyze the data and found that in-home gait speed can be used to effectively track Parkinson’s progression and severity. For instance, they showed that gait speed declined almost twice as fast for individuals with Parkinson’s, compared to those without. 
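
    The aggregation idea can be sketched as follows: noisy per-walk speeds are averaged into daily means, and a per-participant linear fit estimates the rate of decline. The data here are simulated to mimic the reported pattern, not the study’s measurements.

    ```python
    # Simulated sketch: average noisy gait measurements into daily means,
    # then fit a linear trend to estimate decline. Not study data.
    import numpy as np

    rng = np.random.default_rng(1)
    days = np.arange(365)

    def simulate(decline_per_year):
        daily_true = 1.0 + decline_per_year * days / 365.0
        # ~30 noisy walk measurements per day, averaged into a daily mean
        walks = daily_true[:, None] + rng.normal(0, 0.15, size=(365, 30))
        return walks.mean(axis=1)

    for label, decline in [("Parkinson's", -0.04), ("control", -0.02)]:
        daily = simulate(decline)
        slope = np.polyfit(days, daily, 1)[0] * 365   # m/s per year
        print(f"{label}: fitted decline {slope:+.3f} m/s per year")
    ```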

    “Monitoring the patient continuously as they move around the room enabled us to get really good measurements of their gait speed. And with so much data, we were able to perform aggregation that allowed us to see very small differences,” Zhang says.

    Better, faster results

    Drilling down on these variabilities offered some key insights. For instance, the researchers showed that daily fluctuations in a patient’s walking speed correspond with how they are responding to their medication — walking speed may improve after a dose and then begin to decline after a few hours, as the medication impact wears off.

    “This enables us to objectively measure how your mobility responds to your medication. Previously, this was very cumbersome to do because this medication effect could only be measured by having the patient keep a journal,” Liu says.

    A clinician could use these data to adjust medication dosage more effectively and accurately. This is especially important since drugs used to treat disease symptoms can cause serious side effects if the patient receives too much.

    The researchers were able to demonstrate statistically significant results regarding Parkinson’s progression after studying 50 people for just one year. By contrast, an often-cited study by the Michael J. Fox Foundation involved more than 500 individuals and monitored them for more than five years, Katabi says.

    “For a pharmaceutical company or a biotech company trying to develop medicines for this disease, this could greatly reduce the burden and cost and speed up the development of new therapies,” she adds.

    Katabi credits much of the study’s success to the dedicated team of scientists and clinicians who worked together to tackle the many difficulties that arose along the way. For one, they began the study before the Covid-19 pandemic, so team members initially visited people’s homes to set up the devices. When that was no longer possible, they developed a user-friendly phone app to remotely help participants as they deployed the device at home.

    Through the course of the study, they learned to automate processes and reduce effort, especially for the participants and clinical team.

    This knowledge will prove useful as they look to deploy devices in at-home studies of other neurological disorders, such as Alzheimer’s, ALS, and Huntington’s. They also want to explore how these methods could be used, in conjunction with other work from the Katabi lab showing that Parkinson’s can be diagnosed by monitoring breathing, to collect a holistic set of markers that could diagnose the disease early and then be used to track and treat it.

    “This radio-wave sensor can enable more care (and research) to migrate from hospitals to the home where it is most desired and needed,” says Ray Dorsey, a professor of neurology at the University of Rochester Medical Center, co-author of Ending Parkinson’s, and a co-author of this research paper. “Its potential is just beginning to be seen. We are moving toward a day where we can diagnose and predict disease at home. In the future, we may even be able to predict and ideally prevent events like falls and heart attacks.”

    This work is supported, in part, by the National Institutes of Health and the Michael J. Fox Foundation.

  • Artificial intelligence predicts patients’ race from their medical images

    The miseducation of algorithms is a critical problem; when artificial intelligence mirrors unconscious thoughts, racism, and biases of the humans who generated these algorithms, it can lead to serious harm. Computer programs, for example, have wrongly flagged Black defendants as twice as likely to reoffend as white defendants. When an AI used cost as a proxy for health needs, it falsely named Black patients as healthier than equally sick white ones, because less money was spent on them. Even AI used to write a play relied on harmful stereotypes for casting.

    Removing sensitive features from the data seems like a viable tweak. But what happens when it’s not enough? 

    Examples of bias in natural language processing are boundless — but MIT scientists have investigated another important, largely underexplored modality: medical images. Using both private and public datasets, the team found that AI can accurately predict self-reported race of patients from medical images alone. Using imaging data of chest X-rays, limb X-rays, chest CT scans, and mammograms, the team trained a deep learning model to identify race as white, Black, or Asian — even though the images themselves contained no explicit mention of the patient’s race. This is a feat even the most seasoned physicians cannot do, and it’s not clear how the model was able to do this. 

    In an attempt to tease out and make sense of the enigmatic “how” of it all, the researchers ran a slew of experiments. To investigate possible mechanisms of race detection, they looked at variables like differences in anatomy, bone density, image resolution, and many more; the models still detected race from chest X-rays with high accuracy. “These results were initially confusing, because the members of our research team could not come anywhere close to identifying a good proxy for this task,” says paper co-author Marzyeh Ghassemi, an assistant professor in the MIT Department of Electrical Engineering and Computer Science and the Institute for Medical Engineering and Science (IMES), who is an affiliate of the Computer Science and Artificial Intelligence Laboratory (CSAIL) and of the MIT Jameel Clinic. “Even when you filter medical images past where the images are recognizable as medical images at all, deep models maintain a very high performance. That is concerning because superhuman capacities are generally much more difficult to control, regulate, and prevent from harming people.”

    In a clinical setting, algorithms can help tell us whether a patient is a candidate for chemotherapy, dictate the triage of patients, or decide if a movement to the ICU is necessary. “We think that the algorithms are only looking at vital signs or laboratory tests, but it’s possible they’re also looking at your race, ethnicity, sex, whether you’re incarcerated or not — even if all of that information is hidden,” says paper co-author Leo Anthony Celi, principal research scientist in IMES at MIT and associate professor of medicine at Harvard Medical School. “Just because you have representation of different groups in your algorithms, that doesn’t guarantee it won’t perpetuate or magnify existing disparities and inequities. Feeding the algorithms with more data with representation is not a panacea. This paper should make us pause and truly reconsider whether we are ready to bring AI to the bedside.” 

    The study, “AI recognition of patient race in medical imaging: a modeling study,” was published in Lancet Digital Health on May 11. Celi and Ghassemi wrote the paper alongside 20 other authors in four countries.

    To set up the tests, the scientists first showed that the models were able to predict race across multiple imaging modalities, various datasets, and diverse clinical tasks, as well as across a range of academic centers and patient populations in the United States. They used three large chest X-ray datasets, and tested the model on an unseen subset of the dataset used to train the model and a completely different one. Next, they trained the racial identity detection models for non-chest X-ray images from multiple body locations, including digital radiography, mammography, lateral cervical spine radiographs, and chest CTs to see whether the model’s performance was limited to chest X-rays. 
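
    As a hedged sketch of this kind of experiment (not the authors’ code), the snippet below fine-tunes a standard CNN to classify self-reported race from chest X-rays and evaluates it on both a held-out subset and an external dataset. The data loaders are assumed to exist and are stubbed out.

    ```python
    # Sketch: fine-tune a CNN to predict self-reported race from X-rays.
    # Illustrative only; datasets/loaders are assumed, not provided.
    import torch
    import torch.nn as nn
    from torchvision import models

    model = models.densenet121(weights="DEFAULT")
    model.classifier = nn.Linear(model.classifier.in_features, 3)  # Asian/Black/white

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    def train_epoch(loader):
        model.train()
        for images, labels in loader:   # grayscale X-rays tiled to 3 channels
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()

    @torch.no_grad()
    def evaluate(loader):
        model.eval()
        correct = total = 0
        for images, labels in loader:
            correct += (model(images).argmax(1) == labels).sum().item()
            total += labels.numel()
        return correct / total

    # train_epoch(internal_train_loader)      # assumed loader
    # print(evaluate(internal_test_loader))   # held-out subset
    # print(evaluate(external_test_loader))   # completely different dataset
    ```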

    The team covered many bases in an attempt to explain the model’s behavior: differences in physical characteristics between different racial groups (body habitus, breast density), disease distribution (previous studies have shown that Black patients have a higher incidence of health issues like cardiac disease), location-specific or tissue-specific differences, effects of societal bias and environmental stress, the ability of deep learning systems to detect race when multiple demographic and patient factors were combined, and whether specific image regions contributed to recognizing race.

    What emerged was truly staggering: the models’ ability to predict race from diagnostic labels alone was much lower than that of the chest X-ray image-based models, suggesting the image models were not simply reading race off disease patterns.

    For example, the bone density test used images where the thicker part of the bone appeared white, and the thinner part appeared more gray or translucent. Scientists assumed that since Black people generally have higher bone mineral density, the color differences helped the AI models to detect race. To cut that off, they clipped the images with a filter so the model couldn’t see color differences. It turned out that cutting off the color supply didn’t faze the model — it could still accurately predict race. (The “Area Under the Curve” value, a measure of the accuracy of a quantitative diagnostic test, was 0.94–0.96.) As such, the learned features of the model appeared to rely on all regions of the image, meaning that controlling this type of algorithmic behavior presents a messy, challenging problem.
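
    The two ingredients of that experiment can be sketched directly: an intensity-clipping filter that removes absolute-brightness cues such as bone density, and the AUC score used to report accuracy. The arrays below are toy stand-ins, not study data.

    ```python
    # Intensity clipping plus AUC scoring, with toy data.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    def clip_intensity(image, low=0.2, high=0.8):
        """Saturate pixel values so absolute brightness carries no signal."""
        return np.clip(image, low, high)

    image = np.random.default_rng(0).uniform(0, 1, size=(224, 224))
    filtered = clip_intensity(image)

    # AUC: probability a random positive outranks a random negative
    y_true  = np.array([1, 1, 1, 0, 0, 0])
    y_score = np.array([0.85, 0.70, 0.90, 0.10, 0.75, 0.30])
    print(f"AUC = {roc_auc_score(y_true, y_score):.2f}")
    ```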

    The scientists acknowledge the limited availability of racial identity labels, which caused them to focus on Asian, Black, and white populations, and that their ground truth was a self-reported detail. Forthcoming work will include potentially isolating different signals before image reconstruction because, as with the bone density experiments, they could not account for residual bone tissue in the images.

    Notably, other work by Ghassemi and Celi led by MIT student Hammaad Adam has found that models can also identify patient self-reported race from clinical notes even when those notes are stripped of explicit indicators of race. Just as in this work, human experts are not able to accurately predict patient race from the same redacted clinical notes.

    “We need to bring social scientists into the picture. Domain experts, which are usually the clinicians, public health practitioners, computer scientists, and engineers are not enough. Health care is a social-cultural problem just as much as it’s a medical problem. We need another group of experts to weigh in and to provide input and feedback on how we design, develop, deploy, and evaluate these algorithms,” says Celi. “We need to also ask the data scientists, before any exploration of the data, are there disparities? Which patient groups are marginalized? What are the drivers of those disparities? Is it access to care? Is it from the subjectivity of the care providers? If we don’t understand that, we won’t have a chance of being able to identify the unintended consequences of the algorithms, and there’s no way we’ll be able to safeguard the algorithms from perpetuating biases.”

    “The fact that algorithms ‘see’ race, as the authors convincingly document, can be dangerous. But an important and related fact is that, when used carefully, algorithms can also work to counter bias,” says Ziad Obermeyer, associate professor at the University of California at Berkeley, whose research focuses on AI applied to health. “In our own work, led by computer scientist Emma Pierson at Cornell, we show that algorithms that learn from patients’ pain experiences can find new sources of knee pain in X-rays that disproportionately affect Black patients — and are disproportionately missed by radiologists. So just like any tool, algorithms can be a force for evil or a force for good — which one depends on us, and the choices we make when we build algorithms.”

    The work is supported, in part, by the National Institutes of Health.